Cybersecurity & Tech

Lawfare Daily: Josh Batson on Understanding How and Why AI Works

Kevin Frazier, Josh Batson, Jen Patja
Friday, May 30, 2025, 7:00 AM
Discussing the significance of interpretability and explainability. 

Published by The Lawfare Institute
in Cooperation With
Brookings

Josh Batson, a research scientist at Anthropic, joins Kevin Frazier, AI Innovation and Law Fellow at Texas Law and Senior Editor at Lawfare, to break down two research papers—“Mapping the Mind of a Large Language Model” and “Tracing the thoughts of a large language model”—that uncovered important insights into how advanced generative AI models work. The two discuss those findings as well as the broader significance of interpretability and explainability research.

To receive ad-free podcasts, become a Lawfare Material Supporter at www.patreon.com/lawfare. You can also support Lawfare by making a one-time donation at https://givebutter.com/lawfare-institute.


Kevin Frazier is an AI Innovation and Law Fellow at UT Austin School of Law and Contributing Editor at Lawfare.
Josh Batson is a research scientist at Anthropic.
Jen Patja is the editor and producer of the Lawfare Podcast and Rational Security. She currently serves as the Co-Executive Director of Virginia Civics, a nonprofit organization that empowers the next generation of leaders in Virginia by promoting constitutional literacy, critical thinking, and civic engagement. She is the former Deputy Director of the Robert H. Smith Center for the Constitution at James Madison's Montpelier and has been a freelance editor for over 20 years.
