Chinese Mobile App Encryption is Suspiciously Awful
The latest edition of the Seriously Risky Business cybersecurity newsletter, now on Lawfare.
Why OpenAI’s Corporate Structure Matters to AI Development
OpenAI's potential corporate shift from its “capped-profit” model may conflict with its AGI-for-humanity mission.
AI Agents Must Follow the Law
Before entrusting AI agents with government power, it’s essential to verify that they’ll obey the law, even when instructed not to.
Lawfare Daily: Cullen O’Keefe on the Impending Wave of AI Agents
What are AI agents, and how do we ensure they operate safely?
ChinaTalk: Ezra, Derek, and Dan Wang on Abundance and China
1,000 AI Bills: Time for Congress to Get Serious About Preemption
If this growing patchwork of parochial regulatory policies takes root, it could undermine U.S. AI innovation.
Lawfare Daily: Ben Brooks on the Rise of Open Source AI
What are the ramifications of the shift to open source AI?
It's Like Signal, but Dumb
The latest edition of the Seriously Risky Business cybersecurity newsletter, now on Lawfare.
From Budapest to Hanoi: Comparing the COE and UN Cybercrime Conventions
Will the U.S. government sign on to the new and controversial UN Cybercrime Convention?
ChinaTalk: America's R&D Reckoning
Lawfare Daily: Digital Forgeries, Real Felonies: Inside the TAKE IT DOWN Act
What is the TAKE IT DOWN Act?
AI-Enhanced Social Engineering Will Reshape the Cyber Threat Landscape
The proliferation of artificial intelligence tools enables bad actors to conduct deceptive attacks more cheaply, quickly, and effectively.