Cybersecurity & Tech

AI Won’t Automatically Make Legal Services Cheaper

Justin Curl, Sayash Kapoor, Arvind Narayanan
Thursday, February 12, 2026, 5:00 AM
Three bottlenecks between AI capability and access to justice.

Despite widespread predictions that artificial intelligence (AI) will transform legal services and expand access to justice, advanced AI will not, by default, help consumers achieve desired legal outcomes at lower costs. Three bottlenecks stand between AI capability advances and more accessible legal services.

First, unauthorized practice of law regulations and entity-based restrictions may prevent consumers from accessing AI capabilities and deter experimentation in how legal services are delivered. Second, the adversarial structure of American litigation means that when both parties adopt productivity-enhancing technologies, competitive equilibria simply shift upward. The history of discovery digitization is instructive: Rather than reducing costs, parties exploited the explosion of digital documents to impose greater burdens on opponents. Third, even where AI dramatically reduces the cost of legal tasks, the speed of human decision-makers—judges resolving disputes, lawyers understanding contracts—places an upper limit on acceleration without sacrificing adequate oversight.

The legal industry’s response will determine whether AI improves access and efficiency or merely makes producing legal work cheaper without improving the outcomes clients actually care about. This report surveys reforms addressing each bottleneck, including regulatory sandboxes, judicial case management innovations, and expanding arbitration options.

You can listen to a conversation with the author here.

You can read the paper here or below:



Justin Curl is a J.D. candidate at Harvard Law School currently serving as the Technology Law & Policy Advisor to the New Mexico Attorney General. He is interested in technology and public law, with a research agenda focused on algorithmic bias (14th Amendment), digital searches (4th Amendment), and judicial use of AI. Previously, he was a Schwarzman Scholar at Tsinghua University and earned a B.S.E. in Computer Science magna cum laude from Princeton University.
Sayash Kapoor is a computer science Ph.D. candidate at Princeton University's Center for Information Technology Policy and a co-author of the book "AI Snake Oil."
Arvind Narayanan is a professor of computer science at Princeton University and the director of the Center for Information Technology Policy. He is a co-author of the book "AI Snake Oil," the essay "AI as Normal Technology," and a newsletter of the same name, which is read by over 60,000 researchers, policymakers, journalists, and AI enthusiasts. He previously co-authored two widely used computer science textbooks: "Bitcoin and Cryptocurrency Technologies" and "Fairness in Machine Learning." Narayanan led the Princeton Web Transparency and Accountability Project to uncover how companies collect and use our personal information. His work was among the first to show how machine learning reflects cultural stereotypes. Narayanan was named to TIME's inaugural list of the 100 most influential people in AI. He is a recipient of the Presidential Early Career Award for Scientists and Engineers (PECASE).