Algorithmic Optimism, Democratic Reality
A review of Bruce Schneier and Nathan E. Sanders, “Rewiring Democracy: How AI Will Transform Our Politics, Government, and Citizenship” (The MIT Press, 2025).
Published by The Lawfare Institute
It is telling that when avowed artificial intelligence (AI) optimists write a book about how AI can have a significant positive impact on democracy, they want their readers to know they did not use AI to help them write it. That is exactly what Bruce Schneier and Nathan E. Sanders do in “Rewiring Democracy”: “While we see many useful applications of AI, we wrote this book ourselves. All the ideas and words are our own or stemming from those we have cited or acknowledged.” It’s a revealing disclaimer. If AI holds as much promise to enhance productivity, creativity, and fairness as the authors insist, why not enlist it in the act of authorship? Implicitly, the disclaimer suggests that despite AI’s speed and scope, there remains something distinctly human—perhaps even superior—about thought unassisted by AI.
Schneier and Sanders clearly have the credentials to write this book. Schneier has written over a dozen volumes, testified before Congress, and served on government committees. A leading figure in cryptography and cybersecurity for several decades, Schneier has understandably been dubbed a “security guru” by The Economist. Sanders, who has a doctorate in astronomy and astrophysics, has helped build and lead data science teams at WarnerMedia and Legendary Pictures, built applications for participatory oversight of environmental regulation, and developed statistical methods for public health analysis. Both authors have significant experience in industry, in academia, and in working directly with communities trying to navigate technological challenges.
“Rewiring Democracy” is an important book. It provides an encyclopedic compilation of the ways in which AI is currently being used and can be used by the executive, legislative, and judicial branches. Part One introduces us to AI and democracy. Part Two discusses the ways politicians are using AI to win elections. Parts Three through Five address how each of the three branches of government can use AI. Part Six looks at how citizens can use AI to hold government accountable. And Part Seven—sadly the book’s shortest part—provides guidance on how to make AI safe for democracy.
Much of the book follows a repeated pattern: First, Schneier and Sanders uncritically lay out pro-democratic ways that people and governments are using or could use AI. Then they point out the ways in which those same uses can be turned on their head to serve undemocratic goals. Schneier and Sanders helpfully describe the general advantages of AI over humans as coming in “four dimensions: speed, scale, scope, and sophistication.” But following their regular pattern, those benefits are followed by corresponding harms: “AI affords governments the capability to increase the speed, scale, scope, and sophistication of bias, discrimination, and exploitation.” Concentrating on the United States, but citing examples from Argentina to Sweden, the book’s short, easily digestible chapters are intended for a broad audience, including AI novices. Anyone working in policy-heavy professions will find the book fascinating. Anyone looking to develop the next great AI products will find a wealth of ideas.
A good example of the book’s pattern of juxtaposing pro-democratic and anti-democratic uses of the same AI tools appears in Chapter 17, “Negotiating Legislation.” Schneier and Sanders walk through how AI might help legislators not only write bills but also determine to whom lobbyists should direct campaign donations to help get the bills passed. They envision a world in which each legislator has “their own AI negotiation assistant” to help each legislator negotiate with many other legislators at the same time. AI could then help with revision of the legislative language to ensure that it emerges free from contradictory provisions. They describe how some governments have used AI to review large bodies of legal code to make them more concise and understandable. Yet, after extensively describing all the benefits AI could bring to the legislative process, they end the chapter with the reality: “AI will tend to concentrate political influence among the powerful elite unless democracies take steps to inhibit that outcome.” In other words, AI-assisted lawmaking will likely be the status quo on steroids.
The authors are generally charitable toward the technology itself. They catalog products and deployments without evaluating how well they actually perform. Much of what they describe hovers between aspiration and experiment, and their belief in the promise of AI—for example, when describing how it could be used to crystallize legislative intent for judges—outpaces what the technology can actually deliver given the complexity of human decision-making.
It is clear that AI can do more, faster, but to what end if the results are not better? Schneier and Sanders recognize that each AI-enabled product is a reflection of its creators. Just as human beings are biased, the AI that human beings create will be biased: “AIs may be more or less biased than their human counterparts; they may be biased in different ways, and to different degrees.”
The authors touch on how AI is likely to eliminate large numbers of jobs: “Unfortunately, history has taught us time and again that workers tend not to benefit when productivity increases. Often, those gains are captured exclusively by a wealthy few.” In their characteristically detached tone, they are describing the obscene wealth that the most prominent AI leaders are currently realizing, while many workers are concerned about how AI could take their jobs.
The authors also touch on how AI might solve some widespread problems in reaching compromises. In their chapter on arbitration, for example, they describe a world where whenever businesses “have a disagreement, no matter how minor, they could engage an AI arbitrator. Each could relate their side of the story, and obtain a resolution seconds later. ... The involved could do this a dozen times a week if they needed to.” But, as with everything else involving AI, this simply becomes a game where the winner is decided by the person who trained the AI: “Any company that provides AI arbitration services would have the same incentives to favor powerful entities.” Their solution is to allow the loser to appeal ... to a human.
Where the book struggles, and don’t we all, is in prescribing remedies for the danger that AI will be used to make the distribution of power and wealth ever more unequal. The authors offer only faint praise of the EU AI Act without digging into how it is one of the only substantive starting points the world has to regulate AI companies: “The EU should be lauded for acting where other governments have not, but criticized for its weak protections for human rights and ample loopholes for companies to do as they please.” Does anyone really believe that the U.S. government is going to enact anything even remotely as comprehensive as the EU AI Act? And given that the book appeared after the creation of the Department of Government Efficiency (DOGE) and President Trump’s executive order on AI, both mentioned only in passing, some of the authors’ recommendations already seem obsolete.
In Part Seven, Schneier and Sanders’s first principle for truly democratic AI—“AI must be broadly capable”—seems like an odd place to start given that everyone is already building broadly. Both established companies and start-ups are trying to make AI capable of doing everything. Their second principle, making AI “widely available,” seems right but is just as unlikely to be achieved as it has been with so many other resources in America, where the gulf between haves and have-nots widens by the day. Twenty-six million Americans do not even have access to high-speed broadband. Given that the other 314 million Americans do have access, isn’t it already widely available? It would be more meaningful if Schneier and Sanders recommended something like universal availability.
Their third suggested solution, “transparency,” has become cliché. What is the point of making something transparent when we would all fail the open book test—including the computer science engineers who built it: “Even engineers who build AI systems can’t explain why they produce the output they do.” Schneier and Sanders clearly understand the kind of transparency that would actually be helpful, as they describe earlier in the book how decisions must be made as systems are built: Knowing that nothing is perfect, do we build systems giving more weight to avoiding false positives or false negatives? But simply calling for transparency without explaining the specifics does not seriously move the conversation forward. What does transparency look like in a competitive market? Does all the code and every data source need to be available for everyone to study and test?
The next three principles, “meaningfully responsive,” “actively debiased,” and “reasonably secure,” are oddly hedged with wiggle-room adverbs. For example, the authors never define what it means to be “actively” debiased. It is unclear why they don’t just suggest that AI be “responsive,” “debiased,” and “secure.” Their final principle is that AI must be “nonexploitative”: It must not steal others’ work for training data or underpay workers around the world to annotate images. But hasn’t this ship largely sailed, as shown by AI companies’ judgments that the potential profit is so great that billion-dollar settlements are entered into without a serious fight?
The authors rightfully note that it is tech oligarchs who are really making all the decisions (while claiming that they are building responsibly) and how dangerous and undemocratic this is. But isn’t this true of all technology? “[P]ast technologies like the internet have hardly dented the competitive advantage that wealth and privilege provide to candidates, so there is reason to be skeptical that AI will.” Put even more succinctly later, they write: “AI allows all parties to use their resources more efficiently, and this especially benefits those with the most resources.”
But what is a realistic solution? These tech oligarchs are taking the world screaming down a double-black-diamond ski slope with no helmet, goggles, or poles. They speak out of both sides of their mouths. They claim they want federal regulation when they know that Congress will never agree on regulation. And when states attempt to fill the void of federal AI regulation with their own, in an attempt to make sure that companies build responsibly, U.S. tech leaders say that these state regulations will hand victory to China. They know the Trump administration not only has no interest in regulating them but is aggressively trying to prevent anyone from doing so. The tech oligarchs clearly don’t want anyone to create regulations pushing them to act responsibly. Their argument that regulation will lead to China’s winning the AI race is hyperbolic, at best. The best AI for humankind will be built responsibly, from accurate data and truthful information.
Ultimately, Schneier and Sanders offer less of a road map than a mirror. They show us how optimistic visions of technological progress revert to human struggles over power, fairness, and truth. AI will not save democracy unless democracy first saves itself—by insisting on accuracy, accountability, and moral courage from those who wield both algorithms and authority. In the end, the book is an urgent warning: Governments must make hard choices about AI governance, or else those choices will be made for them by private actors. Given the accelerating pace of AI adoption, the luxury of delay may already be gone.
