Mapping the Regulatory Landscape for New Technologies

Sarah Kreps
Sunday, May 26, 2024, 9:00 AM
Managing the risks and opportunities from AI, quantum computing, and other emerging technologies will involve more than congressional legislation.
President Joe Biden meets with a bipartisan group of senators to discuss artificial intelligence at the White House on Oct. 31, 2023. Photo credit: Official White House Photo by Adam Schultz via Flickr/Public Domain.

Published by The Lawfare Institute in Cooperation With Brookings

Editor’s Note: The tremendous potential of artificial intelligence has raised fears that it could lead to severe privacy invasions, subvert democratic processes, and trigger accidental wars, among other dangers. In such circumstances, calls for congressional action and regulation are inevitable. Cornell University’s Sarah Kreps argues that Congress is often ill-suited to regulate the tech industry and that there are other, often better, ways to reduce the risks that AI and other fast-changing technologies may pose.

Daniel Byman

***

In 1950, mathematician Alan Turing wrote a paper that investigated the possibility of machines making decisions like humans. He led with a question—“can machines think?”—and proceeded to unpack the meaning of both “machine” and “think.” He gave an example of a request that a human would pose to a machine: “Q: Please write me a sonnet on the subject of the Forth Bridge.” And he imagined the machine demurring: “A: Count me out on this one. I never could write poetry.”

In 2022, OpenAI released a tool that could write an elegant sonnet—or 100 different sonnets—on the Forth Bridge. Public uptake was swift. ChatGPT became the fastest app to reach 100 million users, even beating TikTok in its ascent.

Criticisms of the pace of congressional action on artificial intelligence (AI) regulation were also swift. Brookings Institution fellow Darrell West said that Congress was “way behind on AI regulation.” Others chimed in and observed that “the fact remains that Congress has yet to pass any legislation on AI, allowing the U.S. to cede the initiative on this issue to the European Union (EU), which recently agreed on the AI Act, the world’s most comprehensive AI legislation.”

The gradual and then sudden rise of AI, and the debate over how quickly Congress should act (or whether it should act at all), have raised important questions about tech policy: What is the role of Congress in regulating new technologies? If not Congress, who does regulate, and what counts as “enough” regulation?

The answer is that it depends. But at the least, the assumption that Congress bears the responsibility to regulate new technologies is inconsistent with its expertise and incentives and, not surprisingly, with the experience and history of tech regulation. Revisiting technologies from nuclear weapons through social media and crypto to AI shows that only rarely, when technologies (like nuclear weapons) are so novel that they lack any statutory precedent, does Congress pass new laws to regulate new technology. But inaction on lawmaking does not mean that regulation languishes. The past and present of tech policy show that the conventional wisdom putting regulatory responsibility squarely at the feet of Congress ignores both the incentives and the practices of a larger set of regulatory actors—scientists and innovators themselves, executive branch agencies, the media, investors, and even the public—who influence, rein in, and moderate the arc of new technologies.

Regulation via Fire Alarms

Scholars have argued that Congress has long engaged in “fire alarm oversight,” setting up procedures and practices that allow the public and interest groups to evaluate administrative decisions and pull the fire alarm when they see evidence of executive overreach. With reelection to pursue and multiple, complex issues to cover, Congress prefers to respond rather than patrol: “Instead of sniffing for fires, Congress places fire-alarm boxes on street corners, builds neighborhood fire houses, and sometimes dispatches its own hook-and-ladder in response to an alarm.”

Part of the logic for this approach to tech policy is that the expertise gap between Congress and technical experts is enormous, making it difficult to legislate wisely. New technology is the product of Ph.D.s whose expertise is often narrow but advanced. As Nobel physics laureate Richard Feynman said in a lecture, “I think I can safely say that nobody understands quantum mechanics.” Several decades later, at a House hearing on quantum computing, Rep. Adam Kinzinger (R-Ill.) told a witness, “I can understand about 50 percent of the things you say.” In the 115th Congress, only 7 percent of members reported having a STEM background.

In addition, Congress has not prioritized legislation because the public, in general, tends not to pull the fire alarm on tech issues. For the past two decades, Americans have most often cited the economy as their top concern when Gallup has asked about the country’s “most important problem.” This has been followed by government (polarization), the situation in Iraq, COVID-19, unemployment, terrorism, and immigration—nowhere has the public named anything tech-related. The public tends not to have the specialized expertise, let alone the bandwidth, to adjudicate scientific risk the way it might weigh policies, like education, that directly affect families.

Congress’s lack of action isn’t necessarily a cause for concern. Even if technologies are new, their regulatory frameworks can often rely on existing statutes, which means new laws are not always needed for new technologies. Nuclear weapons were a truly novel technology for which there were no relevant existing laws, which is why Congress passed the 1946 Atomic Energy Act, placing nuclear weapons under civilian control and establishing the Atomic Energy Commission to manage the new technology. But AI and other new technologies are better seen as new wine that can fit in older bottles.

Elected officials, then, tend to have few incentives to take up and use political capital to pass legislation on new technology, which explains why they rarely do. Mark Zuckerberg has repeatedly testified about social media content moderation on Capitol Hill, but Congress has never passed federal legislation on this issue.

Members of Congress carp about the 1996 Communications Decency Act when it comes to moderating a technology (social media) that did not exist in anything close to its current form in 1996, but the law has proved more resilient than, and preferable to, any alternative. The 1976 Copyright Act, which certainly predates generative AI, has emerged as the legal framework for adjudicating fair use of prior written content, with four statutory factors used to determine whether the training process for AI models infringes copyright.

In short, Congress has neither the knowledge nor the incentive to create new laws, not least because in many cases older laws can apply to new technologies. Even so, the absence of new laws does not mean that new forms of regulation do not emerge. One just has to look elsewhere.

Non-legislative Actors and Their Regulatory Reach

Tech opens up far more direct nongovernment regulatory channels because many technologies, even those with a significant national security focus or applications, like AI, are consumer-facing. The public—individuals and investors—plays a role when it interacts directly with these technologies, acts as a beta tester for nascent products, and provides feedback that developers reject at their peril. But the public, like legislators, can also have an expertise or awareness gap, and specialized journalists, from bloggers to organized media outlets, play a crucial role as interlocutors who surface tech missteps and overreach. Their reporting percolates through to the public and to legislators, who in turn put pressure back on developers. Finally, the developers themselves are key actors in this regulatory picture. They have financial incentives to innovate aggressively or naïvely in ways that can lead to harm (for example, foreign election interference via social media), but they can also innovate responsibly, sometimes pushed from within the company, sometimes through breakaway firms, and sometimes by learning from previous mistakes.

The public uses social media platforms like Facebook, and Meta is a publicly traded company, which creates strong incentives for the company to be responsive to public concerns. Even if it does not initially release products compatible with user preferences, it is subject to market pressure not to dismiss those preferences. In 2007, Facebook released a feature called Beacon that allowed the company to monitor and record user activity on third-party sites. Facebook would not only collect data on that activity but also send a news alert to users’ friends about purchases a user had made, without including an opt-out. The feature triggered an online petition (signed by almost 70,000 people) objecting to Beacon, class-action lawsuits, and demands that users be able to opt out of the program.

A couple of weeks later, Facebook established a privacy control that allowed users to turn Beacon off completely, and Zuckerberg changed the policy to opt-in and posted an apology on the Facebook site. “We’ve made a lot of mistakes building this feature, but we’ve made even more with how we’ve handled them,” he wrote. “We simply did a bad job with this release, and I apologize for it …. I’m not proud of the way we’ve handled the situation and I know we can do better.” As part of the class-action settlement, Facebook not only shuttered Beacon but also paid $9.5 million into a settlement fund for an independent foundation that would promote online privacy and safety. None of this action and reaction to the product overreach took place within legislative or even government regulatory frameworks. It was individual users who organized to pressure the platform, combined with investors (in this case, advertisers) who pulled their financial support and exerted influence within the company.

The public’s expertise gap persists but is offset, in part, by a small set of users who have considerable expertise and emerge as interlocutors. In the Beacon case, that interlocutor was Stefan Berteau, who had been testing Facebook’s privacy settings, uncovered Beacon’s behavior, and published his findings. Users had been unaware of these privacy concerns; because his post made the technical details accessible and comprehensible to a general audience, it gained attention and started the movement to halt Beacon’s implementation.

Individuals who have niche knowledge of new technologies and write for journalistic outlets are best positioned to expose tech missteps and initiate cascades that lead to change. Given the expertise asymmetries inherent in new technologies, tech journalists have played an important role in translating their significance for general audiences. Not only do they have the expertise, even if self-taught, to understand the technology, but they also have the contacts and networks to get scoops that can act as fire alarms reverberating through the broader technology ecosystem. For example, it is difficult to beat Coindesk for coverage of cryptocurrency scandals. In 2014, after one of the first Bitcoin exchanges, Mt. Gox, was hacked and stopped allowing withdrawals, one individual who had funds in the exchange turned to both the established Wall Street Journal and Coindesk—then a scrappy new crypto outlet—to publicize the concern. The media scrutiny eventually forced the exchange’s owner to confess the truth: The exchange had been hacked and the bitcoin was gone. Coindesk helped surface the scandal, and its reporting both contributed to crypto’s reputation as a vehicle for illicit activity and spurred more robust security measures on exchanges.

Lastly, the impetus for innovation comes from scientists and developers, who determine not just the original iteration of an idea but also the follow-on adjustments. Many of the most disruptive innovations have emerged from arms race dynamics that pushed innovation cycles faster than regulatory feedback could be incorporated. Firms compete fiercely for talent and investment dollars, sometimes releasing products prematurely because of these market pressures. Early users of the image generation feature in Google’s Gemini AI model were shocked by the historically inaccurate and sometimes offensive content it produced. The feature was pulled offline, and Google’s co-founder, Sergey Brin, admitted that the company had “messed up … mostly due to just not thorough testing.”

The geopolitical and financial push behind innovations is strong, but so are the social norms pushing back against technology, which have manifested not just among the public but in the tech firms themselves. The “techlash,” or backlash against large tech companies, is real. Facebook was the face of, and in the crosshairs of, that techlash. In March 2019, an NBC News/Wall Street Journal poll found that only 6 percent of people trusted Facebook either “a lot” or “quite a bit,” with 2 percent reporting that they trusted Facebook “some” and 60 percent “not at all.” This low trust in tech goes beyond social media and its role in election interference and political violence abroad. In the United States, trust in the tech sector as a whole declined to 57 percent in 2021, a more than 20-point drop from 2012.

AI has emerged against the backdrop of a broad-based techlash, and the question of trust and responsibility has been a current in its development. In 2021, OpenAI employees left the company to found Anthropic and implement a vision that they believed focused on “safety and controllability” and could provide a “constitutional AI” model based on basic principles of human fairness. Anthropic’s model is not open source, but the company publishes research and findings that create transparency around model training and its intended “harmlessness” and “helpfulness” measures. Microsoft has an entire unit dedicated to trustworthy AI. Four of the biggest players in generative AI—Anthropic, Google DeepMind, Microsoft, and OpenAI—voluntarily established an institution called the Frontier Model Forum to help establish responsible development norms, support research on AI risk, and create new governance frameworks. Cynics could interpret these examples as Big Tech virtue signaling—instances of disingenuously articulating a responsible position while racing toward an unknown future. But the hivemind—the larger community of the public, investors, and government regulators—can help offer a compass for ethical red lines in ways that were absent from the far more closed, insular development of nuclear weapons and social media.

Regulation Under Uncertainty

The relevance of previous laws and the pressure from the public, knowledgeable journalists, and expert insiders do not absolve Congress of a regulatory function. But regulation is more than just new laws: Hearings on technology help both members of Congress and the public learn about complex technologies, and the provenance of regulation goes well beyond legislative action.

Skeptics—and technological “doomers,” in particular—might wonder whether this more diverse ecosystem is sufficient to prevent the development of potentially cataclysmic systems. Scholars of nuclear strategy and policy use the term “firebreaks” to describe the boundaries that make it difficult for conflict involving conventional weapons to escalate to nuclear weapons. These can include operational barriers, like separating conventional and nuclear forces, but also psychological barriers stemming from a mental and ethical aversion to employing more lethal and more taboo weapons. These firebreaks could emerge because there were actual technical differences between conventional and nuclear weapons. The line between conventional and nuclear marked a transition to something new and manifestly different. There’s no equivalent for AI. No one can demonstrably say when a machine becomes “intelligent” or agree on what artificial general intelligence would be. There are no clear thresholds that regulators could apply, no AI firebreak to demarcate the line between a useful technological tool and something too radical or dangerous to develop or use.

The risks are uncertain, and AI might never present the same type of existential risk as nuclear weapons. How a country finesses its regulatory framework to stay short of a nebulous line in AI will depend on where its private sector, mass public, and public sector stand on technological risk. In the United States, which leads globally in AI, risk acceptance has led to innovation and the resources to double down on innovation. This differs sharply from AI development in the European Union, where legislators have been more aggressive about leaning in to new AI laws, eliciting praise from some and criticism from others, including company executives worried that legislation will “jeopardise Europe’s competitiveness and technological sovereignty.” Where countries sit economically and geopolitically is ultimately where they stand when it comes to thinking about tech regulation.

The science of new technologies—quantum, AI, and others in the future—may prove to be the easy question. The art of knowing when that technology has gone too far may prove to be more difficult. But mapping the regulatory landscape and the ways in which it goes beyond the obvious government actors is a step toward understanding how that determination is made.


Sarah Kreps is the John L. Wetherill Professor of Government, adjunct professor of law, and director of the Tech Policy Institute at Cornell University. She is also a nonresident senior fellow at the Brookings Institution and a member of the Council on Foreign Relations. She is currently finishing a book on the policy responses to emerging technologies from nuclear weapons to artificial intelligence.
