
From Fake News to Fake Views: New Challenges Posed by ChatGPT-Like AI

Nikolas Guggenberger, Peter N. Salib
Friday, January 20, 2023, 8:16 AM

An infinite supply of plausible opinions from fake, AI-powered pundits threatens to crowd out genuine discourse.

San Francisco's Pioneer Building, home to OpenAI, creators of ChatGPT. (HaeB, https://tinyurl.com/526ddtr2; CC BY-SA 4.0, https://creativecommons.org/licenses/by-sa/4.0/deed.en)

Published by The Lawfare Institute in Cooperation With Brookings

Just over a month ago, OpenAI released ChatGPT, a text-producing artificial intelligence (AI) application. ChatGPT is impressive. It can draft unique, plausible, college-level essays on topics ranging from economics to philosophy—in iambic pentameter if asked. At least without specialized tools, the AI’s writing is indistinguishable from that of a reasonably articulate, competent human, and it can produce such content across various domains. 

ChatGPT is obviously a powerful new tool that has the potential to do much good in many sectors. But there will also be downsides. Among them, AIs like ChatGPT will be able to produce a limitless and practically free supply of “opinions” on anything. These fake opinions will supercharge online influence campaigns and erode another pillar of trust in liberal discourse: the very idea that our counterparts online are real people.

In 2015, the small-d democratic problem with online speech was disinformation and misinformation. Fabricated news stories claiming that, for example, Hillary Clinton helped to run a child trafficking ring out of a Washington, D.C., pizza shop spread like wildfire. These and countless subsequent lies fed directly into a political movement that culminated in the Jan. 6, 2021, insurrection attempt. No doubt, addressing deliberate disinformation is a difficult problem—especially when these lies come from the highest-ranking political officials in the country. Combating disinformation raises constitutional questions, and responses necessarily involve difficult trade-offs around free speech. Yet traditional media has updated its approach to covering disinformation, and social media platforms have become more successful at identifying, labeling, and suppressing false information. Twitter’s more lenient recent approach to misinformation, and particularly to right-wing propaganda, does not reverse the general trend. Sprawling research into disinformation and misinformation after the 2016 U.S. presidential election has contributed to a better understanding of how information consumption habits are changing, and digital media literacy is making its way into classrooms. Some studies show that younger people fare better at distinguishing fact from fiction. Altogether, the developments of the past several years give some reason for hope.

Fake opinions, shared as a part of what looks like human dialogue, however, are another story entirely. Perhaps the most effective strategy for making someone believe something is convincing them that most other people like them already believe it. Propagandists understand this.

In 2016, the Russian government engaged in systematic campaigns to “astroturf” disruptive views. Astroturfing is a strategy for manufacturing the appearance of consensus by posting the controversial opinion over and over, in many places, from many accounts. Astroturfing circa 2016, however, was often either ineffective or expensive. Simple bots could copy and paste more-or-less identical social media posts over and over from newly created accounts. These bots’ repetitive content and suspicious posting patterns made them comparatively easy for social media platforms to identify and ban, even if doing so sometimes took time. The more effective but expensive strategy was to pay an army of real humans to maintain long-term social media accounts that liked, shared, and posted on a wide range of topics. Such influence accounts were much harder to detect. They looked like real users because they were. Over time, then, these naturalistic accounts could begin to mix in the opinion to be astroturfed without much fear of being banned. Their activity would be indistinguishable from that of a real human with a handful of controversial views.

ChatGPT, or similar AI, will make effective astroturfing practically free and difficult to detect. Unlike the bots of yesteryear, such AIs need not spam near-identical copy-pasted statements of fringe views. They can mimic humans, producing an infinite supply of coherent, nuanced, and entirely unique content across a range of topics. Just try asking ChatGPT to write an argument for arming schoolteachers or raising the minimum wage. A Facebook account run by such an AI might post daily recipes or stories about its dog. It might do this for months or years before beginning to sprinkle in, say, opinions to the effect that Ukraine is to blame for Russian aggression against it. Such accounts can and will be created by the millions at almost no cost. The bot accounts will not only post proactively but also react to other users’ posts and engage in long-term conversations. They can be programmed to look out for certain keywords, such as “Ukraine,” “NATO,” or “Putin,” analyze the preceding contributions, and produce a plausible reply.

The bottomless supply of practically free opinions could crowd out real—that is, human—contributions. Although propagandists have already relied on flooding the zone as a 21st-century version of censorship, ChatGPT-like AI will elevate that strategy to the next level. Once bot pundits become indistinguishable from humans, humans will start to question the identity of everyone online, especially those with whom they disagree. 

To be sure, ChatGPT has plenty of limitations, some of which have drawn ridicule. Cognitive scientist Gary Marcus, who holds great hopes for AI, identifies two core problems: neural networks are “not reliable and they’re not truthful.” This, Marcus argues, results from the type of reasoning on which large language models rely. And he is not alone. Leading Google AI researchers acknowledged in a recent paper that scaling the models alone will likely not do the trick. Size alone yields few “improvements on safety and factual grounding.” Journalist and pundit Ezra Klein, in a recent episode of the Ezra Klein Show, put it more starkly: The current systems “have no actual idea what they are saying or doing. It is bullshit […] in the classic philosophical definition by Harry Frankfurt. It is content that has no real relationship to the truth.” Much of the skepticism and criticism boils down to comparisons with humans, and, indeed, large language models do not think the way we do.

These limitations of current AI technology, however, do not stand in the way of sowing distrust. In some respects, they may even be features, not bugs. Random users arguing on Twitter are not famous for the accuracy and sophistication of their contributions. Mistakes, inaccuracies, and even made-up sources may give the machines a human touch. What counts is plausibility, and the current version of ChatGPT is surely no less capable of producing plausible content than the average Twitter user. A C- college essay more than does the trick.

Social media companies will no doubt continue trying to discover and ban these bot accounts. But given current capabilities, sorting humans from AIs will be difficult—at least at scale. As such accounts become more and more human-like, it will be harder to shut them down without accidentally censoring real people, too. The choice may thus be between substantial underenforcement—letting lots of bots run wild—and substantial overenforcement—banning most bots but lots of humans, too. Perhaps new approaches—probably themselves algorithmic—will emerge that improve platforms’ ability to play spot-the-bot. But even then, the chatbots will continue to evolve to avoid detection, and public discourse could become a battle of the algorithms, with computing power defining the marketplace of ideas.
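
To see how sharply this trade-off bites, consider a rough back-of-the-envelope calculation, sketched below in Python. Every figure in it (platform size, bot prevalence, detector accuracy) is an assumption chosen purely for illustration, not an estimate for any real platform.

```python
# Back-of-the-envelope illustration of the over-/under-enforcement trade-off.
# All numbers are assumptions chosen for illustration only.

total_accounts = 300_000_000      # assumed platform size
bot_share = 0.05                  # assume 5% of accounts are AI-driven bots
bots = total_accounts * bot_share
humans = total_accounts - bots

# Assume a detector that catches 95% of bots but also misfires on 2% of humans.
true_positive_rate = 0.95
false_positive_rate = 0.02

bots_banned = bots * true_positive_rate
bots_missed = bots - bots_banned
humans_banned = humans * false_positive_rate

print(f"Bots banned:   {bots_banned:,.0f}")
print(f"Bots missed:   {bots_missed:,.0f}")
print(f"Humans banned: {humans_banned:,.0f}")
# Under these assumptions, even a seemingly accurate detector bans about
# 5.7 million real people while still letting roughly 750,000 bots through.
```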

Another possible solution might be to treat opinions more like lies—deemphasizing or removing the most troublesome content, irrespective of whether a human or a machine generated it. This is a dangerous game from the perspective of free speech. Even if one believes that a narrow range of views—say, overt racism—ought to be banned online, removing just those will do little to stop AI-driven astroturfing. Human-like bots need not be overtly racist to wield worrying influence. They will flood the field with views about which politicians to vote for, what policies to support, and which geopolitical foes are and are not worth standing up to. And speech on such topics—even speech with which one vehemently disagrees—is at the core of democratic debate.

Yet another solution could involve reforming the business model of social media. Today, major social media platforms earn most of their money from advertisements. That model involves zero-cost access, which not only allows bot accounts but also poses little to no barrier to entry. Such a model can in fact thrive on bots—engagement with bot-takes drives revenue as effectively as engagement with human ones. Platforms may even profit directly by showing advertisements to bot accounts. Subscription-based business models without advertisements—think Netflix or Spotify Premium—could change the calculus. Creating and maintaining a bot account would no longer be free. Thus, Twitter’s $8 per month “blue checks for the masses” policy might turn out to have real benefits, as a bot filter, beyond improving Elon Musk’s personal balance sheet.

Eight dollars a month may not seem like a high price tag for, say, Russia to influence a foreign election. But the cost would be higher than it first appears. First, to work well, astroturfed opinion bots need to operate at scale. Twitter has admitted it found and removed over 50,000 Russian bot accounts created just to influence the 2016 U.S. presidential election. There were surely many more. Granted, $400,000 a month (the cost of 50,000 bots on Elon Musk’s $8/month Twitter) is a trivial sum for a nation-state. Second, the price of a bot account under the subscription model is higher than the top-line dollar figure. Account holders would have to supply a method of payment, like a credit card, which would be subject to “know your customer” rules. This could operate as a weak “proof of humanity.” Indeed, Twitter and other social media sites could limit the kinds of payment that they would accept to those usually requiring some minimal identification. That is, accept credit and debit cards, but no prepaid cards or Bitcoin. They could likewise limit the number of simultaneous or consecutive accounts created using a single method of payment. While none of these measures can guarantee keeping bots out, they would significantly increase the cost of running bot accounts.
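
The arithmetic behind these figures, together with the effect of one hypothetical safeguard (a cap on accounts per payment method), can be sketched as follows. The cap of three accounts per payment method is an assumption for illustration only, not a policy any platform has adopted.

```python
# The cost arithmetic from the paragraph above, plus one assumed variant.

bots = 50_000                 # scale of the 2016 campaign Twitter disclosed
monthly_fee = 8               # Twitter's $8/month subscription

print(f"Monthly subscription cost: ${bots * monthly_fee:,}")       # $400,000
print(f"Annual subscription cost:  ${bots * monthly_fee * 12:,}")  # $4,800,000

# Now assume, purely hypothetically, that the platform caps accounts at
# 3 per payment method. The operator then also needs thousands of distinct,
# KYC-checked payment instruments, a far larger hurdle than the fee itself.
accounts_per_payment_method = 3
payment_methods_needed = -(-bots // accounts_per_payment_method)  # ceiling division
print(f"Distinct payment methods needed: {payment_methods_needed:,}")  # 16,667
```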

Approaches like these carry significant risks, of course. First, they exclude people from discourse who lack either the necessary funds or digital means of payment. Second, the measures, if implemented successfully, would deliver a severe blow to anonymous discourse, with unforeseeable consequences for free speech. But the alternative, a world in which social media is dominated by bots shouting at bots, appears even less appealing.

Here is another, related but slightly different, solution. Governments could impose a Pigouvian tax on bot-generated comments. Pigouvian taxes attempt to quantify the externalized social harm of an undesirable practice and force the person engaging in it to bear the cost. The classic example is a carbon tax, levied on a per-ton basis, on emitters of carbon dioxide. One of us has argued elsewhere that such a tax would be a viable approach to regulating false speech on social media. It would work something like this. Once a year or so, regulators would audit social media sites using random sampling to determine how many impressions were of posts containing complete fabrications. Each false tweet would then incur a small tax—think cents, not dollars—equal to the estimated social harm of a marginal lie. In the same way, regulators could audit social media for bot accounts and impose a tax forcing them to internalize the harms of non-human posting. 
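
To make the audit mechanism concrete, here is a minimal sketch of the arithmetic, applied to the bot-account variant. The sample size, bot prevalence, platform posting volume, and per-post tax rate are all placeholder assumptions, not proposed values.

```python
import random

# Minimal sketch of the sampling-based audit described above, under placeholder
# assumptions (sample size, bot prevalence, posting volume, per-post tax).

def estimated_bot_post_share(sample):
    """Fraction of audited posts that came from bot accounts."""
    return sum(1 for is_bot in sample if is_bot) / len(sample)

def annual_tax_bill(total_posts, bot_share, tax_per_post):
    """Tax owed: estimated bot posts times the per-post harm estimate."""
    return total_posts * bot_share * tax_per_post

# Toy audit: regulators label a random sample of posts as bot- or human-authored.
sample = [random.random() < 0.05 for _ in range(10_000)]   # assume ~5% bot posts

share = estimated_bot_post_share(sample)
bill = annual_tax_bill(total_posts=200_000_000_000,  # assumed yearly post volume
                       bot_share=share,
                       tax_per_post=0.02)            # "think cents, not dollars"
print(f"Estimated bot share of posts: {share:.1%}")
print(f"Illustrative annual tax bill: ${bill:,.0f}")
```

The point of the sketch is simply that the bill scales with the platform's own measured bot problem, which is what aligns its incentives with the public's.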

The disadvantages of this approach are obvious: It depends on a substantial regulatory apparatus making difficult estimates of social costs. But there are upsides, too. A Pigouvian tax on bots would be mandatory, sweeping in even social media companies that did not wish to switch to the subscription model. And by allowing some social media sites to remain free to users, the approach ensures that those without the ability to pay are not shut out of discourse. It likewise preserves users’ ability to speak anonymously on controversial or sensitive topics. Finally, the Pigouvian tax aligns social media companies’ incentives with the public’s. Astroturfing will cost them money, so they will try hard to prevent it. Contrast that with today’s incentives, where, as discussed above, bot-produced content can actually increase social media companies’ earnings.

Misinformation and so-called alternative facts have pushed public discourse and democracy to the brink. If untamed, pundit bots might go a step further. Regulators and social media companies should thus think carefully about how highly sophisticated bot-generated content will alter online discourse. Careful, well-balanced responses will allow society to reap the benefits of powerful new AI systems while blunting the harms.


Nikolas Guggenberger is Assistant Professor of Law at the University of Houston Law Center. He also holds an appointment at the Cullen College of Engineering’s Electrical and Computer Engineering Department. Guggenberger’s work focuses on antitrust, law & technology, privacy, and regulation. He has frequently advised government entities and served as expert witness on technology policy, financial markets regulation, and media law.
Peter N. Salib is an Assistant Professor of Law at the University of Houston Law Center and Affiliated Faculty at the Hobby School of Public Affairs. He thinks and writes about constitutional law, economics, and artificial intelligence. His scholarship has been published in, among others, the University of Chicago Law Review, the Northwestern University Law Review, and the Texas Law Review.
