Published by The Lawfare Institute
You might have missed it amid all the sound and fury of impeachment, but it’s been a busy week for disinformation. Twitter, the Wall Street Journal reports, will start removing posts that it determines are “misleading about an election.” Elizabeth Warren’s campaign rolled out a plan on “Fighting Digital Disinformation,” including a pledge not to “knowingly use or spread false or manipulated information.” And the House Ethics Committee announced that members of the House of Representatives who share deepfakes on social media might face sanctions from the House itself. The ethics announcement attracted the least attention of all, but it’s actually an important step in creating standards for how elected representatives should use social media.
This news might seem trivial compared to the drama of the impeachment trial going on just down the street. But as technology companies and governments alike grapple with how to address the spread of online falsehoods, the committee’s memo is noteworthy. Four years after the shock to the system of 2016, everyone agrees that disinformation and misinformation are problems that need to be dealt with, but the question of who is best positioned to accept responsibility remains open. The House of Representatives appears to be, in some small way, beginning to take on the task.
The committee’s “pink sheet”—an advisory memorandum on House rules—alerts members of the House to the dangers of posting deepfakes on social media, warning that “manipulation of images and videos that are intended to mislead the public can harm … discourse and reflect discreditably on the House.” For this reason, disseminating “deep fakes or other audio-visual distortions intended to mislead the public” could violate the House’s Code of Official Conduct, which governs the behavior of the chamber’s members and employees.
This might sound like the committee is addressing a problem that doesn’t exist yet. As far as we know, there have not been any cases in which a member of Congress—or any other prominent American political figure—has tweeted a genuine deepfake, meaning doctored audio or video generated through machine learning that can produce extremely lifelike and misleading results. After all, deepfakes—though concerning—just aren’t all that common in politics (yet).
Politicians have, however, published plenty of what the memorandum describes as “other audio-visual distortions”—that is, photos or video deceptively manipulated in a less sophisticated manner than a deepfake. Sometimes the manipulation is obvious: President Trump recently tweeted a picture altered to depict him putting a Medal of Honor around the neck of a dog that played a role in the raid on Islamic State leader Abu Bakr al-Baghdadi (the original photo showed the president bestowing the medal on a Vietnam War medic). But U.S. political figures have published more deceptive images, too. Three days after the strike that killed Iranian general Qassem Soleimani, Rep. Paul Gosar tweeted a photo appearing to show President Obama shaking hands with Iranian President Hassan Rouhani; it took 40 minutes before he acknowledged that the picture was actually a fake, a doctored version of a shot from a 2011 meeting between Obama and then-Indian Prime Minister Manmohan Singh.
“The world is a better place without these guys in power.” pic.twitter.com/gDoXQu9vO5 (Paul Gosar, @DrPaulGosar, January 6, 2020)