
What We Learned From Bloomberg’s Online Campaign

Bridget Barrett
Friday, March 6, 2020, 2:08 PM

Bullish digital campaigning can’t change hearts and minds at the polls—but it can change Facebook.

A podium at a February 2020 Michael Bloomberg rally in Arizona (Gage Skidmore/CC BY-SA 2.0)

Published by The Lawfare Institute

After spending $570 million and winning only 60 delegates in a lackluster performance on Super Tuesday, Michael Bloomberg dropped out of the Democratic presidential primary on March 3. Bloomberg’s campaign may be over, but his impact on election-year platform governance will be felt long after the race ends.

The campaign leveraged Bloomberg’s deep pockets to flood social media platforms with ads and sponsored content, or paid promotions posted by so-called influencers. The ad blitz forced the platforms into action. Ultimately, the Bloomberg campaign prompted Facebook and Instagram to change their policies on branded content by political candidates. The response to the Bloomberg campaign provided yet another example of how malleable platforms’ policies and digital campaigning norms are in the absence of federal laws and administrative guidance.

From the moment the former New York City mayor announced his candidacy, the Bloomberg campaign aggressively bought television and digital ads. But in mid-February, it began pursuing less traditional modes of political advertising.

On February 15, more than a dozen Instagram accounts posted memes featuring Democratic presidential candidate Mike Bloomberg. All of these memes were paid for by the Bloomberg campaign, and all of them included disclaimers such as “yes this is really #sponsored by @mikebloomberg.”

The Bloomberg campaign was not the first to try to use social media influencers, but it was the first that I know of to do so publicly on the national stage. (Since there is no database of such content, or transparency initiative looking into it, if other campaigns or political groups have used this approach, we simply don’t know.) Notably, the Bloomberg campaign did so in flagrant violation of Facebook’s rules for what it calls “branded content.”

At the time, Facebook’s policies for branded content explicitly stated that advertisers must use the platform’s “branded content tool.” Like most of Facebook’s policies, this one also applies to Instagram, which is owned by Facebook. The branded content tool simply lets the account being paid to post tag the sponsoring advertiser’s page, adding a “paid partnership with” label to the post. Importantly, this process gives Facebook and Instagram visibility into paid partnerships that they otherwise would struggle to identify.

Advertisers must request access to this tool to use it, and at the time Facebook was not granting political accounts access. That decision amounted to a de facto ban on political branded content: If campaigns paid influencers to post for them, they would have to do so without the branded content tool and would thus be in violation of Facebook’s policies.

The Bloomberg campaign did not use the tool when it paid for posts on those Instagram accounts. While the accounts paid by the campaign included disclaimers in the text of the posts, they were still in violation of Facebook’s stated policies.

On the left, a Bloomberg-sponsored Instagram post with no “paid partnership” label from the branded content tool. On the right, an example of a Bloomberg post that has since been updated to include such a label.

Facebook had a few options for dealing with this: First, Facebook could hold true to its policies and sanction the accounts that posted the branded content and the Bloomberg campaign for breaking its de facto rule. Or the company could simply allow the content to stand, ignoring the violation of its rules. Finally, Facebook could actually change its rules in the middle of an election to allow political accounts to use the branded content tool in response to the Bloomberg violation, thus greenlighting a strategy that every other campaign had been told was off-limits.

Facebook chose the latter two options. It let the posts stand and ultimately gave the Bloomberg account access to the branded content tool. The majority of the posts now carry the “paid partnership” label. (At least four still lacked it as of March 4.)

When faced with enforcement directed at a major political figure, Facebook backed down.

This was not the first time. In early October 2019, the Trump campaign ran an ad on Facebook containing a claim that third-party fact checkers had rated false in late September. Facebook’s advertising policies prohibit ads with false information as determined by third-party fact checkers, and Facebook’s vice president of global affairs and communications had given a speech about this very fact-checking program on September 24. But when fact checkers and journalists called out the Trump campaign’s ad as disinformation, Facebook didn’t take down the ad. Instead, it changed its fact-checking policy to exempt political figures, a move that allowed the campaign to continue running the ad. And it’s not just Facebook. Twitter has struggled to apply its rules to the president’s account, preferring to point to exceptions to its rules or to create new tools to avoid sanctioning the sitting president on social media.

Facebook’s response to the Bloomberg campaign’s paying meme accounts is only the latest example of the current state of play for political figures on social media: If you’re big enough, bold enough, and influential enough, the rules are negotiable.

This is not entirely the fault of Facebook, Instagram, or Twitter. They are operating in an increasingly polarized political environment with no legal bulwark on which to base or defend their policies.

Where Are the FEC and the FTC?

Despite using disclaimers that many users in the posts’ comment sections found questionable, the Bloomberg campaign didn’t actually break any laws. In large part, this is because there are no laws to break.

The Federal Election Commission (FEC) and the Federal Trade Commission (FTC) have failed to develop rules and enforcement mechanisms for influencer marketing. Both administrative bodies have, at best, gaping holes in their rules for influencer marketing and, at worst, no rules at all.

For example, the FTC recently released guidelines for social media influencers to follow. In easy-to-understand language, the document gives clear examples of what is allowed and what is not. Unfortunately, the guidelines cover only commercial speech, not political speech. And even if they did cover political speech, the FTC has struggled to enforce its rules with meaningful sanctions. Social media companies can voluntarily opt to draw on the requirements outlined by the FTC. But the framework has yet to be systematically enforced in the commercial arena, let alone tried out in the political one.

The FEC offers even less guidance. The FEC requires that “public communications” made on behalf of campaigns be accompanied by a “paid-for-by” disclaimer. But should posts made by influencers be considered “public communications” even though the definition of public communications “does not include communications made over the internet, except for communications placed for a fee on another person’s website”? The FEC rules provide no clear answer to that question. What’s more, questions like this are on indefinite hold, because the FEC lacks a quorum to issue advisory opinions or create and enforce its rules. Even before losing its quorum, the FEC was far behind current technology in its rulemaking, failing to offer clear guidance on whether and when Google search ads require a paid-for-by disclaimer.

While one billionaire’s (electorally unsuccessful) test of a novel political communication strategy may seem inconsequential—or at the very least a sideshow to more important digital campaigning issues—it is a clear sign of what’s to come. Our laws and regulatory efforts have massive holes that will only grow with new communication technologies and strategies.

The public (in my view unfairly) asks technology companies and social media platforms to fill these holes to make elections more transparent and fair, often at considerable expense. But these companies are demonstrably not up to the task. While Facebook has gone beyond existing political speech law to set and enforce rules for its platform (like enforcing its Community Standards prohibition on hate speech and maintaining an advertising transparency database), the company has failed to consistently enforce many of its policies.

So What?

Mike Bloomberg has dropped out of the presidential race. His campaign’s bold strategy failed to gain traction with voters. Facebook’s and Instagram’s caving on his actions didn’t make a difference at the ballot box. So why should we care about any of this?

For one, Bloomberg’s treatment raises fundamental questions about electoral fairness. The Bloomberg campaign has taught us that rules on social media platforms simply do not apply to major candidates and influential public figures in the same way they apply to everyday Americans and down-ballot candidates. And those candidates who are most willing to act boldly and without apology are likely to have distinct social media advantages over those who follow the rules.

The Bloomberg campaign, and the Trump campaign before it, faced no negative consequences from Facebook for their brazen disregard of Facebook’s own rules governing political speech. With a nonfunctioning FEC and a set of platform companies unwilling to enforce their own policies, there are increasingly few ways to hold campaigns accountable for misbehavior online.

Facebook and Instagram have adopted policies prohibiting misinformation about elections. False claims that election day has been postponed, for example, now violate both platforms’ rules. And there is no doubt that they will take quick action to remove such content when it is posted by clearly malicious actors.

But imagine this scenario: On the eve of the general election, a presidential candidate makes his or her final plea to voters through public posts, branded content, and ads on Facebook and Instagram. Within these posts and ads, sandwiched between policy statements and get-out-the-vote appeals, this candidate includes misinformation about who can vote and what candidates are running, and asks his or her supporters to interfere with the ability of the opposition’s voters to participate.

According to Facebook’s Community Standards, such posts are explicitly prohibited. According to recent examples, the candidate would likely be granted an exemption.

The cracks were already there—the Bloomberg campaign just made them apparent. The question is who will exploit them next.

Bridget Barrett is a graduate student and Roy H. Park Fellow at the University of North Carolina-Chapel Hill, and research lead for the digital politics research group at the Center for Information, Technology and Public Life (CITAP).
