Time for Transparency From Digital Platforms, But What Does That Really Mean?

Heidi Tworek, Alicia Wanless
Thursday, January 20, 2022, 8:01 AM

U.S. lawmakers rarely agree these days. But across the political spectrum, most policymakers concur that digital platforms, including social media, messengers, and search engines, pose a problem. They might not agree on what the problem is exactly—either a scourge of unfettered disinformation or censorship of conservative views—but one approach for answering that question is gaining popularity: transparency reporting by digital platforms.

In fact, transparency seems to be the one norm that everyone can get behind. From the Aspen Institute’s Commission on Information Disorder to the head of Britain’s Ofcom, a growing international chorus is calling for greater transparency from social media and other technology companies. Just under half of the 84 policy proposals examined in one literature review included recommendations for increased transparency reporting.

It is heartening to find a point of agreement in such a contested and contentious policy arena. What transparency means in practice, though, is still an open question. It’s time for policymakers and scholars to lay out a real blueprint for what transparency can and should encompass. Otherwise, the concept risks becoming a meaningless buzzword, a putative panacea for all our online problems.

Many proposals calling for more transparency offer few details for quick implementation or are narrowly focused on one area, such as digital ads or the removal of extremist content. Proposals also imply different things when they discuss transparency. For some, it means greater reporting from companies, while for others, it means data-sharing for external analysis by academics, civil society or governments. What might go into a more comprehensive framework? What questions need to be addressed first? Are there lessons to be learned from other fields?

In this post, we set out some basics for transparency reporting by platforms and summarize current legislative efforts. We get into the weeds of potential categories for transparency reporting—a necessary step to decipher what transparency reporting can and cannot achieve. Finally, we consider the broader ramifications of mandating transparency from platforms: if policymakers want digital platforms to become more transparent, might this be an opportunity to apply similar principles to other media and, indeed, to politicians themselves?

Transparency Reporting and Current Legislation

Transparency reporting is distinct from data-sharing by digital platforms because it does not necessarily require the transfer of raw data that can then be analyzed for other research purposes. Of course, as with any exercise in categorization, the two concepts overlap in initiatives such as ad libraries, proposed in Rep. Lori Trahan’s U.S. Social Media Disclosure and Transparency of Advertisements Act, which might provide researchers with aggregated statistics on advertising practices, as well as the ads and related data themselves. After a difficult experience with data-sharing from Facebook, Nathaniel Persily is now proposing a Platform Transparency and Accountability Act that would create a division within the Federal Trade Commission to provide “privacy-protected, secure pathways for independent research on data held by large internet companies.” Such efforts are welcome and important, but issues remain, such as how long it will take to implement these policies and whether independent researchers have the capacity to use those data. In the meantime, and in concert with such efforts, we suggest how transparency reporting by platforms could complement independent research.

Transparency reporting is the regular publishing of aggregated statistics shedding light on an organization’s operations. This type of reporting is an old phenomenon, though its exact form has differed over time and by country. Some countries have long required telecommunications service providers to report on government requests for subscriber information or wiretaps. In many democracies, public broadcasters report on their editorial procedures to ensure independence from the governments funding them. Indeed, the United Kingdom’s Ofcom has reviewed transparency reporting practices on a number of topics, including advertising, financial services and government disclosure of pollution levels.

To date, transparency reporting by the wider tech sector has been mostly voluntary. Companies such as Apple, Facebook and Twitter have been left to determine how and when they undertake transparency reports, leading to ad hoc and unstandardized disclosures. As Andrew Puddephatt noted in his report for UNESCO, this approach can lead to biases or geographic gaps, with data focused only on certain parts of the world, primarily the Global North. Common types of transparency reporting have included requests for access to information, removal of accounts and content, and enforcement of policies. Yet existing transparency reporting only scratches the surface.

Several governments have begun introducing legislation that would require digital platforms to conduct regular transparency reporting. The German Network Enforcement Act, the U.K. Online Harms Bill, the EU Digital Services Act and the U.S. Social Media Disclosure and Transparency of Advertisements Act all require some form of transparency reporting by social media companies. The challenge is that most draft bills are scant on details about what transparency reporting might look like, leaving it to multistakeholder working groups to figure it out.

A handful of international companies are facing new regulations from several countries, and potential variations from a hundred more. One potential future could look like the recent past with data privacy regulation. Between 2010 and 2020, 62 countries promulgated new data protection laws. The biggest change came with the European Union’s General Data Protection Regulation (GDPR) in 2018. Many countries then adopted or introduced data protection laws that resembled GDPR, including Brazil, Barbados and Panama. 

The same could happen with transparency reporting. Instead of focusing solely on what transparency legislation means for one jurisdiction, it makes sense to consider the needs of stakeholders around the world. So long as standards look beyond a narrow socio-geographic landscape, such as the United States, a framework developed to be inherently diverse could benefit countries, such as Finland or Fiji, whose populations speak less common languages and where platforms might have invested less in content moderators fluent in those languages. That means giving policymakers and civil society from a greater range of countries and perspectives a say, even when discussing legislation for one jurisdiction. We have now seen multiple times that the first law developed to regulate platforms in one place is often copied by others, including by authoritarian regimes with less regard for human rights. Developing a broadly harmonized approach would also ensure that platforms can comply swiftly with new rules.

The Aims of Transparency Reporting

As a starting point, societies must debate the aims of transparency reporting by tech companies. Transparency for the sake of transparency sounds great, but having a clear purpose can streamline the development of a reporting framework. Is it to hold companies to account? Is it to disclose what data they hold? Is it to rebuild trust in the information environment? Is it to inform the next phase of regulation? It could be all of these things or none of them, but the aim should guide the development of transparency reporting frameworks. Without a discussion about the basic aims of transparency reporting, policymakers risk getting lost in the weeds without considering the broader question of why they are investing time and energy in regulating transparency in the first place.

After determining some broad aims for transparency, it’s time to get into the nitty-gritty of the scope of transparency reporting. This sets the criteria for reporting, outlining who is obliged to report on what topics, how that information should be aggregated and disclosed, at what frequency and to whom. Any reporting requirements should also mandate how to store data related to transparency reporting; otherwise, it might become inaccessible as technology formats change. Policymakers should consider not just the next six months, but the next 60 years. Some observers, such as Mark MacCarthy, have suggested a tiered approach to transparency reporting, where some data is freely available to the public, while other data can be accessed only by researchers or regulators.

Broadly speaking, there are at least nine categories of transparency reporting by digital platforms. Although these classifications may seem unnecessarily complicated, such detail is needed to reflect on whether and how transparency reporting fulfills broader aims or potentially creates unforeseen consequences.

These categories emerged from a literature review by the Partnership for Countering Influence Operations of more than 200 documents related to transparency reporting, as well as from more than 50 multistakeholder interviews on the topic. Additionally, the authors participated in multiple roundtables and working groups on content moderation and transparency over the past few years. These categories of reporting occur at the levels of:

  • The user. This might include aggregated data on different types of platform accounts, including advertisers, providing high-level details on demographics. Information related to users might include types of content posted publicly, patterns of platform activity, and what services are available to which users and in what languages. For advertisers, data could show what types of ads are purchased by which types of users, targeting what audiences, where and in what languages.
  • The platform. This would detail how the systems are designed and work—for example, how specific algorithms are designed and by whom, but also how those algorithms are applied to what services, including advertising and content curation. Such reporting could help detect potential biases in algorithmic design and suggest new ways to test algorithmic impact.
  • Policy development. This could encompass policymaking and enforcement, including in content moderation. While digital platforms have increased disclosure of policies, few public posts about policies contain date stamps to show how they change over time. To rectify this, Christian Katzenbach at Bremen University has created a Platform Governance Archive to house key policy documents from the major companies. This type of transparency would greatly help researchers. Policy development reporting might cover how policies are created, by whom, how they are enforced and appealed (if at all), as well as how such measures are assessed for impact.
  • Content moderation. While content moderation tends to follow platform policies, given the breadth of such enforcement, reporting on this practice warrants its own category. Such reporting could include how content is moderated, by what types of teams (internal or external), with what number of fluent speakers of each language covered, and where the teams are located. Explanations of how data shared with content moderators is secured, and what measures are in place to protect those exposed to graphic content, could also be included.
  • Internal research. Recent leaks by former tech employees have shown the need for more transparency around internal research by industry. Such reporting would cover the types and findings of research conducted internally about the platform, to understand impacts on users and the effects of platforms’ own interventions. In addition to sharing the findings, reporting on internal research might provide insights into the teams conducting such analysis, the decision-making processes guiding it, including the selection of methods, the ethical considerations in its design, and how users were informed.
  • External requests. Two similar categories of reporting concern external requests for interventions and for data access. Platforms should report how they process both types of requests, including what percentage of requests were accepted or rejected, broken down by type of request and actor, and details around how results were communicated.
    • Interventions. In reporting on external requests for interventions, platforms would disclose what types of third parties are asking for which types of action to be taken on what categories of user account, activity or content. This reporting might include locations of requesters, the languages spoken, and the reason why the request was made, shedding light on vulnerable communities who might need more protection or actors who attempt to game the system through reporting features. 
    • Data access. Similarly, reporting on external requests for data access would cover requests from myriad actors, including law enforcement, academics, journalists and governments, to access the personal information of individual users or groups of users. This reporting would also cover criteria such as the location of the requester, the types of data sought and for what purpose, as well as the decision-making process determining who gets access.
  • Terms of service. Companies tend to rely on their terms of service to gain consent for how they use data generated by people engaging on their digital platforms. These agreements are often tomes written in legalese, which most users do not read before agreeing to them. Terms of service are distinct from platform policies: the former outline what the user agrees to, including how their personal data can be used by the service provider, while the latter set out the rules for how the platform can be used and is governed. Reporting on terms of service could help better inform users about what these documents cover, how they change over time and what triggers those changes, but also whether account holders can opt out, how long it takes most people to agree and the languages in which the terms are offered.
  • Third-party relationships. Researchers and governments aren’t the only ones seeking access to data from digital platforms. Several third-party service providers regularly access digital platform data to provide their customers with social media listening dashboards, targeted advertising tools and online tracking. Transparency reporting on third-party relationships could illuminate the scope of behavioral advertising systems, showing which firms can access which categories of data and for what purposes. Such reporting should also outline platforms’ policies governing the collection of this data and how they ensure third parties use it for the purposes they claim.

Taken together, these broad categories span the range of transparency reporting. They are not just quantitative and statistical but also qualitative. They would reveal not just how much content is taken down (which constitutes much of current transparency reporting) but also how platforms behave. While the list of categories is already long, there may be others.

How to Implement Transparency Reporting and How It Can Set the Standard Beyond Platforms

Ideally, a multistakeholder community would determine the full scope and priorities for a transparency reporting rollout to ensure that reporting addresses the most pressing concerns as quickly as possible. Indeed, networks and coalitions are forming to do just that. The Centre for International Governance Innovation has been convening civil servants, legislative staff and regulators through its Global Platform Governance Network. Several civil society organizations are building an Action Coalition on Meaningful Transparency with support from the Danish Foreign Ministry. And the Global Internet Forum to Counter Terrorism leads a transparency working group to guide reporting related to extremist activity online. 

Oversight of transparency reporting will also be needed to ensure that reporting is accurate. Given a lack of trust in social media, auditing of corporate reporting by digital platforms will need to be independent and fully transparent itself. Audits would assess the performance of transparency reporting, verifying its accuracy and compliance with existing legal frameworks, such as GDPR, but also reviewing the governance of such reporting and the processes behind it.

Auditing must be approached carefully, however, as the process itself can lead to mistrust. Other regulatory fields have shown how the selection of auditors can make or break a process. The revolving door between the pharmaceutical industry and the U.S. Food and Drug Administration (FDA) allegedly contributed to the FDA’s “improper handling of the opioid crisis.” The overdose epidemic has killed hundreds of thousands of Americans. And the FDA’s role in the overdose epidemic may have undermined trust in its approval of coronavirus vaccines. Any auditing processes should learn from the FDA’s mistakes, choosing auditors carefully and setting conditions on auditors’ subsequent employment in the industry.

While these initiatives are necessary, it would be naive to see transparency reporting as a panacea. Daphne Keller has called for humility about the potential of transparency to fix problems. For example, platform transparency most likely would not fix the fundamentals of a broken voting rights system in the United States. The public should also consider how transparency reporting could create perverse incentives. For example, if platforms are required to report on content deletion, that may push them toward more automated content deletion in an effort to “prove” their efficacy. 

But in other ways, transparency reporting can push for changes beyond platforms themselves. Such policies for digital platforms can serve as a starting point for other entities, including governments and more conventional news organizations. These entities have a way to go and have even pushed back against transparency. For example, newspapers, including the Washington Post, won a lawsuit that invalidated a new Maryland requirement for greater online ad transparency. Transparency reporting for platforms could start a broader conversation about transparency from TV, talk radio or other entities. Traditional media continue to play an important role within the information environment. Rebuilding trust in sources will require understanding editorial processes, financing (including who is advertising) and what content is most consumed by audiences. Some of this could be as simple as providing website analytics on articles, an export feature with which any knowledgeable content producer is familiar.

Some governments have implemented transparency measures that are similar to some we suggest for platforms. For example, the Canadian government requires an algorithmic impact assessment, while the U.K. has just set new standards for algorithmic transparency. But they could go much further, revealing how often governments request take-downs from platforms or exposing the role of algorithms more broadly. As Virginia Eubanks documented several years ago, algorithmic decision-making often harms the poor in the United States. Requirements for greater transparency could help those most harmed by inequitable implementation of government algorithms. 

The many legislative proposals and think tank reports we have cited highlight the importance of transparency. Now it’s time to delve into the details and to make sure that “transparency” does not fizzle into just another buzzword.


Dr. Heidi Tworek is Canada Research Chair and Associate Professor of Public Policy and History at the University of British Columbia in Vancouver, Canada. She is a senior fellow at the Centre for International Governance Innovation as well as a non-resident fellow at the Canadian Global Affairs Institute and the German Marshall Fund of the United States. She is an award-winning researcher of the history and policy of communications.
Alicia Wanless is the director of the Partnership for Countering Influence Operations at the Carnegie Endowment for International Peace. Wanless is a Ph.D. researcher at King’s College London exploring how the information environment can be studied in similar ways to the physical environment. She is also a pre-doctoral fellow at Stanford University’s Center for International Security and Cooperation and was a tech advisor to the Aspen Institute’s Commission on Information Disorder.
