
Modularity for International Internet Governance

Chris Riley, Susan Ness
Tuesday, July 19, 2022, 8:31 AM

The internet is global, but the laws that govern it are not; designing digital platform regulations around shared modules can help relieve this tension.

Global internet (mohamed hassan, https://pxhere.com/en/photo/1441535; CC0 1.0, https://creativecommons.org/publicdomain/zero/1.0/)

Published by The Lawfare Institute
in Cooperation With
Brookings

The modern-day “global” internet faces a dubious future. On the battle lines of internet freedom, Russia’s increasingly authoritarian controls aspire to the level of China’s Great Firewall, while the annual Freedom on the Net report for 2021 found a global decline in internet freedom for the 11th consecutive year. The same report also noted that at least 48 countries explored increasing governmental oversight of the tech sector.

In the midst of increasing global division lies, perhaps, a core of unity: a worldwide interest among democracies in changing the status quo of internet governance to improve the baseline of responsibility and accountability for digital platforms. And for this problem, at least, there is hope—perhaps distant hope—for the possibility of increasing alignment. We propose that modularity can be a useful and tractable approach to improve digital platform accountability through harmonized policies and practices among nations embracing the rule of law.

Modularity is a form of multistakeholder, co-regulatory governance, in which modules—discrete mechanisms, protocols, and codes—are developed through processes that include a range of perspectives. Modularity produces, to the extent possible, internationally aligned corporate technical and business practices through shared mechanisms that achieve compliance with multiple legal jurisdictions, without the need for a new international treaty.

Think of modularity as a five-step process. First, problem identification: One or more governments—working together or separately—identify an open challenge. For example, vetting researchers as part of a digital platform data access mandate. Second, module formation: A group of experts (which may or may not include government representatives) collaborates to develop a module that includes both standards and processes for addressing the problem, and is designed for use across multiple jurisdictions. Third, validation: Individual governments evaluate and approve the module by indicating that its output—such as a decision that individual research projects should be cleared to receive platform data—can be used to satisfy requirement(s) set out in their respective underlying legislation. Fourth, execution: Systems created through the module apply the module’s protocols to individual circumstances. (In this instance, vetting research projects applying for clearance.) Finally, enforcement and analysis: Each government uses its national policies and procedures to ensure digital platform compliance, and periodically assesses the module process to ensure it remains fit-for-purpose. 

Modularity offers many advantages for digital platform governance. It helps norms and expectations evolve along with rapidly evolving technology, while maintaining the force of law, without the obstacles and delays inherent in separately amending each of the underlying laws. And it helps close substantive gaps present in many platform legislative frameworks being developed today. But making it a reality will require governments to be willing to embrace an aligned path forward through disparate legal and political systems.

Global internet aspirations increasingly conflict with regional legal frameworks.

Greater global alignment of user experiences, protections, and access to digital platforms benefits all stakeholders. For businesses, it can improve predictability and reduce inefficiencies, such as building, operating, and maintaining separate systems for separate regions. The 2016 report of the Digital Economy Board of Advisors noted that “global interconnectedness is precisely what enables small and medium-sized businesses to operate and grow their markets internationally, and undue restrictions on the stream of information can impede their ability to compete.” For governments, it can reduce the cost of developing detailed regulations. And for individuals, it can maximize the impact of global conversations and advocacy.

Yet, the internet has not broken down the Westphalian nation-state system, despite the latter’s inherent tensions with the notion of a global network and citizenship. And as we close the book on the decades-long experiment of a technology regulatory paradigm headlined by self-governance, governments are reasserting their sovereignty by advancing their own legislative and regulatory proposals rather than pursuing shared legal frameworks. In fact, constitutional limitations often preclude the adoption of identical statutes for many states, with conflicts between the U.S. First Amendment and European speech regulations serving as a prime example.

The gulf between the United States and the European Union on privacy law is wide. In 2018, the EU’s General Data Protection Regulation (GDPR) established a fundamental baseline of protection above and beyond the totality of applicable laws in the United States, setting a de facto standard for the rest of the world in the process. The U.S. and EU institutions signed a “Privacy Shield” agreement in hopes of facilitating continued transatlantic digital commerce, but it was struck down by EU courts in 2020 as insufficiently protective of European citizens’ data. An alternative legal basis for data transfers, so-called standard contractual clauses, was recently declared insufficient by the Irish Data Protection Commission, casting the future of data flows back in doubt. While a political agreement on a new and improved Privacy Shield was reached in March 2022, the details have yet to be finalized. Even then, further court and regulator approval remains uncertain.

A similar gulf lies ahead for digital platform law with the European Union’s landmark Digital Services Act (DSA), which the European Parliament and EU member states agreed to in April 2022. The EU envisions that, like the GDPR, the DSA will set a new legislative standard for the Western world. It could create a substantial gap between the behavior demanded of technology companies in the United States and in Europe. The Transatlantic Trade and Technology Council might help reduce this gap, but absent major legislation by the U.S. Congress, effective alignment remains impossible. 

Many other countries are actively building their own digital platform legal frameworks as well. For example, the United Kingdom has been working on a new Online Safety Bill that would impose a broad range of obligations on digital platforms, and Australia updated its existing Online Safety Act in 2021 to add new provisions for online services including app stores and search engines. The emerging differences among these frameworks, as well as the lack of action in the United States, pose new and significant hurdles for the global flow of data and digital commerce.

Calibrating slow-moving laws to fast-moving technologies is also a challenge.

In parallel with growing transnational regulatory differences, the difficulty of calibrating rules for private actors through blunt legal instruments stymies effective governance. Because online content and user behavior are constantly evolving, so too are the harms that arise from ubiquitous digital platforms. And as these harms evolve, maintaining the same level of responsibility and accountability requires digital platforms’ policies and practices to evolve as well. Ideally, governance frameworks would create incentives for continued investment and assessment, resulting in an improved baseline of behavior. But that is a hard goal to implement within static law.

Platform accountability is not the only issue that faces the seemingly intractable problem of incorporating multiple stakeholder viewpoints into an ever-shifting landscape. Cybersecurity presents a useful point of comparison. In the United States, legislation offers few prescriptions for sufficiently responsible data security practices. The multistakeholder Cybersecurity Framework developed by the National Institute of Standards and Technology (NIST), in contrast, offers a comprehensive road map for improving corporate system security across a range of dimensions and baseline risk profiles. Overall, cybersecurity policy has by and large accepted the impossibility of perfection and embraced continuous improvement as a paradigm, driven by inclusive stakeholder consultation and collaboration. Modularity offers platform accountability a similar path forward: a means of adapting expectations for corporate practices through diverse input without the need for legislative change, and, ideally, of aligning other countries as well.

Encouraging similarities have emerged in the most recent examples of platform accountability legislation. These laws expressly incorporate external stakeholder input as part of forward-looking evolution. For example, the DSA looks to international standards bodies to craft voluntary platform audit standards and incorporates industry and civil society participation into developing codes of conduct. Such standards and codes can presumably be changed over time as technologies and practices change, with greater ease than updating formal legislation. Similarly, the United Kingdom’s draft Online Safety Bill requires Ofcom, the U.K. communications regulator, to consult with stakeholders in drafting regulations and codes.

Audit standards and researcher access exemplify the gaps in current laws and the opportunity for modularity to add value.

The DSA, like many other technology laws, includes many substantive provisions that resemble high-level principles more than precise, clear behavioral expectations, leaving implementation to subsidiary rulemaking. Or, as Mozilla’s Owen Bennett remarked after the EU’s final intergovernmental agreement on the DSA: “[T]he work is just getting started.” Bennett identifies risk matrices, auditing, and effective risk mitigation as examples of areas where “crucial norm and standard-setting” work remains unfinished—areas ripe for modular development.

Take auditing, for example. The DSA obligates large platforms to conduct independent, third-party audits. The auditors are required to have relevant expertise, but the law does not specify the vetting criteria for suitably competent auditors. Nor does it suggest processes or review mechanisms to ensure that auditors have the necessary access for their work; there is merely an obligation on the platform to provide access. An audit module could be created through cross-border collaboration among professional auditors, platform policy and integrity workers, and third-party stakeholders, including civil society advocates and, where appropriate, government experts. Such a module could establish vetting mechanisms and minimum standards for hiring audit firms and conducting audits, as well as an oversight regime. The EU, alongside other legal jurisdictions, could then recognize the module in its enforcement mechanisms, granting the resulting audits powerful legitimacy.

Researcher access represents a similarly compelling opportunity for modular regulatory design. Article 31 of the DSA requires large platforms to provide data access to vetted outside researchers who meet specific statutory criteria, including independence, necessity and proportionality, and protections for data security; researcher vetting is reviewed by the European Commission or a member-state coordinator. In the U.S., the proposed Platform Accountability and Transparency Act (PATA) requires platforms to provide vetted researchers with access and authorizes the National Science Foundation to develop appropriate criteria. Both the DSA and PATA could be operationalized to allow common modular researcher vetting processes, developed by a multistakeholder body that could include researchers, platform representatives, and representatives of the European Commission, the U.S. government, or other governments authorizing the module. This would mitigate concerns over governments dictating independent researcher activity, and the system could be updated across borders based on experience.

The European Union is of course fully capable of drafting EU-specific rules for audits, researcher access, and other remaining open questions in the DSA. But the EU’s next steps will be critical for the long-term viability of the DSA as technologies evolve, as well as for the scale of the DSA’s global influence. An intentionally modular approach would strengthen the DSA by making it more adaptable to evolving technology, and would facilitate adoption of compatible provisions in other countries, especially if the modules are co-developed through collaboration with other governments and stakeholders.

Modularity requires new thinking about governance. 

By emphasizing multistakeholder engagement, modularity may appear similar to existing processes of rulemaking, such as the “notice and comment” rulemaking of administrative agencies under the U.S. Administrative Procedure Act, or the public consultation process used by the European Commission in drafting the Digital Services Act. Similarly, international law experts will recognize modularity as a structure in some treaties. Notably, the Digital Economy Partnership Agreement, signed by Chile, New Zealand, and Singapore in January 2020, includes “modules” on issues such as business and consumer trust and digital identities.

Neither consultative rulemaking nor modular international treaties, however, fit the current global internet regulatory challenges associated with platform accountability. Treaty processes involve too much time and too much compromise of individual sovereignty to produce outputs at the level of specificity needed for platform accountability. While many international collaborations have arisen in the context of internet governance, including notably the Christchurch Call and the recently announced Declaration for the Future of the Internet, these agreements articulate valuable principles and areas of alignment rather than implementable practices. Thus, they are not directly enforceable but, rather, function as normative exercises guiding platform practice and individual countries’ public policy efforts.

While consultative rulemaking allows, at least in principle, for broad input in the development of policy, it suffers from the need to be conducted entirely within each country’s specific procedural and political structures. Take audits, for example: Enforcing a statutory audit requirement requires first establishing vetting procedures for suitably qualified and independent auditors, setting minimum standards for audit scope, and developing mechanisms for oversight. These decisions are complex, and stakeholders in various legal jurisdictions will pursue the answers that advance their interests. The result is unnecessary differences in practical standards and outcomes, and the likelihood that platforms will face different and potentially conflicting obligations across countries. Modularity avoids the limitations of treaties and consultative rulemaking by situating its substantive work between the individual sovereign legislative and enforcement stages.

The future well-being of the internet and the efficacy of new digital platform responsibility laws will benefit significantly if leading nations have the courage to experiment with a fresh approach to governance. Cross-border collaboration on common processes and codes of practice through modularity would facilitate greater regulatory consistency across jurisdictions, reducing conflicting requirements and implementation costs incurred by businesses, and improving compliance with lower regulatory costs for governments. Common modules adopted by multiple jurisdictions would be designed to update their processes based on experience to facilitate more timely responses in a rapidly evolving ecosystem without having to wait for each country to amend its rules. And ultimately, a transatlantic modular system of common protocols and mechanisms, developed through the participation of multiple rights-respecting countries, would strengthen shared goals of protecting democracy and human rights.


Chris Riley is a global internet policy and technology scholar, a distinguished research fellow at the Annenberg Public Policy Center, and works as the principal at Cedar Road Consulting and a senior fellow for internet governance at the R Street Institute. Chris is a former director of policy for Mozilla and former internet freedom program manager at the U.S. Department of State. He holds a Ph.D. in computer science from Johns Hopkins University and a J.D. from Yale Law School.
Susan Ness is a distinguished fellow at the Annenberg Public Policy Center, where she convened and co-chaired the Transatlantic High-Level Working Group on Content Moderation and Freedom of Expression, and the principal of Susan Ness Strategies, a technology policy consulting firm. She previously served as a member of the U.S. Federal Communications Commission. She is a member of the board of directors of Vital Voices Global Partnership, an NGO that invests in women leaders globally to improve the world.
