
A Comparative Perspective on AI Regulation

Itsiq Benizri, Arianna Evers, Shannon Togawa Mercer, Ali A. Jessani
Monday, July 17, 2023, 8:00 AM
(Photo: Christiaan Colen, https://www.flickr.com/photos/christiaancolen/20446713629/; CC BY-SA 2.0, https://creativecommons.org/licenses/by-sa/2.0/legalcode)

Published by The Lawfare Institute in Cooperation With Brookings

On May 30, approximately 350 artificial intelligence (AI) experts penned a letter to express significant concerns about risks associated with AI. The letter stated that “[m]itigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.” The group’s letter is the latest in a string of warnings about the potential risks—both small and existential—that may result from the development and deployment of AI. Whether or not these concerns are ultimately realized, there is consensus among key players in both the private and public sectors about the need for AI regulation now. But conceptions of responsible AI risk management and appropriate regulation are already diverging across jurisdictions. What follows is a point-in-time effort to capture those differences, with a focus on developments in the United States, the European Union, and the United Kingdom, in order to better digest the rapid development of AI regulation across the globe.

The question isn’t whether AI will be regulated, but how. Both the European Union and the United Kingdom have stepped up to the AI regulation plate with enthusiasm but have taken different approaches: The EU has put forth a broad and prescriptive proposal in the AI Act, which aims to regulate AI by adopting a risk-based approach that increases compliance obligations depending on the specific use case. The U.K., in turn, has committed to abstaining from new legislation for the time being, relying instead on existing regulations and regulators with an AI-specific overlay. The United States, meanwhile, has pushed for national AI standards through the executive branch, while some states have adopted AI-specific rules of their own (both through comprehensive privacy legislation and for specific AI-related use cases). Between these three jurisdictions, there are multiple approaches to AI regulation that can help strike the balance between developing AI technology and ensuring that there is a framework in place to account for potential harms to consumers and others. Given the explosive popularity and development of AI in recent months, there is likely to be a strong push by companies, entrepreneurs, and tech leaders in the near future for additional clarity on AI regulation. Regulators will have to answer these calls. Despite not knowing what AI regulation in the United States will look like in one year (let alone five), savvy AI users and developers should examine these early regulatory approaches to try to chart a thoughtful approach to AI.

The European Union

The AI Act

The European Union is aiming to be a world leader in the regulation of AI, in the same way that it took the lead on personal data protection with the General Data Protection Regulation (GDPR). Accordingly, the European Commission’s April 2021 AI Act proposal is sweeping. Recently, the European Parliament adopted its amendments to the AI Act proposal, which will be followed by negotiations with the commission and European national governments to reach a final text. There is a good chance that the AI Act will be passed before the end of the year, but it is unlikely to take effect before mid-2025. For this reason, the commission is seeking to craft a voluntary AI Pact to mitigate the gravest risks associated with AI until the AI Act becomes effective. While the specific details of the AI Pact are unknown, it will likely see all major companies working in the AI field agree to transparency and accountability principles.

Unsurprisingly, a wide range of businesses have expressed concerns about the broad scope of the AI Act. It relies on a risk-based approach, meaning that the AI Act imposes different requirements according to the level of risk posed by an AI system. Additionally, businesses that violate the act risk penalties of up to 30 million euros or 6 percent of a company’s annual global turnover, whichever is greater. The act will apply to providers that place AI systems on the market or into service in the EU, irrespective of their place of establishment. The AI Act will also apply to importers and distributors of AI systems and to users of such systems who are physically present in the EU.

The commission’s proposed definition of AI has also drawn considerable criticism. It defines the technology as software developed using certain techniques that can generate content, predictions, recommendations, or decisions influencing the environments it interacts with. This definition is so broad that it encompasses virtually all algorithms and computational techniques. In contrast, the parliament’s negotiating position defines an AI system as a “machine-based system that is designed to operate with varying levels of autonomy and that can, for explicit or implicit objectives, generate outputs such as predictions, recommendations, or decisions, that influence physical or virtual environments.”

The commission’s proposal differentiates among four levels of risk and imposes different requirements on each. Most AI systems currently used in Europe, such as AI-enabled video games or spam filters, present minimal or no risk to individuals and fall outside the scope of the AI Act. Limited-risk AI systems, under the commission’s proposal, are subject only to transparency requirements. For example, the proposal requires those interacting with AI systems such as chatbots to be notified that they are corresponding with a machine. High-risk systems, however, are subject to stricter requirements. These are systems used in critical infrastructure that could put people’s lives at risk (for example, transportation technology); training that may determine access to education (such as automatic scoring of exams); safety components of products (such as robot-assisted surgery); employment (for example, resume-sorting software); essential private and public services (such as credit scoring software); law enforcement (such as police emergency call centers); border control management (such as AI-assisted analytics of migration flows and cross-border crime trends); and justice (such as the ability to automatically process incoming litigants’ applications). The parliament’s amendments expand the classification of high-risk areas to include AI systems that may harm people’s health, safety, or fundamental rights, or the environment; AI systems used to influence voters in political campaigns; and AI systems used by very large social media companies to determine what content to promote or demote to users.

The most prominent requirement for high-risk AI systems is the obligation to carry out a conformity assessment before placing the product on the market. This assessment is intended to confirm that the system builds on an adequate risk assessment, proper mitigation systems, and high-quality data sets to ensure that the system is legally compliant and technically robust. The assessment should also confirm the availability of documentation so that regulators can assess the system’s compliance; logging of activity to ensure traceability of results; transparency and information to users; appropriate human oversight; and robustness, accuracy, and security. Additionally, the AI Act prohibits AI systems that present unacceptable risks. These are systems that pose a clear threat to people’s safety, livelihoods, and fundamental rights, such as children’s rights and the rights to human dignity, nondiscrimination, privacy, and family life. Examples of prohibited AI systems include social scoring by governments (such as in China), toys using voice assistance that encourages dangerous behavior, and remote biometric identification in publicly accessible spaces used for law enforcement purposes. Notably, the parliament’s amendments expand that list to include biometric categorization systems using sensitive characteristics, such as gender or ethnicity, and the indiscriminate scraping of biometric data from social media or CCTV footage to create facial recognition databases.

The parliament has also addressed the hype over ChatGPT by including additional obligations for so-called foundation models—in other words, AI systems designed to be adapted to a wide range of distinct tasks. Under its proposal, generative foundation models like ChatGPT would have to comply with additional transparency requirements, such as disclosing to the user that content was generated by AI, designing the model to prevent it from generating illegal content, and publishing summaries of the copyrighted data used for training.

Other Relevant Instruments

The AI Act is not the only instrument for regulating AI in Europe. Under the GDPR, subject to certain exceptions, people have the right not to be subject to a decision based solely on automated processing that produces legal effects concerning them or similarly significantly affects them (for example, automated loan processing). Even where such decisions are allowed, people have the right to contest them. In any event, the GDPR provides that such decisions should not be based on sensitive data, such as personal data revealing racial or ethnic origin, political opinions, religious or philosophical beliefs, or sexual orientation.

The commission also proposed a directive on AI liability and a revised directive on liability for defective products in September 2022. The proposed directive on liability for defective products is designed to hold manufacturers liable for certain damages caused by defects in their products, including AI systems, better enabling victims to prove their product liability claims. It is, however, too early to say when these proposals will take effect, since they are still in the very early stages of the legislative process.

National Initiatives

Although regulatory efforts are concentrated at the European Union level, EU countries are not standing still. France and Germany launched national AI strategies in 2018, followed by Spain and Italy in 2020. And data protection supervisory authorities have already started to take action. In October 2022, France’s Commission Nationale de l’Informatique et des Libertés (CNIL) fined Clearview AI 20 million euros for violating French users’ privacy. Clearview AI collects photographs from a wide range of websites and sells access to its database through a search engine in which an individual can be searched for using a photograph. The CNIL ordered Clearview AI not to process data of individuals located in France absent a legal basis to do so and to delete any data already collected.

In May, the CNIL published an action plan for AI to anticipate and respond to threats posed by the technology. In March, the Garante, the Italian data protection supervisory authority, imposed a temporary ban on ChatGPT after it learned that users’ chat titles and payment information had been exposed in a data breach. The Garante’s key concerns about ChatGPT included a lack of transparency, the absence of a legal basis underpinning the processing of personal data to train the algorithms on which ChatGPT relies, and the inaccuracy of some of the answers it provided to users. In April, the Garante lifted the ban after OpenAI, ChatGPT’s developer, took steps to address those concerns. According to the Garante, OpenAI expanded its privacy policy and made it accessible from the sign-up page prior to registration with the service, introduced mechanisms enabling individuals to have inaccurate information erased, and enabled European users to opt out from the processing of their personal data. Despite these changes, data protection supervisory authorities in France, Germany, and Spain also opened inquiries into ChatGPT and OpenAI. As a result, the European Data Protection Board, which gathers all national data protection supervisory authorities across Europe, decided to launch a dedicated task force to foster cooperation and exchange information on possible enforcement actions against OpenAI. These efforts are likely just a taste of what is to come in Europe in the next few years, given regulators’ anxiety about AI as it develops and becomes even more popular and powerful.

The United Kingdom

U.K. Prime Minister Rishi Sunak and his government have staked out an aggressive position on AI regulation, looking to lead global regulators by providing a business-friendly answer to the EU’s skepticism. On June 3, Sunak reportedly raised with President Biden the idea of setting up a global AI watchdog authority, modeled after the International Atomic Energy Agency but headquartered in London. He also tweeted recently, “Done safely and securely, AI has the potential to be transformational and grow the economy.” In April, Sunak announced the formation of a task force with 100 million pounds in funding to “develop the safe and reliable use of … AI … across the economy.” This announcement followed on the heels of the March 29 U.K. government white paper entitled “A pro-innovation approach to AI regulation.” The U.K. has indicated that it will “avoid heavy-handed legislation” in this space and empower existing regulators instead of establishing a new and separate regulator. It also “initially … [does] not intend to introduce new legislation” for fear of overburdening businesses. The white paper was open for comment until June 21, and the U.K. government is analyzing the feedback it received with the intention of issuing an “AI Regulation Roadmap.” Meanwhile, Sunak has come forward with a bevy of proposals he plans to discuss with President Biden. Sunak’s plays for global leadership include the London-led global AI authority proposal mentioned above, another proposal for an international AI research body hosted in London, and an international AI summit in London this fall. While the white paper is key to understanding the U.K.’s current position, this jockeying for global leadership introduces dynamics that could significantly shape the U.K.’s next steps.

The white paper defines AI in reference to “characteristics that generate the need for a bespoke regulatory response” as follows:

The “adaptivity” of AI can make it difficult to explain the intent or logic of the system’s outcomes:

  • AI systems are “trained” – once or continually – and operate by inferring patterns and connections in data which are often not easily discernible to humans.
  • Through such training, AI systems often develop the ability to perform new forms of inference not directly envisioned by their human programmers.

The “autonomy” of AI can make it difficult to assign responsibility for outcomes:

  • Some AI systems can make decisions without the express intent or ongoing control of a human.

In this approach, the white paper follows the Department for Digital, Culture, Media and Sport’s July 2022 recommendation, which makes the point that a static definition may not be sufficient to capture current and future applications of AI. Whether this definition is more or less broad than the EU AI Act definition is likely to depend on its application in practice.

On risk assessments, the U.K. has decided to take a “context-specific” approach to regulation, by looking at the “outcomes AI is likely to generate in particular applications.” In other words, risks will not be assigned to specific technologies or sectors. Instead, regulators will look at general outcomes and weigh them against opportunity costs.

The country famous for its pervasive use of CCTV seems poised to take on new AI challenges, but reliance on existing regulators and processes raises practical questions. How will the Information Commissioner’s Office, the Financial Conduct Authority, and the Equality and Human Rights Commission collaborate over time? Will they have the resources and the expertise to handle the developing technology? To its credit, the white paper contains a proposal for centralized functions, including a Central Risk Function, with a remit that appears to include coordinating and deconflicting roles, while regulators will “identify and prioritize new AI risks in their sector.” But risks with this technology don’t cut cleanly across sector lines. It’s unfair, however, to suggest that the government hasn’t considered this. The white paper explicitly states, “Creating a new AI-specific, cross-sector regulator would introduce complexity and confusion, undermining and likely conflicting with the work of our existing expert regulators.” Another challenge to the U.K.’s ability to lead on AI regulation is its post-Brexit exclusion from the U.S.-EU relationship and from the cutting-edge cooperative discussions taking place between the two parties.

The United States

At the Federal Level 

The approach to regulating AI in the United States has been piecemeal, with the Biden administration and various federal regulators assessing the benefits and risks of the technology and issuing guidance under existing legal regimes. The major question is whether existing laws and regulators will be able to provide adequate guardrails for the technology, or whether an entirely new approach to AI governance will be needed. 

Each week, it seems as though there is some major federal announcement relating to AI. Recently, the Biden-Harris administration has taken a number of actions that it states will further innovation in AI while also protecting people’s rights and safety. In early May, for example, the White House announced three AI initiatives that fund responsible AI research, provide for independent community evaluation of AI systems, and begin the process of establishing U.S. government-wide AI policy. Also in May, the White House announced an update to the National Artificial Intelligence Research and Development Strategic Plan, a Request for Information for public input on national priorities for mitigating AI risks, and a new Department of Education report on the risks and opportunities related to AI in education. All of these actions come against the backdrop of the administration’s October 2022 release of the Blueprint for an AI Bill of Rights, which sets forth five principles, applicable to all sectors, intended to guide the responsible use of AI systems.

Another federal agency, the National Institute of Standards and Technology (NIST), published an alternate (but not incompatible) framework, the AI Risk Management Framework, that discusses how organizations can frame the risks related to AI, as well as characteristics of trustworthy AI, and gives them a road map to help address the risks of AI systems in practice. NIST followed this up by announcing a new public working group focused on AI in June. The goal of this working group is to specifically address the risks associated with generative AI.

Federal law enforcement agencies have made clear that even absent a comprehensive law that governs AI, their existing authorities apply. The Consumer Financial Protection Bureau, Department of Justice, Equal Employment Opportunity Commission, and Federal Trade Commission (FTC) issued a Joint Statement on Enforcement Efforts Against Discrimination and Bias in Automated Systems, pledging vigorous use of their respective authorities to protect against discrimination and bias in automated systems. Even if it may not be immediately apparent how exactly those authorities apply to technological changes, the joint statement is a reminder that entities need to thoughtfully approach how they deploy automated systems that are used to make important decisions about individuals to ensure those decisions align with the law. Over the past few months, the FTC has published a number of posts on its Business Blog illustrating exactly how it believes its existing Section 5 unfairness and deception authority can be used to rein in uses of AI that mislead consumers or that do more harm than good.

Congress is also turning its attention to AI. Sen. Chuck Schumer (D-N.Y.) recently announced an AI regulatory framework—the SAFE Innovation Framework. This is a principles-based approach to AI regulation that focuses on the following central policy objectives: security, accountability, foundations, explanations, and innovation. While there are not yet specifics on what a potential federal AI law that follows these objectives would look like, these foundational principles are similar to the issues that the EU is focused on through the AI Act. 

Even outside of Senator Schumer’s framework, there is an active appetite in Congress to oversee and potentially regulate AI. For example, in just one week in May, there were three hearings on AI: one about AI in government, another about AI and intellectual property issues, and one focused specifically on how to regulate AI. The hearings suggest that major questions for Congress in regulating AI will be whether there should be a licensing or registration requirement, whether there will need to be a new independent agency or commission to oversee the technology, and what factors will need to be part of any risk assessment and mitigation requirements. 

At the State Level

So far, the United States has not adopted a comprehensive approach to AI regulation at the state level. Instead, states have regulated AI primarily either through specific provisions adopted as part of comprehensive privacy laws (such as the California Privacy Rights Act (CPRA)) or by creating specific obligations for companies that use AI in certain contexts (such as employment).

In terms of the state comprehensive privacy laws, some of the U.S. state laws in effect or set to go into effect in the near future (such as in California, Colorado, Connecticut, and Montana) create an opt-out right for consumers with respect to the use of their personal data for “profiling in furtherance of solely automated decisions [that] produce legal or similarly significant effects” concerning consumers. (This specific language is from the Colorado law, but the other states with this requirement use similar language.) This is similar to the opt-out requirement that exists for this type of processing under the GDPR and U.K. GDPR. These laws generally define profiling as “any form of automated processing performed on personal data to evaluate, analyze, or predict personal aspects related to an identified or identifiable individual[.]” The “legal or similarly significant effects” that these laws aim to regulate include decisions made by controllers related to financial or lending services, housing, insurance, education, employment, criminal justice, health care, and other necessities, such as food or water. (Again, there may be minor variations between the language used in the Colorado law compared to the other states, but the general principles are the same.)

These automated decision-making opt-out provisions essentially provide a mechanism for consumers to obtain some level of control over how their data is used with respect to technology they may not fully understand. They may be particularly important for certain use cases (such as education and housing) where a person’s livelihood may be impacted by automated decision-making, and these opt-out provisions may also require companies to provide additional transparency regarding how those important decisions are made. The automated decision-making provisions of these state comprehensive privacy laws complement the rights that consumers may have available to them under state and federal anti-discrimination laws, as well as under the federal Fair Credit Reporting Act—these laws also provide consumers with recourse regarding certain uses of their data (depending on how exactly the data is being used and for what purpose). The same is true for narrower state AI laws that regulate specific, high-risk use cases (such as the employment AI laws that have passed in Illinois and New York City).

Even outside of the specific provisions related to automated decision-making, the U.S. state comprehensive laws have other provisions for companies that use AI. For example, one of the issues that is relevant for AI development is the type of data that a particular tool is trained on. Generative AI tools are generally trained on texts, articles, websites, and other data sources, and these data sources may fall under the purview of applicable data protection laws, depending on how exactly they are used. This is because of how broadly personal data is defined under these new privacy laws, encompassing all data that relates to an identifiable person (again, similar to the GDPR). To the extent that a company uses information for training its AI models, it will have to assess whether this underlying data is subject to these comprehensive privacy laws (and therefore subject to the potential compliance obligations that attach to this data, such as providing consumers with data subject rights). Conversely, a company may determine that the data it uses for training its models is unaffected by the comprehensive privacy laws because the data meets the definition of “deidentified” data (that is, data that cannot reasonably be linked to an identifiable individual) or “aggregated” data (that is, data that relates to a group of individuals and cannot reasonably be linked to specific individuals) under the relevant laws—data that falls into these categories generally falls outside of the definition of “personal data” and is thus not regulated by these privacy laws. Either way, conducting this analysis will be relevant for any company looking to build out its AI model in a privacy-compliant manner.

In addition to these laws, there has been at least one “comprehensive” AI framework proposed in the U.S. at the state level—California’s Assembly Bill 331. If passed, this proposal would require companies developing “consequential” AI products (related to employment, education, housing, and the like) to conduct impact assessments; provide notice and opt-out rights to California residents; and implement a governance program that contains reasonable administrative and technical safeguards to address the reasonably foreseeable risks of algorithmic discrimination potentially associated with the AI tool. The law would be enforceable by the California attorney general and would also contain a limited private right of action. Though this bill is currently still in committee, the California legislature tends to proactively regulate tech policy issues (see the CPRA), regardless of what other states or Congress are doing. Other states have followed California’s lead on these issues, which may also be the case in this instance if the bill becomes law.

Conclusion 

While the United States may seem behind compared to its EU counterparts, there are still relevant rules that U.S. companies operating in the AI space need to be aware of. And, as demonstrated by the GDPR, movement overseas on these topics often inspires similar legislation in the U.S. (at least at the state level), as well as in other countries. 

There is some preliminary coordination between the United States and the EU in the form of a forthcoming AI Code of Conduct, discussed recently at the U.S.-EU Trade and Tech Council (TTC). At the TTC, the EU called for the quick drafting of an AI Code of Conduct to which businesses might voluntarily adhere as they wait for more permanent, long-term legislation. U.S. Secretary of State Antony Blinken was clearly amenable to collaboration under the auspices of the TTC, stating that he was “intensely focused on what [the U.S. and EU] can do together to address both the opportunities and challenges posed by emerging technology[.]” The TTC parties intend to present the proposal to the Group of Seven this fall. The TTC also facilitated further implementation of the Joint Roadmap on Evaluation and Measurement Tools for Trustworthy AI and Risk Management through the launch of expert groups looking at AI terminology and taxonomy, cooperation on AI standards, tools and risk management, and monitoring and measuring AI risks. It’s also worth noting that after Brexit, the TTC excludes the U.K., leaving Sunak’s government in a position to react to U.S.- and EU-created policy, rather than helping to shape it from the inside.

It is also important to note that, similar to the GDPR, the EU AI Act has extraterritorial reach. Thus, U.S. and other international companies may eventually have to look to the EU AI Act by default when seeking a relevant regulatory framework (one that may also help mitigate their compliance risk).

Whether there will eventually be a true global standard for AI regulation remains to be seen. However, companies that wait for a clear regulatory framework before developing AI risk mitigation strategies may find that they have wasted valuable time. Keeping an eye on the rapid development of international codes of conduct and other interim standards will help companies efficiently prioritize implementing the standards that are applicable and appropriate to their operations and likely to play a role in forthcoming regulatory regimes.

 


Itsiq Benizri is a counsel at WilmerHale Brussels. He focuses his practice on EU data protection, cybersecurity, and artificial intelligence law. He is a Certified Information Privacy Professional (CIPP/E) and a member of the International Association of Privacy Professionals (IAPP). He is a graduate of the College of Europe, the European University Institute, and the Brussels School of Artificial Intelligence.
Arianna Evers represents clients in high-stakes matters relating to privacy, cybersecurity, and emerging technologies, including AI. Ms. Evers represents clients in investigations and litigation brought by the FTC, state attorneys general, and other regulators, and counsels clients on data security and privacy requirements and best practices under federal and state laws, regulatory guidance, and government- and industry-recognized frameworks. Ms. Evers has advised clients building foundational AI models, as well as those looking to leverage AI in their day-to-day operations, on implementation concerns, risk management, and fairness, discrimination, and bias. She provides a thoughtful but practical approach to managing legal and reputational risks in an area where the law is moving quickly.
Shannon Togawa Mercer is a senior associate at WilmerHale. Her practice focuses on complex global data protection, privacy, and cybersecurity matters. Ms. Togawa Mercer has extensive experience counseling clients on cross-border data protection and privacy compliance as well as cyber incident response. She has practiced in London and Washington, D.C., and previously served as managing editor and senior editor at Lawfare. Ms. Togawa Mercer also served as a National Security and Law associate at the Hoover Institution.
Ali A. Jessani is a senior associate at WilmerHale. He counsels clients on the privacy, cybersecurity, and regulatory risks presented by new and proposed uses of technology and consumer information, including generative AI. Specifically, he advises clients on compliance issues related to federal and state laws governing data sharing, ownership, and protection. He also serves as an adjunct professor at the Antonin Scalia Law School at George Mason University.
