The European Union Changes Course on Digital Legislation
Published by The Lawfare Institute
Over the past decade, the European Union has assembled an edifice of legislation on privacy, platform competition, online content, and artificial intelligence (AI), firm in the belief that establishing a comprehensive and stable digital regulatory environment would encourage technology-driven growth and innovation in Europe.
Over the past year, however, doubt has emerged in Brussels about the presumed virtuous circle of regulation, growth, and innovation. Mario Draghi, former president of the European Central Bank, pointed out in an EU-commissioned report that the EU has fallen short in its efforts to develop an innovative tech economy, most notably in the area of cloud services, where American companies dominate.
Now, following Draghi’s prompt, the European Commission has dramatically changed course, in an effort to ensure that it is not left behind in the AI race. On Nov. 19, it proposed a two-part digital omnibus package making significant changes to its corpus of digital legislation. One measure would make changes to the General Data Protection Regulation (GDPR) along with other existing data-related measures, while a second would amend the AI Act.
An omnibus, in EU parlance, is a single legislative vehicle for amending multiple existing laws simultaneously without individually reopening each one. The EU member states and the European Parliament will have an opportunity to adjust the commission proposals, so final adoption will take months at best. Still, there is political urgency to the exercise, making decisive action during 2026 likely.
Proposed Changes to the AI Act
The digital omnibus on AI proposes changes to the AI Act, which was adopted in 2024 as a comprehensive risk-based regulatory structure for companies developing and deploying AI models and systems. It imposed transparency and risk assessment obligations on these companies, with the most stringent requirements applying to “high-risk” uses of AI, a category covering applications ranging from medical devices to facial recognition software, as well as to general-purpose AI systems such as chatbots.
The headline change in the proposed digital omnibus is delaying enforcement of provisions on high-risk systems, which were scheduled to be enforced starting in August 2026. The commission says it wants to give itself and a technical committee more time to develop harmonized standards, common specifications, and guidelines necessary for enforcement.
Under the reform proposal, for AI Article 6(2) high-risk systems outside the existing product safety regulatory structure (such as biometrics or education), enforcement starts on the earlier of Dec. 2, 2027, or six months after the availability of the to-be-developed enforcement standards. For Article 6(1) AI systems already integrated into the EU’s product safety regulatory framework (such as medical devices), enforcement starts on the earlier of Aug. 2, 2028, or 12 months after standards availability. While this change delays enforcement by 16 or 24 months, it does not change the substantive requirements imposed on high-risk systems.
Other changes include:
Article 6(3) of the AI Act provides that AI systems that do not create significant risks are not high-risk systems subject to heightened safeguards. The reforms provide that companies claiming this exemption must document a self-assessment before putting the system on the market or in service, but they no longer need to file a public registration in the EU database.
Article 10(5) of the existing AI Act allows high-risk AI companies to use sensitive data to detect and mitigate bias. The proposal extends this permission to all AI systems and relaxes the requirement from strict necessity to simple necessity.
The AI Act contains simplified regulatory compliance mechanisms for small and medium-sized enterprises. The reform extends these flexibilities to small mid-cap companies as well.
Under the proposal, the commission’s AI Office will enforce the AI Act when the same company develops a general-purpose AI model and an AI system based on it, and also when an AI system is incorporated into a very large online platform or search engine that is also regulated under the EU’s Digital Services Act.
The proposal shifts the AI literacy obligation: Rather than requiring providers and deployers of AI systems to ensure a sufficient level of AI literacy among staff operating those systems, it tasks the commission and the member states with fostering AI literacy.
Article 50(2) of the AI Act requires providers of AI systems to ensure that their outputs are detectable through a machine-readable format, so that users are able to use software to ascertain that they are dealing with AI-generated output. The proposal extends the deadline for compliance by six months to Feb. 2, 2027.
The EU AI Act authorized the commission to prescribe a template for a common, post-market AI monitoring plan. The proposal removes this authority and instructs the commission to provide “guidance” instead.
These changes constitute minor modifications of the act’s requirements and do not remove fundamental protections such as the requirement that providers of high-risk AI systems conduct a fundamental rights assessment. National authorities also retain substantial authority to impose additional measures beyond those set out in the AI Act in case a compliant AI system nevertheless poses a significant risk. None of these changes, moreover, affects the act’s ability to function as a comprehensive framework for responding to the risks and challenges of AI systems.
Proposed Changes to the GDPR
Several of the commission’s proposals for changing the GDPR also aim to ease the path for AI development in Europe. The EU has long seen the technology-neutral approach of its data protection law as a virtue, so tailoring it to accommodate a specific technology is a signal change in direction. Other important recent technology developments, such as blockchain, have not been deemed as meriting such special treatment.
One change is to trim back the GDPR’s expansive definition of “personal data” to exclude information where the entity holding it does not have reasonable means to identify an individual. In other words, the holder of pseudonymized data relating to a person would not be considered to have processed such data, even if downstream recipients could conceivably identify the person. This adjustment is hardly groundbreaking: It would simply codify the holding of a recent European Court of Justice judgment in the SRB case.
More significantly, entities processing personal data to develop and deploy AI systems and models would be able to rely on the GDPR’s “legitimate interest” legal basis—a significantly more flexible approach than having to laboriously document affirmative individual consent to such processing. The use of sensitive personal data in the context of AI systems would also be liberalized through a proposed exemption. Finally, the definition of “scientific research” in the GDPR would be amended to expressly acknowledge that it may further a commercial interest, that it likewise represents a legitimate interest, and that it is compatible with the initial purpose for which data was collected.
The European Commission has rather defensively characterized the omnibus proposals as “simplification, not deregulation, a critical look at our regulatory landscape.” Nonetheless, some observers view the changes to the GDPR as a partial retreat from the EU’s historic role as a global standard-setter for data privacy regulation. Anu Bradford, the Columbia Law School professor who coined the term “Brussels effect,” observed of the digital omnibus that “[w]hether you call it ‘simplification’ or ‘deregulation’, you are certainly moving away from the high water mark of regulation.”
The GDPR proposals have predictably caused disquiet in the EU privacy community. Max Schrems, of the advocacy group None of Your Business, criticized the proposals as “the biggest attack on Europeans’ digital rights in years.” The European commissioner responsible for the GDPR, Michael McGrath, responded with public reassurances that he had “no immediate plans” for further changes to the law.
Limited Impact of Trump Administration Pressure
Conspicuously, none of the changes would affect the Digital Markets Act (DMA) or Digital Services Act (DSA), the two EU digital measures most likely to be targeted by the Trump administration. Indeed, the same week that it launched the digital omnibus, the commission separately launched investigations into whether to designate the cloud services offered by Amazon Web Services and Microsoft as “gatekeepers” under the DMA, a step that would impose substantial additional regulatory responsibilities on the companies.
In addition, on Dec. 3, the commission launched an investigation of Meta under Europe’s traditional antitrust laws (not the DMA) for abuse of its dominant position. The concern stems from the company’s announcement in October that it would be limiting the ability of third parties to access WhatsApp users to offer AI products and services. On Dec. 5, the commission announced a 120 million euro fine on X for breaching its transparency obligations under the DSA. On Dec. 8, the commission opened an investigation to assess whether Google has breached EU competition rules in its use of web publisher material and YouTube content for AI purposes.
Based on these events, it seems that the commission’s digital omnibus proposal is more a reaction to home-grown fears that its digital regulations impede innovation by European companies than it is the result of outside pressure from the Trump administration.
Digital Sovereignty (Well, Where Achievable)
Whether coincidentally or not, the European Commission launched the digital omnibus during the same week as a high-level summit on European digital “sovereignty” in Berlin that drew German and French leaders, as well as EU commissioners. Digital sovereignty has been a rallying cry for French politicians for several years but now appears to have found its moment on the larger EU stage. At the Berlin conclave, French President Emmanuel Macron declaimed that Europe must not allow itself to be “turned into a vassal” in the U.S.-China digital rivalry, while German Chancellor Friedrich Merz more cautiously averred that “Europe must go its own digital way in a united effort, and this path must lead to sovereignty … where achievable.”
The summit also served as an occasion for French AI developer Mistral and German enterprise software company SAP to announce a joint effort to develop sovereign enterprise resource planning software that government agencies in Europe could use in lieu of American competitors’. Nonetheless, the growing European rhetorical solidarity behind digital sovereignty does not disguise the fact that the EU and its member states have hard decisions ahead about the extent to which they will restructure government technology procurement programs to favor European competitors to the U.S. tech giants.
***
The European Commission has not caved under the pressure of the Trump administration and big tech companies. Rather, the goal of the reforms is transparently to jump-start a homegrown European tech industry. In his comments on the release of the proposal, EU economy commissioner Valdis Dombrovskis emphasized the commission’s “commitment to give EU businesses more space to innovate and grow.”
But the commission might well be choosing the wrong policy lever to achieve that result. It is unclear to what extent the omnibus reforms will encourage EU firms to enter the AI market in Europe or expand their operations there. U.S. tech companies regularly complain about regulatory cost burdens, but are digital regulations actual barriers to entry and expansion that enter into the business calculations of European tech companies?
Insofar as European policy has contributed to the dearth of successful European tech companies, the target of reform might be the fragmented digital single market, underdeveloped capital markets, and punitive bankruptcy laws that deter risk-taking rather than digital rules designed to protect the public against business abuse, as Bradford has argued. EU companies are so far behind their U.S. and Chinese counterparts in cloud computing and large language models that a regulatory quick fix is not likely to be enough.
In its digital omnibus proposal, Europe may be imitating the wrong part of the U.S. administration’s support for the tech industry. Deregulation is part of the U.S. approach—but it is not the effective part. The more important pivot in the U.S. is to industrial policy for the tech industry, including the search for active ways for the government to aid strategically important industries such as chip manufacturers and developers of large language models. These tech industrial policies include subsidies, tax credits, expedited regulatory approvals, and priority access to energy for AI data centers.
In addition to the digital sovereignty measures discussed above, European policymakers might consider adopting and expanding measures such as the proposed Industrial Accelerator Act, which calls for up to 70 percent of the content of critical products to be made in Europe. By turning to regulatory relief instead of more active industrial policy measures, the EU may well find itself refighting yesterday’s battles rather than forging forward-looking policies for the future.
