
White House AI Framework Proposes Industry-Friendly Legislation

Jakub Kraus
Friday, April 10, 2026, 1:00 PM

While considering legislation for some major AI policy issues, the White House left others untouched.

The White House. (SinarBack 54H/Public Domain Pictures, https://www.publicdomainpictures.net/en/view-image.php?image=496726&picture=white-house; Public Domain).

On March 20, the White House released a “comprehensive” national framework for artificial intelligence (AI), three months after calling for legislative recommendations on the technology in an executive order that sought to curb certain state AI laws. The framework has already received support from influential Republicans in Congress, including House Speaker Mike Johnson (R-La.) and Sen. Ted Cruz (R-Texas), who will likely work closely with the White House to advance AI legislation aligned with the framework. On the other side of the aisle, Sen. Maria Cantwell (D-Wash.), the ranking member of the Senate commerce committee that Cruz chairs, said the framework “identifies key areas to address.” Thus, the framework offers a fairly clear sketch of which types of AI policy could become U.S. law before the 2026 midterm elections.

Preemption

Perhaps the most contentious part of the framework is its emphasis on preempting “cumbersome” state AI laws, particularly those that “impose undue burdens,” govern areas “better suited to the Federal Government,” or otherwise conflict with the White House’s goal of achieving “global AI dominance.”

Targets

Besides these high-level principles, the framework outlines three specific forms of regulation that states should be barred from engaging in. First, states should not “regulate AI development,” which likely refers to laws governing the process of creating AI models, as opposed to deploying or using them. For example, California’s Senate Bill 53 requires large AI companies to publish and comply with their own “frontier AI framework.” These frameworks describe each company’s approach to various aspects of risk management, including processes related to AI development, such as cybersecurity and built-in safeguards. But SB 53 also covers deployment of AI models, as it requires companies to report critical safety incidents to California’s Office of Emergency Services. If the White House cares primarily about development regulations, then SB 53’s frontier AI framework provisions—not its incident reporting requirements—are the likelier target for preemption.

Second, the White House recommends legislation that ensures states cannot “penalize AI developers for a third party’s unlawful conduct involving their models.” Existing legal doctrines may already expose AI developers to liability for downstream use of their models, but some state laws explicitly expand that exposure. For example, Colorado’s flagship AI Act creates a duty of care for AI developers whose systems make consequential decisions; these developers must protect consumers from risks of discrimination arising from “intended and contracted uses” of the AI system. Another interesting example—though it doesn’t create liability so much as remove a shield against it—is a law California passed last year, which states that in civil cases alleging harm from AI, including cases against AI developers, it “shall not be a defense, and the defendant may not assert, that the artificial intelligence autonomously caused the harm to the plaintiff.” 

Third, the White House argues that states should not “unduly burden Americans’ use of AI for activity that would be lawful if performed without AI.” This frames AI usage as an extension of Americans’ existing liberties, echoing the “Right to Compute” bills that have advanced through several state legislatures, with one becoming law in Montana. This portion of the framework may be aimed at laws such as Colorado’s AI Act, which requires businesses to conduct annual impact assessments, implement risk management programs, and meet other requirements when they use AI to make consequential decisions in areas such as hiring, lending, housing, and health care. These safeguards are designed to reduce the risk of unlawful discrimination in AI-assisted decisions. Humans making the same decisions without AI must still comply with existing anti-discrimination law, but they do not face Colorado’s extra layer of procedural requirements. However, Colorado’s law has yet to take effect, and it has a good chance of being amended in the near future.

In an interview, White House science and technology policy adviser Michael Kratsios suggested that the Trump administration would also extend this principle to preempt state laws “banning particular verticals.” Kratsios was specifically reacting to New York’s Senate Bill 7263, which aims to create liability for operators of chatbots that engage in the unauthorized practice of certain professions—for instance, chatbots offering legal advice. Although New York might not pass this bill, other states have already enacted laws regulating AI conduct that would be unlawful if performed by an unlicensed human. For example, both Nevada and Illinois have enacted laws that extend prohibitions on unlicensed therapy practice to AI chatbots. Notably, neither of these measures creates a pathway for chatbots to obtain any sort of professional license themselves.

Concessions 

The White House also calls for “respect[ing] key principles of federalism” and outlines three specific areas of state law that Congress should not preempt. One area is difficult to interpret: States should retain their traditional police powers to “enforce laws of general applicability against AI developers and users, including particular laws to protect children, prevent fraud, and protect consumers.” The phrase “general applicability” introduces serious ambiguity—it’s a legal term of art that, in some contexts, refers to laws that apply to a domain without deliberately targeting that domain. If that’s what the White House means here, the carve-out may be narrower than it appears. Indeed, a separate section of the framework reuses the term, advising Congress to avoid preempting “generally applicable laws protecting children, such as prohibitions on child sexual abuse material, even where such material is generated by AI.” 

Thus, one interpretation of this preemption carve-out is that the White House wants states to continue enforcing general-purpose laws that incidentally reach AI-related child safety, while leaving AI-focused child safety laws open to preemption. This interpretation would stand in stark contrast to Sen. Marsha Blackburn (R-Tenn.)’s draft AI bill, which preserves any state child safety laws that offer greater protection to minors than the bill itself. It would also somewhat conflict with President Trump’s executive order that directed the creation of an AI policy framework, which specifically stated that the federal framework must not propose preempting “otherwise lawful State AI laws” related to protecting children. 

The framework also states that Congress should not preempt procurement requirements and other rules for how state-provided services use AI, with explicit emphasis on law enforcement and public education. This is meaningful, as states have begun passing laws focused on AI adoption in these contexts. Preempting state rules in the public sector could be politically fraught, but that hasn’t stopped Congress from trying: A near-final draft of Congress’s summer 2025 reconciliation bill would have preempted many state laws regulating AI systems, with an exception for procurement requirements only if they “streamline” procedures in a manner that “facilitates” AI adoption.

Arguably, the framework’s most notable preemption carve-out is zoning laws and other authorities that “determine the placement of AI infrastructure.” That carve-out is meaningful because local resistance to data centers has, in some cases, delayed or blocked projects. Resistance has also reached state legislatures, which enacted dozens of laws regulating data centers in 2025. Several states are even considering a temporary moratorium on data centers, a step some local governments have already taken. If data center resistance continues to expand, it could significantly slow AI progress in the United States.

However, the phrase “determine the placement” appears to refer to a relatively narrow category of land-use restrictions. The White House might claim that other data center laws are regulations on AI development and, therefore, fair game for preemption. Even without preemption, the Trump administration has shown interest in using federal lands for AI infrastructure, which could bypass state restrictions altogether.

Establishing New Federal Rules 

Though the framework’s preemption intentions have sparked some opposition, it also contains numerous substantive recommendations for AI policy. 

Children

The first section focuses on protecting children and includes two points discussing what Congress shouldn’t do. As discussed earlier, one recommendation is to refrain from preemption of “generally applicable” laws in this domain. Another is to avoid “setting ambiguous standards about permissible content, or open-ended liability, that could give rise to excessive litigation.” Ironically, this statement is fairly ambiguous and open-ended; it can be construed to cover many different child safety policy proposals. For example, Blackburn’s draft AI legislation has attracted similar criticism, and the trade association NetChoice already invoked the statement in opposition to a state AI bill in Tennessee. Thus, the White House appears open to supporting a relatively small number of child safety proposals.

Besides not-to-dos, the framework also gives Congress some to-dos. Its very first recommendation is to “build on actions to date by the Trump Administration to protect children, including the historic signing of the Take It Down Act.” The Take It Down Act is legislation Congress passed last year that criminalizes publishing nonconsensual intimate imagery online, including sexually explicit deepfakes; it also requires most online platforms to establish a notice-and-takedown procedure for swiftly removing such content. However, the framework’s recommendation doesn’t describe any concrete actions to “build on” the Take It Down Act. Instead, it appears to serve a rhetorical function by reminding readers that the Trump administration has already taken an important action in this domain, where AI models such as xAI’s Grok have indeed contributed to harm.

Another recommendation calls for Congress to “affirm that existing child privacy protections apply to AI systems, including limits on data collection for model training and targeted advertising.” This likely refers to Congress clarifying that existing laws, such as the Children’s Online Privacy Protection Act (COPPA), extend to AI. In particular, the Federal Trade Commission finalized a rule last year that already imposed some limits on using children’s data for AI model training, targeted advertising systems, and other algorithms. The Trump administration might support codifying some version of this COPPA interpretation into statute.

Additionally, Congress should “require AI platforms and services likely to be accessed by minors to implement features that reduce the risks of sexual exploitation and self-harm to minors.” These risks are real: AI chatbots are the subject of several lawsuits alleging contributions to teen suicides, and some chatbots have engaged in romantic conversations with minors. However, the White House’s call to “implement features” that reduce these risks leaves significant room for Congress to interpret.

One possible interpretation comes as two separate recommendations: Congress should “establish age-assurance requirements (such as parental attestation) for AI platforms and services likely to be accessed by minors,” and it should “empower” parents with tools to manage their child’s “privacy settings, screen time, content exposure, and account controls.” The policy vision seems to be that relevant AI services will check whether users are over the age of 18 and offer various parental controls if they are not. For example, White House AI adviser David Sacks praised both Apple’s and Google’s approaches to family accounts in an interview several days after the framework’s publication. However, the framework itself doesn’t commit to any particular methods for verifying a user’s age or parental relationship, stating only that age-assurance efforts should be “commercially reasonable” and protect privacy.

Communities

The framework continues with a section focused on safeguarding and strengthening communities. For example, Congress should “augment existing law enforcement efforts to combat AI-enabled impersonation scams and fraud” targeting seniors and other vulnerable groups. The underlying logic is straightforward: The government was already fighting scams and fraud before AI, and it should ensure those efforts account for how AI is changing the threat landscape. But the implementation details—especially the meaning of “augment,” which appears nowhere else in the framework—are unclear. Perhaps the White House envisions efforts similar to its March executive order combating scams and cybercrime with ties to transnational criminal organizations. The order mobilized relevant agencies to review regulatory frameworks, improve information sharing, and produce an action plan. Or perhaps the White House would support expanding liability for AI-related fraud, through bills such as the AI Fraud Accountability Act.

Another pair of recommendations centers on AI infrastructure. First, the framework asks Congress to ensure that data center buildouts do not raise electricity costs for U.S. households. According to Kratsios, the administration wants Congress to “codify the Ratepayer Protection Pledge,” a White House initiative that secured voluntary commitments from seven tech companies for the same purpose. Congress might follow through on this recommendation with the corresponding section of Blackburn’s draft bill, though many other relevant bills have been introduced in Congress.

Second, the framework asks Congress to “streamline federal permitting for AI infrastructure construction and operation.” While this could be a broad call to expedite permits across data centers, energy resources, and chip supply chains, the specific mentions of power generation and grid reliability suggest that energy is the primary concern—especially resources that connect directly to nearby data centers (“on-site and behind-the-meter power generation”). One relevant bill that could draw White House support is the Speed Act, which the House passed in December 2025 to make general reforms to environmental permitting. But even without enacting the Speed Act, the Trump administration has already taken significant steps to streamline permitting through executive authority.

The communities section also calls for Congress to support small businesses’ AI implementation by offering grants, tax incentives, and technical resources. This recommendation could result in several bills becoming law, as the House has already passed three bills that would offer AI-related guidance—though no financial resources—to small businesses. Further, Trump may soon sign legislation reauthorizing the Small Business Innovation Research and Small Business Technology Transfer programs, which enable federal agencies to distribute billions of dollars annually to small businesses, including for AI-related projects.

Notably, the framework construes AI’s national security implications as an issue related to U.S. communities. Congress should ensure that appropriate agencies possess “technical capacity to understand frontier AI model capabilities and any associated national security considerations” and establish plans to “mitigate potential concerns.” Kratsios later clarified that the administration specifically wants the relevant national security agencies to have “the expertise and the skills” to evaluate AI models. This suggests a focus on hiring technical talent, though the framework emphasizes consulting with the private sector. It also remains unclear which agencies the White House has in mind, and which AI-related national security concerns it takes most seriously. 

Creators

Another section focuses on creators’ intellectual property rights, beginning with AI training on copyrighted material. While the administration acknowledges that counterarguments exist, it believes this practice “does not violate copyright laws.” This is a simpler stance than the U.S. Copyright Office took in its tentative legal analysis in 2025, released shortly before Trump fired the office’s leader. However, the administration also “supports allowing the Courts to resolve this issue,” and it urges Congress not to intervene.

The White House seems to expect that, even with some unfavorable rulings along the way, courts will ultimately produce better outcomes for AI developers than any politically viable legislation from Congress. If that forecast proves false, the framework outlines a contingency plan: It advises Congress to carefully monitor copyright developments in courts and “evaluate whether, due to novel AI considerations, additional action beyond that proposed here is needed to fill potential gaps” or further protect creators. “Fill potential gaps” may be a euphemism for weakening creators’ leverage against AI companies in disputes.

The intellectual property section offers two specific legislative proposals, though they are the only recommendations in the entire framework that state “Congress should consider” rather than “Congress should.” One proposal is to create “licensing frameworks or collective rights systems for rights holders to collectively negotiate compensation from AI providers.” The framework emphasizes that any such legislation “should not address when or whether such licensing is required”—reinforcing the pattern of deferring to courts—and should shield rights holders from antitrust liability in their collective bargaining, since competitors normally cannot coordinate to set prices. 

The other substantive proposal is to protect individuals from “unauthorized distribution or commercial use of AI-generated digital replicas of their voice, likeness, or other identifiable attributes,” with some exceptions. This could be accomplished through legislation such as the No Fakes Act, which would create a federal right of publicity focused on digital replicas.

The Rest 

The remaining recommendations span several topics. For example, there is a short section focused on censorship. Congress should prevent the U.S. government from coercing technology providers to “ban, compel, or alter content based on partisan or ideological agendas.” Of course, there is some tension between this anti-coercion principle and the administration’s ongoing dispute with Anthropic. The White House also wants Congress to allow Americans to seek redress for agency efforts to “censor expression on AI platforms or dictate the information provided by an AI platform.” Much depends on how “censor expression” is defined, as a broad reading could curb even good-faith government efforts to prevent AI models from, say, assisting with cyberattacks.

Another section focuses on innovation. To start, the framework recommends establishing regulatory sandboxes for AI applications, an approach championed by Cruz’s Sandbox Act. The framework also states that Congress should “provide resources to make federal datasets accessible to industry and academia in AI-ready formats.” The White House may support the AI-Ready Data Act and the AI-Ready Bio-Data Standards Act, which were introduced in Congress in March and have similar objectives. Lastly, Congress “should not create any new federal rulemaking body to regulate AI, and should instead support development and deployment of sector-specific AI applications” through industry-led standards and existing regulatory bodies. The emphasis on supporting AI activity suggests that Congress generally shouldn’t regulate AI unless the associated burdens on industry are minimal. Additionally, by channeling federal oversight toward sector-specific regulators focused on downstream applications, this recommendation adds another shield against regulating AI development, complementing the preemption section’s restriction on states.

Just before preemption, the penultimate section focuses on education and workforce issues. Congress should ensure that existing federal programs “affirmatively incorporate AI training,” building on steps the administration has already taken. Additionally, Congress should support AI-related programs at land grant institutions. The section’s final recommendation is to “expand Federal efforts to study trends in task-level workforce realignment driven by AI,” mirroring the focus of Sen. Jim Banks (R-Ind.)’s AI Workforce Prepare Act. Overall, the White House seems focused on preparing Americans for future workforce demands, while carefully studying whether further action is warranted.

Notable Silences

Excluding the title page, the framework is three pages long. There are a little over 30 bullet points with recommendations for Congress. As discussed above, several of the recommendations are not-to-dos, others focus on preempting state laws, and many are open to interpretation. From the perspective of the AI industry, the most demanding recommendations are to ensure parental controls, protect children’s privacy and mental health, pay more for electricity, and coordinate with national security agencies. Perhaps the biggest threat to industry is the administration’s refusal to greenlight AI training on copyrighted data. Should a comprehensive federal AI bill be enacted exactly as described, it would be a highly industry-friendly law.

However, it’s unlikely that Congress will simply convert the framework into legislative text and pass it. The politics of AI includes many fierce debates, and this framework can be viewed as the White House’s North Star in subsequent negotiations. There will be opposition to some of the proposals and their industry-friendly nature. Blackburn’s discussion draft illustrates several ways in which members of Congress might disagree: It would sunset Section 230, create a duty of care for chatbot developers, block minors from using AI companions, and enact other measures that carry higher costs for industry.

Most importantly, the framework leaves a great number of significant AI policy issues largely unaddressed. Several high-stakes copyright questions are left to the courts. AI’s contributions to cyber and biological threats are lumped together under a single recommendation on national security concerns. In response to AI-related privacy issues, the framework offers some proposals to protect children, but nothing to protect adults. Safeguards against algorithmic discrimination receive implicit criticism in the recommendation that the federal government should not pressure providers to alter AI content based on “ideological agendas.”

Other issues receive no attention at all. There are no proposals for Congress to pass laws governing federal procurement and adoption of AI, including in controversial areas such as domestic surveillance and autonomous weapons. There is nothing on technical approaches for verifying whether online content is AI generated or human generated. Nor are there any recommendations on export controls governing the semiconductor supply chains that shape global AI progress.

The elephant in the room is preemption. The White House is proposing to erase many existing state AI laws and prevent many more that could arise in the future. And although the White House has offered many significant proposals for federal AI legislation, it has not come close to replacing every state law with a similar federal version. The resulting national framework, if passed, would reduce conflicts between state approaches—but it would also leave certain issues with significantly less policy coverage than they have today.

Jakub Kraus is a Tarbell Fellow writing about artificial intelligence. He previously worked at the Center for AI Policy, where he wrote the AI Policy Weekly newsletter and hosted a podcast featuring discussions with experts on AI advancements, impacts, and governance.
