Cybersecurity & Tech

Ted Cruz Has a Detailed Plan to Loosen AI Regulations

Jakub Kraus
Monday, January 26, 2026, 10:05 AM

The Sandbox Act would enable AI policy experimentation, part of a broader movement to remove constraints on the technology’s advancement.

U.S. Senator Ted Cruz of Texas speaking at the 2018 Conservative Political Action Conference in National Harbor, Maryland. (Gage Skidmore, https://flic.kr/p/24DyXZJ; CC BY-SA 2.0, https://creativecommons.org/licenses/by-sa/2.0/)

On artificial intelligence (AI), Sen. Ted Cruz (R-Texas) may be the most influential policymaker in the United States Senate. He shepherded the only major AI bill that Congress has passed since ChatGPT’s release in November 2022, and as chair of the Senate Committee on Commerce, Science, and Transportation, he controls the fate of many more: Approximately 37 percent of all AI bills in the Senate have fallen under his committee’s jurisdiction this Congress, compared to just 16 percent and 9 percent, respectively, for the next two most active committees. He is also one of only six Republican senators to have articulated an AI policy framework.

That framework makes his priorities clear: deregulation. Where some lawmakers have pushed for stronger oversight of AI, Cruz wants to cut federal rules, state laws, foreign regulations, environmental permits, and government “censorship” of AI-generated content. Each of these priorities connects to broader movements already reshaping U.S. tech policy.

In September, Cruz introduced the first step toward implementing his vision: the Strengthening Artificial Intelligence Normalization and Diffusion By Oversight and Experimentation (SANDBOX) Act. If it passes, the bill could reshape how the federal government regulates everything from credit decisions to self-driving cars.

The Sandbox Act’s Blueprint for Deregulation

At its core, the Sandbox Act would create a formal mechanism for companies to temporarily—and perhaps permanently—operate without complying with certain federal regulations that govern their AI-related activities. The process has three main parts.

First, companies would submit applications to temporarily waive specific federal regulations and guidance documents. Relevant federal agencies would then review these applications and decide whether to approve them, considering factors such as companies’ plans to “reasonably mitigate” any “reasonably foreseeable risks” that a regulatory exemption might yield. If an agency denied a waiver request, however, companies wouldn’t have to accept that decision—they could appeal to the White House Office of Science and Technology Policy (OSTP), which would have the final say over an application’s fate. Regulatory waivers would last for two years or less, but businesses could request up to four additional two-year renewals from OSTP, potentially amounting to 10 years in total.

Second, OSTP could establish streamlined procedures for companies seeking to waive particular regulations. The only constraint is that OSTP would need approval from the agencies that enforce or implement those regulations. But this obstacle is quite surmountable: If agencies rejected OSTP’s request, OSTP would have the power to appeal the rejection and decide whether to approve its own appeal.

The bill’s third major component tasks OSTP with submitting an annual “special message” to Congress, noting regulations that Congress should permanently rescind or amend based on observations from the sandbox program. Congress could make these recommendations permanent through a streamlined legislative process that requires only majority votes in the House and Senate—no filibusters or getting stuck in committee. Unlike the Congressional Review Act, which allows Congress to overturn recent rules, the Sandbox Act would let lawmakers target old regulations and make precise adjustments.

The Sandbox Act’s Reach and Limits

Critics of the legislation argue that the bill would leave consumers vulnerable to AI risks. However, it is important to be clear about what the bill does and doesn’t cover. It defines a covered provision as practically any federal agency regulation, including regulations that Congress expressly required agencies to adopt. However, the sandbox program cannot waive federal statutes, rescind state laws, or modify private rights of action that allow consumers to sue. Criminal liability also remains for offenses not expressly covered by a waiver.

Another reason the bill may prove less transformative than it seems is that the Trump administration has already moved to rescind some of the regulations that the sandbox program would target. During Trump’s first week in office, he revoked President Biden’s AI executive order and ordered a review of all actions taken pursuant to that order, aiming to resolve conflicts with the administration’s policy goal to “sustain and enhance America’s global AI dominance.” Within a month, the Department of Labor and the Equal Employment Opportunity Commission removed AI-related guidance documents from their websites. More generally, the Trump administration has pursued several strategies encouraging deregulation across the board, including a directive for agencies to plan to revoke 10 existing regulations for every new one they issue.

Still, there are plenty of regulations that companies may attempt to waive, as highlighted by the responses to OSTP’s request for information on federal rules hindering AI development and adoption. For example, the Chamber of Commerce recommended relaxing the requirement for credit lenders to explain why an application for credit was denied—some AI models are difficult to understand fully, so this rule limits their use. Business Roundtable urged the Federal Reserve to ensure that generative AI is not subject to risk management guidance from 2011 for banks using quantitative models. The Foundation for American Innovation highlighted that existing regulations in the transportation sector often assume that humans will be operating regulated vehicles, which could slow the advancement of autonomous cars, drones, ships, and more.

The Deregulation Rationale

Many of Cruz’s public arguments in favor of the Sandbox Act also apply to deregulating AI in general, not just with regulatory sandboxes. In an op-ed supporting the bill, Cruz emphasized U.S.-China competition, arguing that allowing China’s AI industry to lead the world risks “a global order where freedom is eclipsed by state-run surveillance and coercion.” While his statement may be somewhat rhetorical, it’s true that AI leadership could benefit U.S. national security and economic growth. And it’s true that deregulation can sometimes support AI development significantly. But other effective strategies for strengthening America’s position vis-à-vis China involve adding rules, not subtracting them. For example, the CHIPS Act is a fairly complex piece of legislation that few would call “deregulation,” yet its policies boost U.S. competitiveness in AI. Similarly, Sen. Tom Cotton’s (R-Ark.) proposed Chip Security Act would require advanced AI chips to include location-tracking features to improve export-control enforcement—a regulation that could help America maintain an edge in computing power. Even direct constraints on AI companies might increase public trust and adoption by spurring innovation in safety.

Another case for deregulation appears in a 2024 op-ed by Cruz and former Sen. Phil Gramm (R-Texas) titled “Biden Wants to Put AI on a Leash.” The piece opens by praising former President Clinton’s light-touch approach to the early internet. As president, Clinton published the Framework for Global Electronic Commerce, which espoused principles like “The private sector should lead” and “Governments should avoid undue restrictions on electronic commerce.” He also signed the Telecommunications Act of 1996, giving online platforms a liability shield known as Section 230, and signed the Internet Tax Freedom Act of 1998, which temporarily barred new state taxes on internet access and certain online sales. Cruz and Gramm contend that this “hands-off approach ... unleashed extraordinary economic growth and prosperity.” The argument carries weight, but early federal internet policy was not uniformly laissez-faire: The Justice Department’s antitrust case against Microsoft restricted the behavior of a dominant business; the Digital Millennium Copyright Act included both regulatory and deregulatory provisions; and the Children’s Online Privacy Protection Act imposed meaningful privacy requirements. It is also worth asking whether early internet policy approaches continued to yield “extraordinary” benefits after the rise of social media. A 2024 survey found that most U.S. teens ages 13 through 17 use social media, and among those users, 56 percent say it hurts their nightly sleep, while only 5 percent say it helps.

Cruz has extended his reading of early internet policy further, arguing in a hearing that while Clinton pursued deregulation, “EU countries pursued a series of heavy-handed regulations” that allowed the U.S. to produce far more global tech companies. This narrative glosses over details. In recent years, large tech firms have faced substantial fines for violating EU laws. Further, a 2025 survey commissioned by a trade association found that 59 percent of small European tech businesses developing AI reported that government regulation has delayed their AI product development, compared to just 44 percent of U.S.-based developers. But these are relatively recent phenomena. Yes, a few relevant EU regulations took effect before the digital economy exploded, such as the 2002 ePrivacy Directive, which may have dampened venture capital investment in online advertising companies—but most major EU tech regulations emerged long after American tech giants had grown dominant, so they cannot fully account for the transatlantic tech gap.

Where Cruz has a stronger point is that when EU-wide policy is absent, European businesses seeking to scale their services must navigate a multilingual patchwork of 27 different regulatory environments across EU member states. More broadly, policy can influence innovation through channels that discussions of “EU tech regulation” often overlook: visa pathways, investment rules, liability frameworks, copyright regimes, zoning laws, tax rates, tariffs, and so forth. Direct constraints such as the ePrivacy Directive certainly mattered, but many other important factors limited Europe’s ability to develop a global-scale tech sector.

Beneath these specific arguments seems to lie a belief that AI development is generally good for the world. For example, in Cruz’s op-ed supporting the Sandbox Act, he highlights how AI-driven diagnostic tools can help patients recover from strokes. Indeed, given the general-purpose nature of AI technology, advancements could unlock a wide range of opportunities across sectors such as health care, education, transportation, scientific research, and beyond. These plausible benefits underscore an important point: When AI regulations slow innovation, real people bear real costs.

One argument for deregulation that Cruz does not emphasize is that regulations should evolve to keep pace with technological change. Many old rules extend to AI in practice, but this does not mean they suit the new technology well. Deregulation promises to remove or amend laws that simply don’t make sense for AI, such as transportation regulations that assume human drivers. However, there is a wide gap between removing laws entirely and adapting them to meet the moment—arguably, self-driving cars need new standards, not no standards. Cruz’s Sandbox Act allows Congress to do both, but it does not seek to encourage modification over revocation. This may be a mistake.

The Broader Deregulatory Movement

Cruz released the Sandbox Act as the first step toward implementing his broader AI policy principles, which he published alongside the bill under the heading, “Pillars for a Light-Touch Regulatory Framework for AI.” True to that title, Cruz’s framework emphasizes several forms of deregulation for AI, each reflecting broader movements already underway in U.S. tech policy.

The first principle is to create an AI regulatory sandbox program at the federal level. This is precisely the goal of the Sandbox Act, but it’s not a novel concept: The world already has several AI regulatory sandboxes that are either operational or in the works, in places such as Spain, Brazil, Utah, Singapore, the U.K., and Texas. Arguably, the most significant effort is in Europe, where the AI Act requires member states to establish regulatory sandboxes at the national level by August 2026.

Another principle is to streamline permitting for AI infrastructure, which again connects to efforts beyond Cruz’s office. The Trump administration’s AI Action Plan made a similar call, defining infrastructure as “factories to produce chips, data centers to run those chips, and new sources of energy to power it all.” It praised the president’s focus on “energy dominance,” which has been backed up by many executive orders—though this agenda often excludes or harms solar and wind projects. Congress has also been involved on the energy front under both Biden and Trump: The One Big Beautiful Bill Act allowed energy projects to pay fees in exchange for shorter environmental reviews; the Accelerating Deployment of Versatile, Advanced Nuclear for Clean Energy (ADVANCE) Act took steps to streamline regulations governing nuclear power; and the Building Chips in America Act waived environmental requirements for chip manufacturing projects under the CHIPS Act. Both Biden and Trump also issued executive orders expediting the construction of AI data centers.

Cruz’s framework also endorses “Anti-Jawboning” and the need to “Stop Government Censorship,” which links AI to ongoing debates about content moderation and free expression online. These debates have unfolded across multiple fronts. In 2024, the Supreme Court held that plaintiffs lacked standing to challenge the Biden administration for allegedly pressuring social media platforms to remove certain content. During his second term, President Trump has secured tens of millions of dollars from Meta and YouTube to settle litigation over their suspensions of his social media accounts. The Trump administration has also shuttered several government projects combating foreign disinformation and made plans to review tourists’ social media histories. Yet until now, these battles have been fought almost entirely over human speech. How the First Amendment applies to AI models and their outputs remains largely uncharted.

Moving beyond the U.S., Cruz’s framework also seeks to “Counter Excessive Foreign Regulation of Americans.” Again, there’s a clear connection to President Trump, who issued an executive order last year outlining plans to impose tariffs and take other “responsive actions” when foreign governments impose significant burdens on U.S. companies. In August 2025, Trump wrote a social media post explicitly threatening to impose “substantial additional Tariffs” and institute export controls on America’s “Highly Protected Technology and Chips” if countries did not remove “Digital Taxes, Digital Services Legislation, and Digital Markets Regulations.” Months later, after trade talks with European officials, U.S. Secretary of Commerce Howard Lutnick told reporters that the EU should “analyze their digital rules” and “find a balanced approach that works with us” in order to earn lower tariffs on steel and aluminum. The Trump administration has also lobbied to ease the EU AI Act’s requirements.

Besides federal regulatory sandboxes, speedy infrastructure permits, minimal information controls, and lighter foreign regulations, Cruz also seeks to “Clarify Federal Standards to Prevent Burdensome State AI Regulations,” which likely refers to the ongoing debate over federal preemption of state AI laws. Over the summer, Congress considered adding a provision to the One Big Beautiful Bill that would have prevented states from enforcing certain AI laws for a decade. After efforts to include preemption language in the annual defense bill fell through, President Trump issued an executive order directing agencies to curb state AI laws that “threaten to stymie innovation.” The number of significant state AI laws continues to grow with the passage of California’s Senate Bill 53, making this debate increasingly consequential.

Together, these principles form the deregulatory core of Cruz’s vision. That said, Cruz’s AI framework does not entirely ignore risk. The clearest example is his call to “Protect Americans Against Digital Impersonation Scams and Fraud.” He also aims to “Expand Take It Down Act Principles to Safeguard American Schoolchildren.” However, what this means in practice is unclear: The Take It Down Act, which Cruz led into law, already protects schoolchildren against sexually explicit deepfakes. And while Cruz seeks to “Defend Human Value and Dignity,” his associated recommendations to “Reinvigorate Bioethical Considerations in Federal Policy” and “Oppose AI-Driven Eugenics” are somewhat ambiguous.

*   *   *

With his Sandbox Act and AI framework, Cruz has become a leading voice in broader debates over how much room to give the technology to grow. His answer, in his own words in May 2025, is fairly simple: “To lead in AI, the United States cannot allow regulation, even the supposedly benign kind, to choke innovation or adoption.”

Whether that approach proves wise depends on questions that are difficult to answer. How much are existing regulations actually slowing beneficial AI activity? Will consumers face new risks if those rules are waived? And when old laws fit poorly with new technology, is the right response to eliminate them entirely—or to adapt them to the present?

These are the choices that U.S. policymakers are already beginning to confront. The decisions they make will shape not only America’s competitive position but also how AI touches the lives of ordinary people. Cruz has laid out one vision, but its merits may not be clear until long after the sandbox experiments have run their course.


Jakub Kraus is a Tarbell Fellow writing about artificial intelligence. He previously worked at the Center for AI Policy, where he wrote the AI Policy Weekly newsletter and hosted a podcast featuring discussions with experts on AI advancements, impacts, and governance.