Digital Freedom Depends on Access Rights
Twenty-five years ago, legal scholar Lawrence Lessig warned that the architecture of technology—what he called “code”—could become one of the greatest threats to modern liberty, if not for the potential of markets and norms to moderate that risk. He turned out to be half right: Code has since done what he feared it might, but markets have only amplified these effects, and norms have done little to constrain them.
We’re now left with law as a last resort, but it may be enough to secure a future in which AI agents act as a counterbalance to tech’s current trajectory. AI agents—systems that work intelligently and autonomously for a user—are the next step after large language models (LLMs): They turn the output of those models into a coherent sequence of real actions. We now face a critical choice about what they will become: The legal framework we give them will determine whether this individualized power tempers the current lawlessness of our digital lives or renders us all the more subject to it.
Take privacy law, for example. Over the past century, it strengthened along parallel tracks: From the protection of sealed mail in 1917 to the unanimous warrant requirement of United States v. U.S. District Court in 1972, an understanding eventually emerged that governments couldn’t intrude into private lives without justification. Similarly, William Prosser’s torts in the 1960s became the foundation of private claims to be let alone. But now, in the face of digital technology, those gains are eroding.
As our lives move from spaces governed by government to spaces governed by private actors, the distinction between public and private is collapsing—imperiling our rights within each category. On the public front, the federal government has worked to merge datasets holding highly sensitive information, making them readily available to its workers. On the private front, for-profit data gathering has become endemic—yet Prosser’s torts have not been updated, nor federal legislation enacted, to account for a fundamentally new mode of private intrusion.
Because private actors control the digital spaces of our lives and set their rules, governments turn to them to do things they can’t do on their own: Until public scrutiny shut the program down, law enforcement was able to sidestep warrant requirements for personal flight data by simply buying that data from airlines, and the Department of Justice can censor the speech of private citizens by asking Apple to remove apps it doesn’t like from the App Store. Private actors have power akin to government’s, and they use it as they see fit.
The same capabilities that abet massive information collection also enable centralization and control. That is how we’ve ended up in a world where credit card processors decide, in practice, which categories of goods can and can’t be sold online, and where social media companies almost uniformly employ practices meant to hijack our attention and addict users.
The point isn’t that technology companies are misbehaving (though they often are)—it’s that their control of digital technologies gives them an unprecedented amount of power over our lives. We must find a way to wrest it back if we want to be free. AI agents provide a possible solution.
The Role of AI Agents
Consider two competing visions. A social media feed is an AI system trained to addict you; an airline’s pricing algorithm is often an AI system that dynamically changes fares to maximize your spending. Both concentrate power in the platforms. A loyal AI agent does the opposite: It filters your social media feed as you prefer, or shops flights across times and airlines to tilt the negotiating balance back in your favor. Without AI agents pushing back in this way, predatory AI systems will only entrench the first vision.
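To make the contrast concrete, here is a minimal sketch of what a loyal, user-side filtering agent might look like. Everything in it is illustrative: no platform exposes this feed structure, and the keyword-based matches_preferences function is a crude stand-in for the judgment a language model would supply.

```python
# Illustrative sketch of a user-side feed-filtering agent.
# The Post structure and preference list are invented for this example,
# and the keyword scorer stands in for a call to a language model.

from dataclasses import dataclass


@dataclass
class Post:
    author: str
    text: str


def matches_preferences(post: Post, blocked_topics: list[str]) -> bool:
    """Stand-in for an LLM judgment: does this post serve the user?"""
    text = post.text.lower()
    return not any(topic in text for topic in blocked_topics)


def filter_feed(feed: list[Post], blocked_topics: list[str]) -> list[Post]:
    """Keep only the posts the user wants, in arrival order --
    no engagement-maximizing re-ranking."""
    return [post for post in feed if matches_preferences(post, blocked_topics)]


if __name__ == "__main__":
    feed = [
        Post("friend", "Photos from this weekend's hiking trip"),
        Post("advertiser", "Limited-time outrage: you won't believe this"),
    ]
    for post in filter_feed(feed, blocked_topics=["outrage", "limited-time"]):
        print(f"{post.author}: {post.text}")
```

The point of the sketch is where the logic lives: The ranking criteria belong to the user, not to an engagement-maximizing platform.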
A specific category of AI agents, called coding agents, creates software rather than acting directly on a user’s behalf. In a few years’ time, most people will be able to quickly and cheaply generate custom software for themselves and others. But useful software depends on access to other systems and to your data. Under the current rules, as set by the platforms, that access will be capricious at best.
This, in turn, means that software that conflicts with the intentions of the major tech platforms will be blocked. As it stands, data brokers can package and sell your data and you have no federal right to intervene. Health providers obfuscate costs so that you cannot shop for the care you can afford. Content and social media platforms can permanently hold your content, without a right of rescission.
Platforms are using their power to deny personal AI agents access to the systems and data they need to work. Amazon blocks ChatGPT and other AI tools. Salesforce changed its terms of service to prevent companies like Glean, an AI search tool, from using data from Slack (which Salesforce owns), while negotiating deals for preferential rights to data on other platforms. This appears to be an ideal arrangement for the tech giants: Shutting down challengers while using their market dominance to re-create the products they just crushed.
This was the core insight of the EU’s Digital Markets Act. Whatever its flaws, it squarely recognized that this kind of power amounts to a form of private government, and that a citizenry without the capacity to resist cannot be said to be free. When our interests align with a platform’s, this is less of an issue; but when they diverge, the human stakes can be exceedingly high. When children are driven to suicide by chatbots whose operators disclaim responsibility, or when private data collection creates the conditions for warrantless searches and detentions, liberty is on the line.
Narrowing our choices also opens the door to other harms. Our data is used against us in an increasingly lopsided manner, especially when we can be individually targeted and when algorithms render us legible to a degree that companies would be embarrassed to admit publicly.
Moreover, the legal tools that might have worked elsewhere are no longer effective in this sphere. Antitrust, to the extent that enforcement and remedies have any bite, corrects only for the abuses of monopoly. But the denial of access that interoperability rights are meant to reverse is an industrywide practice that has been sustained by convention and perpetuated by convenience and power. It’s not a consumer choice problem—at least not among platforms—when what’s lost isn’t the existence of a single interoperable platform, but the broad-based ability to exercise agency anywhere in our digital lives.
Similarly, non-discrimination rights—the primary tools to remedy denial of access in the offline world—are inert here, because this erosion of rights does not discriminate. Some of the impacts may be disparate in their effect, but this is a broad-based disenfranchisement. Focusing only on protected categories would be both oddly perverse and exceedingly hard to demonstrate absent the access rights that we’re fighting for in the first place.
What we need instead is a specific right of access to both our data and the systems we depend on for digital life, so that platforms can neither discriminate against anyone on the basis of the tools they use for access nor ignore our wishes about what data they hold on us. This is a frank departure from the present free-for-all, in which U.S. platforms have very few obligations regarding data access and can use their terms of service to legally prohibit the use of third-party software—which naturally encompasses AI agents—to access their services.
The Regulatory Landscape
The most prominent interoperability and data access regulations under consideration in Congress or at the state level do not cede this ground. The bipartisan ACCESS Act, reintroduced by Sen. Mark Warner (D-Va.) and co-sponsored by Sens. Josh Hawley (R-Mo.) and Richard Blumenthal (D-Conn.), would require that platforms above a certain size create interfaces that third-party agents can use to evaluate, and potentially remove, the personal data those platforms have collected. It was designed specifically to leave platforms unencumbered in protecting users. Other bills under consideration at the state level encode a right of access more directly.
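In practice, a right like this would be exercised through programmatic interfaces. The sketch below shows roughly what a client for such an interface could look like; the base URL, endpoint paths, token handling, and response shape are all hypothetical, since the ACCESS Act specifies obligations rather than a concrete API.

```python
# Hypothetical client for the kind of data-access interface an access
# right might mandate. The URL, paths, and JSON shape are invented for
# illustration; no platform exposes this API today.

import json
import urllib.request

BASE_URL = "https://platform.example/portability/v1"  # hypothetical endpoint


def _request(method: str, path: str, token: str) -> dict:
    """Make an authenticated call to the hypothetical portability API."""
    req = urllib.request.Request(
        BASE_URL + path,
        method=method,
        headers={"Authorization": f"Bearer {token}"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)


def list_my_records(token: str) -> list[dict]:
    """Enumerate the personal data the platform holds about the user."""
    return _request("GET", "/records", token)["records"]


def delete_record(token: str, record_id: str) -> dict:
    """Ask the platform to delete a specific record it holds."""
    return _request("DELETE", f"/records/{record_id}", token)
```

The plumbing is trivial; the substance of the right lies in who controls the credential and whether the platform may refuse or degrade such calls.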
Absent these rights, users are vulnerable to the whims of the platforms. It’s worth recalling that the ACCESS Act was first drafted in response to the Cambridge Analytica scandal, in which a consulting firm gained unauthorized access to the data of millions of Facebook users to target them for political advertising.
But what about the rights and claims of the platforms themselves? Worries about patchwork state regulation are valid, but the claim that states have passed over a thousand AI bills is questionable, and the active funding of efforts to kill federal regulation, such as the Meta and OpenAI anti-regulatory PACs, shows that the concern is being pressed in bad faith.
There is no such thing as a neutral regulatory environment. Section 230 of the Communications Decency Act prefigured the modern internet. With a single holding, the Supreme Court ended the Chevron doctrine and narrowed the scope of federal agency decision-making. As Lessig noted, “When the interests of government are gone, other interests take their place.” We won’t have the capacity to represent our own interests against tech platforms if we buy the idea that government intervention is always bad (and private interests are always good).
This belief puts the interests of tech founders, investors, and some of their workers above those of their fellow citizens. It pretends away the incursions on liberty that come when we lose the cover of responsible government as we move through digital life. And it fails to recognize that a much richer vision of technology and society becomes possible when we protect fundamental rights while leaving markets to work.
***
We are at a crossroads. AI agents are giving individuals powers once exclusive to large tech platforms. What legal framework will govern them? Inaction—the current default—lets platforms decide, and the decades of experience since Lessig’s alarm show they will not prioritize personal freedom. The alternative is to learn from the internet’s early successes: open protocols and shared data that provide autonomy. We can establish structures that let personal AI agents help us reclaim control over the digital spaces where we increasingly live, work, and commune. Choosing not to will mean surrendering our freedom as we make one of our largest steps yet into the digital future.
