Toward a Federal Framework for Online Age Assurance
Age assurance legislation has stumbled amid breaches and backlash. Congress now has a chance to break this pattern. Here’s how.
It may well be the best of times and the worst of times for age assurance. On the latter end of the spectrum, in July 2025, the United Kingdom rolled out new online age-gating requirements. The move triggered a 500,000-signature repeal petition within days, and downloads of some virtual private network (VPN) services spiked by over 1,000 percent. More recently, after Discord announced it would begin requiring age verification in March 2026, users threatened to leave in droves. High-profile breaches have only deepened public skepticism: In February, age verification vendor Sumsub disclosed a July 2024 breach it had only just discovered—18 months after attackers first gained access.
Yet age assurance—which broadly encompasses efforts to determine a user’s age online—is also unmistakably in vogue, particularly within the United States. The steady drumbeat of whistleblowers, exposés, research, and tragedies—now compounded by emerging artificial intelligence (AI) risks—has pushed states such as Arizona, California, Ohio, and Texas to adopt aggressive new age verification mandates. And in Free Speech Coalition v. Paxton, the Supreme Court upheld Texas’s age verification law as only an incidental burden on adult speech, emboldening lawmakers who once doubted the policy’s legal footing. On the technical side, age assurance and digital credentials have seemingly come of age, with some now concluding that privacy-preserving methods are technically mature and deployable at scale.
This turbulence reflects four conflicting convictions: first, that children face unique harms online; second, that children retain meaningful rights to speak and to access information; third, that parents struggle to steward their children’s digital environments; and fourth, that the public recoils at any intervention resembling surveillance or censorship.
Against this backdrop, the House Energy and Commerce Committee’s recent hearing discussing 19 online child safety bills, several of which contain federal age assurance frameworks, represents a serious attempt to grapple with these competing imperatives. Three bills contain explicit age assurance provisions—the Shielding Children’s Retinas from Egregious Exposure on the Net (SCREEN) Act, the App Store Accountability Act (ASAA), and the Parents Over Platforms Act (POPA).
Together, the bills reflect different approaches to balancing goals that have long seemed irreconcilable: protecting children without undermining free expression, empowering parents without overwhelming them, and distributing responsibility across app stores and platforms without constructing an unworkable compliance maze. They offer a meaningful starting point, and, with targeted refinements, Congress can achieve the balance that Americans are aiming for in their often-contradictory desires.
Where Should Age Assurance Occur?
Before refining any one model, Congress must resolve a foundational question: At what layer of the stack should age assurance occur? The current package advances three different possibilities: Under the SCREEN Act, age verification takes place at the platform level—the individual service hosting content deemed harmful to minors. In ASAA, age assurance happens at the app-store level: Apple and Google (or similarly scaled “covered app store providers”) verify age once, manage parental controls, and pass age signals downstream to app developers. POPA combines elements of both: It centers parental oversight in the app stores—but since app stores only have to collect unreliable self-attestations from the user, app developers are required to conduct their own age check for anyone who claims to be an adult if the app has any adult-only features. This would create redundancies, as many users would have to go through two rounds of age assurance under POPA because the app store is not required to provide an authoritative signal.
If Congress must settle on a single approach, implementing age gates at the highest feasible layer of the stack—the app-store or operating system level—is generally the most effective, privacy-conscious, and scalable approach. Of the proposals under consideration, the App Store Accountability Act offers the most coherent foundation. ASAA’s scope is limited to major gatekeepers with more than 5 million U.S. users—companies that already manage device-level identity, payments, parental controls, and security. They are well positioned to run an age check once, and to provide developers with standardized, interoperable age-category signals via an application programming interface (API). Additionally, this avoids pushing verification responsibilities onto thousands of smaller developers who may lack the technical capacity to properly protect users’ data and the capital necessary to afford costly age verification vendors.
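To make the idea of a standardized, interoperable age-category signal concrete, here is a minimal sketch of what a store-level API payload might look like. The bracket names, cutoffs, and consent semantics are hypothetical illustrations, not drawn from ASAA’s text:

```python
from dataclasses import dataclass

# Hypothetical age brackets an app store might expose to developers.
CATEGORIES = ("child", "young_teen", "older_teen", "adult")

@dataclass(frozen=True)
class AgeSignal:
    """Minimal payload a store-level API might return for a user."""
    category: str          # one of CATEGORIES; no birthdate, no identity
    parental_consent: bool # whether a parent approved the app for a minor

def signal_for(verified_age: int, consent: bool = False) -> AgeSignal:
    """Map a verified age onto a coarse bracket once, at the store level."""
    if verified_age < 13:
        cat = "child"
    elif verified_age < 16:
        cat = "young_teen"
    elif verified_age < 18:
        cat = "older_teen"
    else:
        cat = "adult"
    # Adults need no parental consent; minors carry whatever the parent set.
    return AgeSignal(category=cat, parental_consent=consent or verified_age >= 18)
```

The key design point is that the store verifies once and developers consume only the coarse bracket; the raw age and the underlying verification evidence never leave the gatekeeper.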
Centralizing parental controls at the app-store level also meaningfully reduces the practical burden of monitoring and managing a child’s digital life. Few families can realistically configure dozens of app-specific controls, but they can vet downloads and manage permissions from a single dashboard. Even if the information app stores provide is relatively high level, parents are far more likely to use (and keep using) a unified control panel than a patchwork of app-specific tools.
This approach also eases enforcement. Enforcers can monitor a small number of large gatekeepers far more effectively than thousands of service-level actors. By contrast, age assurance requirements imposed at the level of individual sites and platforms—as POPA and SCREEN propose—could be easily circumvented, creating enforcement gaps that dilute their effectiveness. Louisiana’s Act 440, which required age verification for adult websites, provides an instructive case study. After the law went into effect, traffic decreased on sites that complied with the law, accompanied by substantial increases in traffic on noncompliant sites. Some of this exodus may reflect adults unwilling to submit to verification rather than minors evading restrictions, but it illustrates a basic enforcement dynamic: When compliance is uneven, users gravitate toward the least-regulated option. A federal framework that anchors age assurance at the app-store level would narrow these enforcement gaps substantially, reducing the incentives for users to migrate to opaque or offshore platforms.
However, even the app store approach will leave gaps. Parental controls are only as effective as the parents who use them. For most online content, that is the right trade-off: Congress should give families better tools, not try to centrally manage every aspect of children’s online lives. But there is a limited category of material—commercial pornography and other content that is obscene to minors—where parental vigilance alone may be insufficient, and where even small leaks through parental controls carry outsized risks. Additionally, many commercial pornographic services do not operate through app stores at all; both Apple and Google have (admittedly poorly enforced) restrictions on apps that contain pornography. Instead, these services overwhelmingly operate through the open web.
For these high-risk services, a complementary, narrow, platform-level obligation of the kind the SCREEN Act envisions can function akin to a targeted browser-level backstop to ASAA. Congress should consider sharpening SCREEN’s scope from its open-ended “harmful to minors” formulation to more precise categories tied to existing obscenity and commercial pornography standards. These frameworks would not be redundant: Platforms in app stores could generally rely on the store’s age signal, while the SCREEN Act would primarily reach the web services that app stores do not.
Legislative Fixes to Cross-Cutting Issues
After settling on ASAA’s approach—with an attenuated, focused SCREEN Act to fill in the gaps—Congress should address several cross-cutting issues common to all frameworks.
First, ambiguous standards—which seemingly require dead-on accuracy in their plain text—risk pushing companies toward intrusive methods of age verification, including vetting of official documents and IDs. The SCREEN Act requires methods that “prohibit a minor from accessing” harmful content, which implies near-perfect certainty. The Parents Over Platforms Act conditions liability protections on undefined “good faith.” ASAA requires methods “reasonably designed to ensure accuracy,” with no guidance on what accuracy means in practice. These vague formulations are understandable, but they encourage risk-averse actors to gravitate toward the most legally defensible and intrusive method available.
Rather than asking a federal regulator to promulgate accuracy standards, Congress has already proposed a more flexible and politically viable mechanism: the Kids Internet Safety Partnership (KISP). The KISP Act directs the secretary of commerce to form a collaborative body that brings together industry, researchers, parents, state enforcers, and child-safety experts to identify “widely accepted and evidence-based practices” for addressing online risks for children, which could serve as guidance for implementing statutory child-safety obligations in ways that are interoperable and privacy-protective.
Accuracy rates need not be absolute to be effective, because harm scales with age. We do not think of a 17-year-old and a 12-year-old as equally vulnerable. Policy should reflect this. Age estimation techniques that approximate a user’s age range without using government identification, such as facial age estimation, phone-number-based checks, and email verification, can be extraordinarily accurate for younger children, and are only increasing in accuracy—but they are less precise near the age-of-majority threshold. KISP is well positioned to help flesh out age-bracketed accuracy expectations that are more demanding for younger children than for older teens, freeing platforms to use highly effective non-ID age estimation methods.
Congress should make explicit that covered entities may satisfy their accuracy obligations by offering users more than one assurance pathway, including at least one non-ID method where feasible. Not all users have IDs, and many parents prefer less data-intensive alternatives. A well-resourced entity like a large app store is uniquely capable of offering users a menu of highly accurate age assurance modes. KISP can help clarify how different kinds of age checks can be deployed and layered in ways that meet statutory goals while respecting the wide range of user comfort levels. For example, several age assurance providers use a “waterfall approach,” which involves applying progressively more rigorous (and intrusive) methods only when initial, lower-friction methods are inconclusive. User choice is crucial: If people are forced into a single method they are uncomfortable with, many will simply opt out, chilling lawful speech and driving people away from compliant services, or toward VPNs that may pose a greater privacy risk.
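The waterfall approach described above can be sketched in a few lines. Each method returns a decision or declares itself inconclusive, and more intrusive methods run only when everything before them fails. The specific methods, signals, and thresholds below are hypothetical, not taken from any vendor’s actual product:

```python
# Each check returns True (adult), False (minor), or None (inconclusive).

def email_tenure_check(user):
    # Low friction: an email account older than 18 years implies an adult;
    # a younger account proves nothing either way.
    years = user.get("email_account_years")
    if years is not None and years >= 18:
        return True
    return None

def facial_estimate_check(user):
    # Facial age estimation is accurate for young children but fuzzy near
    # the age-of-majority threshold, so a wide band around 18 stays
    # inconclusive and falls through to the next method.
    est = user.get("estimated_age")
    if est is None:
        return None
    if est >= 23:
        return True
    if est <= 13:
        return False
    return None

def id_document_check(user):
    # Most intrusive method; reached only when everything above fails.
    age = user.get("id_age")
    return None if age is None else age >= 18

def waterfall(user, methods=(email_tenure_check,
                             facial_estimate_check,
                             id_document_check)):
    for method in methods:
        result = method(user)
        if result is not None:
            return result
    return False  # default-deny when every method is inconclusive
```

The ordering encodes the policy trade-off: most users clear the gate with low-friction signals, and document checks are reserved for the residual hard cases rather than imposed on everyone.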
Additionally, Congress should require app stores to provide an appeals process for correcting erroneous age determinations. If an adult is misclassified as a minor, users may lose access to protected speech and be unable to download certain apps entirely. Mandating that covered entities have a low-friction, timely mechanism for correction would not only improve consumer experience but also protect the bill from legal challenges. Timeliness is especially critical because data retained for manual review poses security vulnerabilities: Discord’s 2025 breach exposed government IDs that persisted in customer service systems for users who had appealed because their automated checks had failed.
This legislation must also address the privacy risks associated with age assurance. Whichever method is used, users must present sensitive information, such as biometrics, government-issued IDs, or other personal data. Retaining this information creates high-value targets for hackers—not necessarily on the regulated platforms themselves, but with the vendors they rely on. Two of the highest-profile age assurance breaches illustrate this risk: Discord’s 2025 breach involved its third-party provider while the other exposed AU10TIX, a vendor serving TikTok, X, Uber, LinkedIn, PayPal, Fiverr, and Coinbase. Congress should therefore require that app stores both refrain from retaining age assurance data and contract only with vendors that meet strict data minimization and deletion standards.
Regulators should ensure that effectively monitoring age assurance vendors for misconduct does not ironically heighten breach risk. Australia’s August 2025 age assurance trials found that some vendors were overcollecting personal information, anticipating they would need this data to meet regulators’ demands during future compliance investigations. Vendors absolutely need to maintain records to rebuff accusations of inaccuracy and bias, and to facilitate investigations of misconduct, but this does not have to come at the cost of users’ privacy. Legislation and guidance can make clear that enforcers’ needs can be met through less data intensive approaches, such as aggregate accuracy reporting and sampled audits, ensuring that compliance cannot be used as a pretext for retaining individualized verification records.
Policymakers can further reduce sensitive data exposure and the number of verifications a user performs by leveraging the rapid progress states and major technology companies have made in developing privacy-preserving digital identity tools. With these tools, a one-time identity verification produces an age credential that can be reused across multiple services and contexts. Importantly, they do not let the government or tech companies track users’ activity, nor do they require platforms to learn a user’s identity. Louisiana’s LA Wallet, for example, enables residents to send platforms a binary age signal (minor or adult) derived directly from the user’s driver’s license, without revealing any other identifying information. The state’s system does not see which website initiated the request; instead, age challenges are routed through an intermediary and processed over a secure channel. The platform and the LA Wallet app never communicate directly, ensuring that neither party can infer anything about the user’s identity beyond the fact that they meet the required age threshold.
Similarly, Google Wallet and Apple Wallet use zero-knowledge proofs, which allow users to prove they meet an age threshold without revealing any additional information. In both systems, the user’s ID and the data derived from it are stored on users’ devices—not with Google or Apple—which means that companies cannot track when and where individuals use their IDs. Congress should allow app stores to rely on trusted signals like these that meet strict privacy and double-blindness standards as a permitted age check input, rather than insisting they rebuild the entire assurance stack themselves.
But even with robust data deletion and data minimization efforts, privacy concerns about traceability remain. Under ASAA, developers need to know a user’s age status in real time in order to ensure they solicit parental consent for in-app purchases and notify minors’ parents of significant changes to the app, which means app stores will have to occasionally send age and parental consent signals to apps after an app is downloaded. Even if a discrete age signal is not identifying on its own, apps will need to maintain age records associated with users, and the repeated use of a stable, account-linked identifier across contexts can create a mosaic that allows minors’ identities to be inferred or their online behavior to be reconstructed.
A more privacy-protective approach—reflected in standards abroad such as France’s requirement that platforms offer at least one double-blind, unlinkable age assurance option—is for app stores to transmit short-lived cryptographic tokens that convey only the necessary age category and whether parental consent has been provided, without being traceable to broader activity. This is not hypothetical: Anonymous credential systems (such as IBM’s Idemix and Microsoft’s U-Prove), selective-disclosure standards (such as the Internet Engineering Task Force’s SD-JWT), and token-based protocols (such as Privacy Pass) already allow issuers to attest to properties without revealing identity or creating a cross-site identifier. Congress should not prescribe a specific architecture, but it should encourage app stores to transmit age signals using unlinkable methods. Congress can also rely on KISP to develop best practices for unlinkable age signaling and token expiration.
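A short-lived age token of the kind described above can be illustrated with a simple sketch. Note the deliberate simplification: this uses a shared-key MAC purely for readability, whereas real unlinkability requires blind signatures or anonymous credentials (as in Privacy Pass) so the issuer cannot link a token it issued to the place it is redeemed. All field names and the five-minute lifetime are hypothetical:

```python
import base64
import hashlib
import hmac
import json
import secrets
import time

SECRET = secrets.token_bytes(32)  # issuer-side key (app store)
TTL_SECONDS = 300                 # tokens expire quickly, limiting reuse

def issue_token(age_category: str, parental_consent: bool) -> str:
    """Mint a token carrying only the age bracket and the consent bit."""
    payload = {
        "cat": age_category,            # coarse bracket, nothing more
        "consent": parental_consent,
        "exp": int(time.time()) + TTL_SECONDS,
        "nonce": secrets.token_hex(8),  # fresh per token: no stable identifier
    }
    body = base64.urlsafe_b64encode(json.dumps(payload).encode())
    tag = hmac.new(SECRET, body, hashlib.sha256).hexdigest()
    return body.decode() + "." + tag

def verify_token(token: str):
    """Return the age claims if the token is authentic and unexpired."""
    body, tag = token.rsplit(".", 1)
    expected = hmac.new(SECRET, body.encode(), hashlib.sha256).hexdigest()
    if not hmac.compare_digest(tag, expected):
        return None  # tampered or forged
    payload = json.loads(base64.urlsafe_b64decode(body))
    if payload["exp"] < time.time():
        return None  # expired tokens convey nothing
    return {"cat": payload["cat"], "consent": payload["consent"]}
```

Because each token carries a fresh nonce and expires in minutes, a developer who receives one learns the age category and consent status but gains no stable identifier to correlate across apps or sessions.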
On paper, the recommendations in this piece require that vendors produce highly accurate, unlinkable age signals and not retain any age assurance data, but this does not actually guarantee compliance. Even if centralizing age assurance obligations to a few app stores makes monitoring for compliance easier, enforcement is still slow and spotty, and we can’t wait until a breach happens to remediate issues. Indeed, independent evaluations have uncovered troubling gaps between vendors’ stated policies and their actual practices: Yoti, a leading age assurance provider, was found to be tracking users and transmitting personal data to third-party ad networks without consent, while AgeGO—despite claiming to offer “double anonymity”—was caught collecting URLs of videos users watched, along with webcam feeds and IP addresses.
App stores wouldn’t be able to rely on a vendor’s certification under international standards covering privacy, accuracy, and data minimization—such as ISO 27001 or ISO 27566-1—as a preemptive indicator of compliance either. These credentials are usually point-in-time snapshots that verify documentation and policies rather than ongoing operational practice. Consequently, “compliant yet breached” is a familiar pattern: AU10TIX had been ISO 27001 certified for four consecutive years when it suffered a massive breach in June 2024. To address this gap, KISP’s playbook should include guidance on how app stores can vet vendors beyond point-in-time certifications—recommending, for example, that contracts require ongoing access for independent technical audits of actual data flows. KISP’s biennial reports should also evaluate not just the adoption of age-gating measures but their real-world effectiveness, flagging vendors whose operational practices diverge from their commitments.
Legal Scrutiny
Getting enforcement right matters, but it may become moot if the underlying framework cannot survive constitutional scrutiny. A recent federal court decision suggests that the optimism with which this article began may need to be tempered. While Free Speech Coalition v. Paxton affirmed that platform-level age verification for pornography survives constitutional scrutiny, a December 2025 federal court preliminary injunction against Texas’s App Store Accountability Act suggests that app-store-level age assurance implicates children’s speech rights differently. The court found that requiring parental consent for all app downloads was not narrowly tailored because the vast majority of apps do not contain speech unprotected for minors. Unlike pornography sites, where the restricted content is itself outside First Amendment protection for children, app stores gate access to dictionaries, news outlets, fitness trackers, and therapy apps alongside whatever harmful material they may also host.
Whether frameworks like ASAA can survive similar challenges may turn on how courts characterize the state interest. If the bill’s goal is to restrict children’s access to specific harmful content, broad parental-consent requirements are likely overinclusive. But if the goal is to give parents meaningful tools to oversee their children’s digital lives—without requiring legislatures to first catalog which content is harmful—the court’s reasoning may differ. Time will tell.
This distinction matters because many of the harms and adverse effects of online life that we worry about cannot yet be arbitrated objectively. Many parents’ worries are diffuse, cumulative, and often invisible until the damage is done—far less universally acknowledged, and far harder to establish empirically, than exposure to pornography. Jonathan Haidt’s “The Anxious Generation,” published in 2024, marshals extensive evidence that social media caused a surge in adolescent anxiety and depression, but the harms he documents emerged in the early 2010s. It took more than a decade for the research to catch up. That analysis predates short-form video platforms like TikTok, parasocial AI companions, and whatever will come next. By the time we have incontrovertible longitudinal data on those technologies, another generation of children will have been the experiment.
The pace of innovation in capturing children’s attention will always outstrip our capacity to study its effects. The strength of this proposal is that it does not presumptuously declare what content is harmful to minors. Instead, it attempts to build out an infrastructure that empowers parents to make those judgments for their own families, without sacrificing privacy, anonymity, and the promise of an open internet. This is not an evasion of the hard questions, but rather an admission that we cannot answer them fast enough—and an argument that we should not have to.
