Cybersecurity & Tech

To Read This, Please Upload Photo ID

Isabel Arroyo
Thursday, February 12, 2026, 2:00 PM

A primer on digital age assurance methods and a survey of the laws—both enacted and proposed—requiring them.

Social media apps. (Bastian Riccardi, https://www.pexels.com/photo/close-up-of-a-smartphone-screen-displaying-social-media-app-folder-15406294/; Public Domain).

Around the world, lawmakers are responding to public concern about the attention span, mental health, and online safety of children and teenagers with regulations limiting how young people interact with digital services.

Recent laws in the United Kingdom, Europe, Australia, and the United States have restricted children from making accounts on social media or required that they secure a parent’s permission first. Other laws have aimed to restrict kids’ access to digital services with “addictive features.” And some narrower laws specifically bar children from viewing online pornography. In Australia, a recent law requiring social media platforms to flat-out ban users under 16 garnered significant international attention, leading Malaysia to adopt similar legislation and lawmakers in the European Union to propose bolstering their own age-based digital restrictions still further.

In the U.S., many state laws restricting the content, services, and features that digital platforms make available to children have faced successful First Amendment challenges, particularly from the internet trade association NetChoice. But some state laws—particularly those targeting online pornography, which minors do not have a constitutional right to view—have survived, and several more state legislatures are now considering similar bills. At the federal level, proposed legislation such as the Kids Online Safety Act, the Kids Off Social Media Act, and the App Store Accountability Act would limit the sort of content and features that digital service providers are allowed to make available to teens and to younger children.

Protecting children online is extremely important. But to restrict access for young users, platforms, sites, and other digital services first have to figure out the age of all users. That requires using some form of age assurance, and the online age assurance methods available today are both unacceptably invasive and troublingly insecure. This piece provides background on the three major forms of online age assurance, explaining the risks each poses to the privacy of users of all ages, to free online expression, and to the broader architecture of the internet.

What Is Age Assurance?

Age assurance is an umbrella term for efforts to determine whether a user falls within a given age bracket. Age assurance methods generally fall into one of three buckets: age-gating, age estimation, and age verification—although legislators sometimes confusingly use “age verification” to refer to all three, or “age-gating” to refer to the general process of making digital content inaccessible to different age groups. Age assurance requirements are often built directly into bills requiring platforms to treat adult and younger users differently.

Age-gating

Also known as self-declaration, age-gating means requiring users to self-declare their age or birth date before accessing a site. YouTube, pre-Meta Facebook, many pornography websites, and other platforms use or once used this mode of age assurance.

Compared to more sophisticated age assurance methods, age-gating is a poor mechanism for blocking young users because minors can easily claim to be older than they are. Several recent laws, regulations, and standard-setting bodies have discouraged age-gating as a mode of age assurance: For example, the European Data Protection Board’s 2025 statement on complying with the General Data Protection Regulation (GDPR), Digital Services Act (DSA), and Audiovisual Media Services Directive (AVMSD) explains that an organization’s age assurance compliance will—among other criteria—be evaluated on “robustness,” and that “robustness has little meaning in the context of the self-declaration of an age-related attribute.” The U.K. Office of Communications—which regulates the broadcasting, internet, telecommunications, and postal industries—clarifies that age-gating does not qualify as one of the “highly effective” modes of age assurance required by the Online Safety Act of 2023. Australia’s latest codes of practice for the online industry affirm the same thing.

But age-gating is not out of favor everywhere. The Federal Trade Commission’s Children’s Online Privacy Protection (COPPA) Rule, which requires covered websites and online services to handle data differently for children under 13, still allows covered services to rely on self-declared age information to determine users’ age. And California’s Digital Age Assurance Act, set to take effect in 2027, leans on age-gating by requiring that providers of operating systems such as iOS, Android, and Windows ask the user or their guardian to enter the primary user’s age bracket during account setup on new devices. Based on the age entered, devices belonging to minors can then send an anonymous “signal” conveying the user’s underage status to app stores and to the applications and digital services the user engages with, sparing those services the burden of verifying user age themselves.
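
For the technically inclined, a minimal sketch in Python suggests what such a device-level signal could look like. The bracket values, field names, and function names below are illustrative assumptions, not the statute’s text or any real operating system API:

from enum import Enum

class AgeBracket(Enum):
    UNDER_13 = "under_13"
    TEEN_13_15 = "13_to_15"
    TEEN_16_17 = "16_to_17"
    ADULT = "18_plus"

def build_age_signal(bracket: AgeBracket) -> dict:
    """OS side: forward only the bracket entered at device setup.

    Hypothetical format; no birth date, name, or document is included,
    so receiving apps learn minor status without learning identity.
    """
    return {
        "age_bracket": bracket.value,
        "is_minor": bracket is not AgeBracket.ADULT,
    }

def allows_adult_features(signal: dict | None) -> bool:
    """App side: rely on the OS signal instead of collecting documents.

    Treats the user as a minor (fails closed) if no signal arrives.
    """
    return signal is not None and not signal.get("is_minor", True)

In a design like this, the receiving app never touches a birth date or identity document: It sees only an age bracket and a yes-or-no minor flag, and it treats the user as a minor if the signal never arrives.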

Of the three age assurance methods, age-gating poses the least threat to user privacy and free online speech.

Age Verification

The second form of online age assurance is age verification, which means requesting documentation to prove a user is above a certain age. That documentation can be a government-issued ID, but it can also be credit card data, other financial documents, or transaction data such as mortgage, education, and employment history. By nature, age verification documents tend to be identity-revealing.

In the United States, a spate of recent state laws has confirmed that covered digital services can comply with strict age assurance requirements through age verification. Such laws—passed in Louisiana, Wyoming, Oklahoma, Idaho, Kentucky, Nebraska, and many other states—provide that covered digital service providers can meet age assurance requirements either by directly checking users’ physical or digitized government-issued IDs or by adopting “commercially reasonable” age verification methods that rely on official IDs or on publicly or privately available transactional data such as mortgage, educational, or employment records. Commercially reasonable age verification includes outsourcing to commercial age verification services. Some state laws add that a covered platform can comply by consulting a commercially available database regularly used by government agencies and businesses for age and identity verification.

Some of these state laws have been enjoined, particularly where they seek to regulate social media or an app store. But other state laws requiring age assurance—and enshrining age verification as a compliant form of age assurance—have prevailed. Most notably, in the 2025 case Free Speech Coalition v. Paxton, the Supreme Court upheld a Texas law requiring that online pornography platforms either check users’ IDs digitally themselves or comply with a commercial system that verifies age with users’ IDs or transactional data. The Court also found that the burden this requirement placed on adults’ capacity to access adult content triggered only intermediate constitutional scrutiny, rather than the strict scrutiny typically applied in First Amendment cases. The ruling in Paxton thus weakens challenges to extant and future state laws that require age assurance—and that specifically encourage age verification—for viewing pornography.

In Australia, as of December 2025, regulations meant to block minors from viewing pornography and “high-impact violence” require search engines and internet service providers to implement “appropriate age assurance measures” for account holders before allowing them to browse the web; a second set of regulations scheduled to take effect in March will require websites, social media platforms, digital storage services, AI chatbots, and app stores to do the same. Guidance from Australia’s eSafety Commissioner indicates that compliant measures include several modes of age verification—including matching users to photo IDs, checking user credit cards, and consulting digital identity systems—as well as parental vouching and certain kinds of age estimation. Australia’s separate law banning kids from social media seems to consider age verification with a government ID an acceptable way for platforms to meet age assurance requirements but requires that users be given other ways to prove their age as well.

In the U.K., regulators have clarified that age verification via photo ID, credit card checks, digital identity wallets, and “open banking” qualifies as “highly effective” age assurance compliant with the Online Safety Act (OSA). In Europe, many countries have passed laws requiring platforms to conduct robust age assurance, often integrating or planning to integrate this age assurance with existing and planned national digital identity wallets. These laws often do not mandate a specific kind of assurance approach—though Germany’s explicit list of approved verification methods is an exception—but their robustness requirements functionally limit compliance to age verification or age estimation in practice.

At the European Union level, a combination of binding instruments like the GDPR, AVMSD, DSA, and AI Act and complementary nonbinding instruments like the Better Internet for Kids+ (BIK+) strategy calls for platforms to treat minors differently from adults, but age assurance requirements are still under construction. In 2025, the European Commission released a prototype for a planned single age-verification app that would discern a user’s age category based on their passport, national electronic ID, or European Digital Identity Wallet, a form of ID scheduled to roll out by the end of 2026. After registering the user’s age category, the app would then be able to communicate that category to platforms the user attempts to access without revealing the user’s identity.
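
The privacy-preserving piece of that design can be illustrated with a simplified sketch. The short Python example below is emphatically not the Commission’s prototype (which is expected to rely on wallet credentials and stronger cryptography, potentially including zero-knowledge techniques); it only shows the core idea that the token a platform receives attests to an age category and nothing else:

import base64
import hashlib
import hmac
import json

# Stand-in for real signing-key material; an actual deployment would use
# asymmetric signatures or zero-knowledge proofs, not a shared secret.
ISSUER_KEY = b"demo-issuer-key"

def issue_age_token(age_category: str) -> str:
    """Verification app attests ONLY to the user's age category."""
    payload = base64.urlsafe_b64encode(
        json.dumps({"age_category": age_category}).encode()
    ).decode()
    tag = hmac.new(ISSUER_KEY, payload.encode(), hashlib.sha256).hexdigest()
    return f"{payload}.{tag}"

def platform_accepts(token: str, required: str = "18_plus") -> bool:
    """Relying platform checks authenticity and category, learning no identity."""
    payload, tag = token.rsplit(".", 1)
    expected = hmac.new(ISSUER_KEY, payload.encode(), hashlib.sha256).hexdigest()
    if not hmac.compare_digest(tag, expected):
        return False  # forged or tampered token
    claim = json.loads(base64.urlsafe_b64decode(payload))
    return claim["age_category"] == required

# The app issues a token once; a platform can then accept it without ever
# seeing the passport or wallet credential behind it.
assert platform_accepts(issue_age_token("18_plus"))

Even in this toy form, the separation of duties is visible: The issuer sees the underlying ID; the platform sees only the signed age category.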

In jurisdictions where the burden of age assurance falls on them rather than operating systems or official government apps, platforms often outsource to third-party services that conduct document-based verification. TikTok, for example, now partners with ID verifier Incode; X uses Persona and Stripe (and, until recently, AU10TIX); Meta and Spotify use Yoti; Discord uses k-ID; and Reddit outsources to Persona.

Verification firms—and, when they opt to do verification in-house, platforms themselves—are thus responsible for protecting the troves of identity data users upload. These troves are highly coveted by identity thieves, putting users at risk of identity exposure and identity theft. 

This risk is not speculative. In 2025, a hack of a firm contracted by Discord for age verification may have exposed up to 70,000 users’ government-issued ID photos. The 2025 breach of Tea—a dating safety app where women share information and warnings about potentially dangerous partners—exposed roughly 13,000 ID photos and selfies that had been collected for in-house verification purposes (though primarily to verify identity and gender, not age). In 2024, irresponsible storage of administrative credentials by AU10TIX—an identity and age verification service based in Israel whose customers included TikTok, LinkedIn, PayPal, Bumble, and Uber—was found to have left users’ verification information vulnerable online for over a year. Some of that information reportedly ended up posted on Telegram.

Some U.S. state laws acknowledge these privacy and data security dangers: A provision in Florida’s recently un-enjoined social media ban for under-14s, for example, requires that platforms “protect personal identifying information” from unauthorized access; Vermont’s passed-but-not-yet-in-effect age-appropriate design code law will limit age-related data collection to only what is “strictly necessary” to check age; and Georgia’s enjoined social media law SB 351 requires platforms to delete parents’ identification data after collecting it to confirm parental consent. A Kentucky law forbids platforms and third-party verifiers from retaining identifying information collected for age assurance, while Missouri requires that verifiers employ commercially reasonable methods to secure it.

But identity data is impossible to protect by legislative fiat, even if that fiat includes requirements for deletion. At the time each was breached, Discord and Tea both assured users that ID photos were deleted after verification. Ultimately, the safety of users’ uploaded documentation depends on the quality of technical protection put in place by third-party age verifiers or digital services themselves. As the pornography conglomerate Aylo accurately put the problem in a 2025 recommendation to Canadian legislators, “[n]ot every platform or 3rd party age verification service will handle data in the correct manner and hacks, fraud, and identity theft will occur.”

The threat created by amassing identifying user data in one place is, of course, not limited to breaches and identity theft. Once collected systematically, identifying documentation could also be made available to governments and law enforcement agencies interested in linking individuals with browsing and consumption habits, social media posts, and anything else they do on an age-restricted service that requires uploaded information. That prospect raises civil liberties and free expression concerns. Requests from law enforcement can also induce verifiers to retain identity-revealing age verification data beyond the time when they would normally delete it.

Another civil liberties concern is that verification requirements can “lock out” users who lack a recognized form of ID. Per a 2024 analysis by the University of Maryland’s Center for Democracy and Civic Engagement, around 21 million voting-age U.S. citizens lack an unexpired driver’s license and 7 million lack an unexpired government-issued photo ID entirely. Though exact numbers are difficult to come by, a substantial portion of the United States’ roughly 13.7 million undocumented immigrants lack U.S. identification. Worldwide, roughly 850 million people do not have any form of ID at all. Where age assurance laws require platforms to check identity documents that a prospective user does not have—or where requirements have induced a platform to contract with a third-party verifier that accepts only a limited range of identity documents—users risk being shut out of digital participation.

Finally, generations of scholars and jurists have recognized the importance of anonymity and pseudonymity to the protection of First Amendment rights. Requiring users to submit age-verifying documents that necessarily reveal their identities damages the real or perceived anonymity of online activity, and it is exactly the sort of requirement likely to chill the anonymous, uninhibited speech that enriches discussion in a democracy.

Age Estimation

The third form of age assurance is age estimation, which means making educated guesses about a user’s age. Age estimation tools might use artificial intelligence to scan a user’s biometric information—often their facial features—in images or video selfies. Estimation tools might also analyze a user’s browsing history or chatbot use, or check a user’s email address or phone number against records of those same identifiers held by other sites across the internet. In the last case, for example, an estimation tool might find that an email address associated with a gambling site or used to access SAT scores through the College Board seven years ago belongs to an adult, while an email address created eight weeks ago and used exclusively on Cool Math Games probably belongs to a minor.
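
A toy version of that kind of signal-based inference might look like the Python sketch below; the services, weights, and thresholds are invented for this illustration, and real estimation vendors rely on far richer (and proprietary) signals:

# Invented reputation lists for illustration only.
ADULT_LEANING = {"gambling_site", "mortgage_lender", "college_board"}
MINOR_LEANING = {"cool_math_games", "homework_help"}

def looks_like_adult(email_age_days: int, services_seen: set[str]) -> bool:
    """Score an email address's footprint; positive evidence leans adult."""
    score = 0
    if email_age_days > 5 * 365:  # a long-lived address leans adult
        score += 2
    score += 2 * len(services_seen & ADULT_LEANING)
    score -= 2 * len(services_seen & MINOR_LEANING)
    return score >= 2

# The article's two examples, in miniature:
print(looks_like_adult(7 * 365, {"gambling_site", "college_board"}))  # True
print(looks_like_adult(8 * 7, {"cool_math_games"}))                   # False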

Age estimation allows platforms to comply with laws like Australia’s ban and Louisiana’s (enjoined) Act 456, which require platforms to provide alternatives to ID-based age verification. A wide range of services—especially those operating in the U.K. under the OSA—have implemented or announced plans to implement age estimation technologies as an alternative to age verification mechanisms. 

Of the estimation options available, many platforms have opted for facial scanning. Meta, TikTok, Spotify, Sony PlayStation, and adult sites xHamster and OnlyFans outsource facial age estimation to third-party age-checker Yoti, while Discord and Snapchat have users upload video selfies to k-ID. Depending on jurisdiction, Reddit outsources facial estimation—as well as documentary verification—to Persona or estimates user age using email address and platform use data; X plans to estimate new users’ ages from video selfies using its native AI, Grok, as well as through phone and email estimation.

Other services have leaned even more heavily toward content analysis. In addition to its use of Yoti, Meta has announced an initiative to review current users’ ages through AI-powered assessments of posted content. Under this approach, a comment such as “happy 15th birthday!!” could inform the decision to switch a current user to a teen account. Google is following a similar path with AI analysis of users’ search history on YouTube and other Google products. OpenAI aims to estimate ChatGPT users’ ages based on how they use the chatbot.

Many proponents frame age estimation as a safer middle ground between the more identity-revealing process of age verification and the easy-to-circumvent process of age-gating. And age estimation based on use patterns—for example, the conclusion that a user who asks an AI chatbot mostly about trigonometry and “The Great Gatsby” is probably under 18 and should be limited to minor-appropriate conversations—raises relatively few concerns by itself. 

But age estimation tools are susceptible to errors, including mis-estimating adults as minors. When that happens, adults often have to appeal their estimated category—and the process of “appealing” frequently involves age verification or facial age estimation. For example, when OpenAI’s behavior-based estimation tools erroneously indicate that a user is a minor, the user will be able to correct the mistake by uploading a selfie or official government ID to third-party verifier Persona, while adults on YouTube will need to upload an ID, credit card, or a selfie to view adult-restricted content. It is worth noting that, in the Discord third-party verification breach mentioned earlier, the IDs accessed were originally uploaded to “appeal” an initial age misclassification.

Age estimation mandates can also push users toward more dangerous corners of the internet. That is because—at least right now—a lot of people find the idea of scanning one’s face for admission to sites creepy. That feeling could have real-world effects: Pornography conglomerate Aylo regularly warns legislators that if sites like Pornhub (which Aylo owns) implement user age verification or face-based age estimation, many users will either purchase virtual private networks (VPNs)—which allow users to circumvent age assurance requirements by routing their traffic through jurisdictions where no age assurance laws apply—or simply migrate to sites that do not comply with estimation requirements. Aylo argues that such noncompliant sites are less scrupulous than Aylo about complying with other critical laws, including verification of porn actors’ age and consent to sex.

Aylo, of course, has a strong financial motivation to argue against checking user age, because (a) a sizable portion of its viewership is probably under 18, and (b) compliance with assurance requirements is expensive, as will be discussed in the final portion of this piece. But so far, it seems that laws imposing ramped-up age assurance truly can push internet users to seek out seedier, more dangerous sectors of the online world. According to a report by digital services information site Comparitech, searches within the U.K. for torrenting services, fake IDs, and access to the dark web all shot up after the OSA came into effect. And pornography sites that specifically did not comply with OSA-mandated face-scanning age assurance experienced a surge in traffic.

There is also evidence of a link between age assurance laws and ramped-up VPN usage. After the U.K. OSA took effect, VPN provider NordVPN reported a 1,000 percent spike in VPN purchases, while ProtonVPN reported an 1,800 percent spike in downloads. In the first few hours after Florida’s porn-focused age assurance law took effect—and Pornhub subsequently exited the state—VPN demand in Florida rose by 1,150 percent.

Inducing large numbers of people to suddenly seek out VPNs is a recipe for more data privacy problems: While high-quality VPNs exist—and are in fact essential to online commerce—many free VPNs are malicious scams that embed tracking features and sell significant amounts of their users’ data.

The fact that some estimation services send data collected during facial estimation for processing in jurisdictions governed by different data privacy laws could also add to the user-dissuading creepiness of facial age estimation.

Robust Age Assurance Could Reshape the Digital World

Beyond user privacy and user anonymity, requiring robust age assurance poses more structural threats to the internet. Conducting robust age assurance in-house requires skilled technical staff; outsourcing assurance costs money (by some reports, 65 cents per verification, a rate at which a platform checking 10 million users just once would pay $6.5 million); and failing to enforce age restrictions carries steep financial penalties in many jurisdictions.

These costs are hardest to bear for small and new digital services, but even large and established players facing onerous age assurance requirements can determine that exiting a jurisdiction altogether is the least costly response to age assurance legislation. It may not shock the conscience that Pornhub ceased operation in 23 states that imposed age assurance requirements, but the exit of left-leaning social media platform Bluesky from Mississippi following the state’s adoption of a strict social media age assurance law should raise more red flags.

The exit of Bluesky highlights how the power to inflict onerous age assurance requirements on types of digital platforms lawmakers disfavor—for example, because certain services are perceived as left-wing—can make the protection of children a fig leaf for attempted censorship. And even without censorious intent, legislators’ imposition of financial, technical, and legal age assurance burdens will likely favor established players over newcomers in digital services subject to age assurance, altering the competitive landscape of the digital world.

Another way age assurance laws could change the internet is by prompting overzealous legislators to try to restrict VPNs, which allow users to evade age assurance requirements. Such legislation has already been proposed: Wisconsin’s AB 105 and Michigan’s now-stalled Anticorruption of Public Morals Act contain provisions that would require pornography sites to block all known VPN users entirely. Legislators in the U.K. have proposed logging sites that VPN users visit and requiring VPNs themselves to conduct age assurance.

Because sites cannot reliably determine where a VPN user is actually located, even legislation requiring sites merely to block VPN users from a given state could wind up inducing liability-fearful sites to block all VPN users everywhere or to leave jurisdictions entirely. More broadly, impeding VPN access—a favorite project of authoritarian regimes the world over—would have devastating consequences for businesses, governments, schools, universities, human rights activists, and all other industries and entities that rely on encrypted data transmission, as well as for individuals with basic concerns about digital privacy.

*          *          *

The popularity of robust digital age assurance laws reflects a widespread, well-intentioned desire to make the digital world a safer place for kids to learn and grow. It is easy to understand some frustrated lawmakers’ and parents’ urge to compare digital age assurance to the sort of momentary, in-person age verification that might happen at a bar or casino. But laws requiring age verification or age estimation—particularly if accompanied by ill-conceived attacks on VPNs—carry far greater risks to privacy, information access, digital innovation, and contemporary democratic participation than would ever be posed by physical age verification. In promoting kids’ safety across the digital world, legislators need to act thoughtfully, unhurriedly, and with an informed sense of the democratic consequences that follow from online age assurance requirements.


Isabel Arroyo interned at Lawfare in winter 2025. She holds a B.A. in global affairs from Yale University.
