
Are Existing Consumer Protections Enough for AI?

J. Scott Babwah Brennen, Kevin Frazier, Anna Vinals Musquera
Wednesday, September 3, 2025, 1:00 PM

An initial effort to map the protection landscape

Machine Learning & Artificial Intelligence (Mike MacKenzie, https://www.flickr.com/photos/mikemacmarketing/42271822770, CC BY 2.0), https://creativecommons.org/licenses/by/2.0/

Published by The Lawfare Institute in Cooperation With Brookings

On July 23, the Trump administration released its AI Action Plan, laying out a comprehensive approach to artificial intelligence (AI) development and regulation. To support American innovation and global leadership on AI, the plan calls for rolling back regulations on AI companies. It also tasks the Federal Communications Commission with assessing whether existing state regulation interferes “with the agency’s ability to carry out its obligations and authorities under the Communications Act of 1934.” The administration has expressed concern that regulation, and especially a patchwork of state AI regulation, may impair U.S. innovation and competitiveness, an argument that also animated support for the moratorium on state AI regulation that was initially included in the “big beautiful bill.”

Supporters of rolling back enacted AI regulation have also argued that existing state and federal consumer protection laws already cover many—or at least the most worrisome—AI risks. American consumers are protected by a complex web of state and federal laws, regulations, and court precedents that vary across states, making it no easy task to sort out what is or is not covered.

Understanding whether existing laws adequately protect consumers from AI-related harms requires systematic examination of how traditional legal frameworks interact with algorithmic systems across critical sectors. This piece focuses on five domains where AI integration has advanced rapidly and where AI may have large impacts on consumers: housing, employment, financial services, insurance, and the information environment.

Across these domains, we examine existing legal protections, identify specific algorithmic risks to consumers, and assess whether current frameworks provide adequate safeguards. The authors hope this will encourage others to complete a census of whether existing laws indeed shield consumers from real and perceived AI risks. This is a time-sensitive exercise. While states continue to pass new AI regulation, Congress seems poised to once again explore the merits of a federal moratorium on such laws.

The goal of this exercise isn’t to advocate for particular policy outcomes but to inform AI policy debate with analysis about where consumers stand today. This represents an initial effort to map the protection landscape, with the expectation that ongoing technological and regulatory developments will require continuous reassessment.

Housing

The integration of AI into the housing sector presents several possible concerns for consumers. Three of the most frequently discussed are AI-powered tenant screening, algorithmic lending decisions, and the use of algorithmic pricing to set rents.

Tenant Screening

The federal Fair Housing Act (FHA) prohibits landlords from relying on any tool, including AI, to discriminate against potential tenants on the basis of several protected classes. As specified in a guidance document issued by the Department of Housing and Urban Development (HUD) in May 2024 pursuant to President Biden’s Executive Order 14110 on Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence, the FHA covers both intentional discrimination and the sort of disparate impact that may result from biased AI tools.

Moreover, previous HUD regulation clarified that the FHA should be interpreted to prevent “discriminatory effect … even if the practice was not motivated by a discriminatory intent,” a position that was further supported by the Supreme Court in Texas Department of Housing and Community Affairs v. Inclusive Communities Project.

Two recent executive orders issued by President Trump potentially address unintentional discrimination by AI. One directed the administration to rescind agency actions taken pursuant to Executive Order 14110, and another signaled the administration’s interest in removing disparate impact analysis from federal regulations. Yet even rescinding HUD guidance on disparate impact discrimination may have little effect given existing Supreme Court precedent, such as Texas Department of Housing and Community Affairs v. Inclusive Communities Project.

Some state and local laws extend the protections in the FHA. For example, California’s Fair Employment and Housing Act (FEHA) removes the FHA’s exemption for “owner-occupied buildings with no more than four units.” Similarly, the Illinois Human Rights Act adds a series of protected classes, including age, marital status, and sexual orientation, going beyond the classes protected by the FHA.

The adequacy of existing laws related to algorithmic lending decisions is discussed below in the section on financial lending.

Algorithmic or Dynamic Pricing

The potential harms of algorithmically determined dynamic pricing are quite severe given the ongoing housing crisis in many American communities. Algorithmic pricing tools, widely adopted by landlords and property management companies, analyze competitor data and adjust rental rates dynamically. While ostensibly designed to optimize pricing for each landlord, these tools can in theory (and, depending on whom you ask, in practice) drive up rents across an entire rental market in a way that would not have occurred had a large number of local landlords not followed the algorithm’s recommendations. According to plaintiffs in several suits contesting the use of these tools, broad use of algorithmic pricing has placed substantial financial strain on renters: The Biden administration estimated that coordinated rents from algorithmic pricing cost renters in algorithm-utilizing units an average of $70 more a month nationally, totaling approximately $3.8 billion in 2023.

The primary legal framework for addressing price fixing and collusion is antitrust law, specifically Section 1 of the Sherman Antitrust Act. These laws are designed to promote fair competition and prevent monopolistic behavior. The Department of Justice has scrutinized these practices, filing lawsuits against technology companies like RealPage for alleged monopolization of the market for apartment pricing software and for decreasing competition among landlords.

Ongoing applications of existing antitrust law to algorithmic pricing suggest that these laws can be leveraged to address the harm. The DOJ’s lawsuits, as well as those filed by state attorneys general and private litigants, indicate a legal basis for challenging these practices. However, the legal standards for proving collusion via algorithms are still developing. Traditional antitrust frameworks often require proving an “agreement” or “concerted action,” which becomes challenging when algorithms, rather than explicit human communication, facilitate price coordination.

In response to the perceived gaps in applying traditional antitrust law to algorithmic coordination, a surge of algorithmic pricing laws is emerging at the state and local levels. Cities such as San Francisco, Philadelphia, and Jersey City have enacted ordinances specifically prohibiting or restricting the sale or use of revenue management software that relies on non-public competitor data to set rents.

Employment

As human resources (HR) departments across the country look into new AI tools to assist with employment decisions, American workers are right to ask whether existing laws will apply if these tools contribute to an arbitrary, discriminatory, or otherwise illegal action. Here again, the field raises a slew of issues that require additional analysis to test claims about the adequacy of existing laws. Among the thornier issues is whether employees in states with comprehensive privacy laws can fully exercise specific rights, such as the right to access, correct, and delete data that HR departments collect for use with AI tools. We set that topic aside for now and instead zero in on potential algorithmic bias, given that this concern nearly tops the list of American anxieties about the use of AI in employment settings, according to a 2024 Gallup poll.

Algorithmic Bias and Discrimination

AI tools used in recruitment, hiring, promotion, and termination can perpetuate and amplify existing human biases. This occurs when AI systems are trained on historical data that reflects past discriminatory practices (e.g., male-dominated tech roles, lower credit scores among specific communities, or even the absence of certain groups in past successful hires). AI may then inadvertently favor certain demographics (e.g., white, male, specific accents, Ivy League education) even without explicit programming to do so. This problem is exacerbated by the speed and scale at which AI operates and the frequency with which employers adopt and deploy new tools.

A number of federal laws deal with exactly this sort of discrimination. For example, Title VII of the Civil Rights Act of 1964 prohibits employment discrimination based on race, color, national origin, religion, and sex. It applies to AI-driven decisions, covering both intentional discrimination and disparate impact. A recent executive order by President Trump directed the executive branch to “eliminate the use of disparate-impact liability in all contexts to the maximum degree possible” without running afoul of the law. While the order signals the administration’s interest in eventually removing disparate impact liability from federal regulation, it changes neither the law itself nor the Supreme Court precedent interpreting that law.

Additionally, the Americans with Disabilities Act (ADA) prohibits discrimination against individuals with disabilities. AI systems that screen out candidates based on disability-related characteristics can violate the ADA, and employers are responsible for ensuring reasonable accommodations when utilizing AI. 

Finally, the Age Discrimination in Employment Act protects individuals aged 40 and older from employment discrimination. There is no carve-out for age discrimination resulting from AI.

Other state and local laws, such as the Colorado Anti-Discrimination Act, may also apply when AI is used to make employment decisions, though many appear to duplicate federal protections or address very narrow consumer harms.

Insurance

Like medicine and finance, insurance is a heavily regulated industry. While there are important differences in regulation across states, state insurance laws are broadly consistent, thanks to 150 years of effort from the National Association of Insurance Commissioners (NAIC). Both the NAIC and state legislatures are now debating new regulations aimed at addressing the growing role that AI is playing in the insurance industry.

AI models are rapidly being integrated into the insurance industry. In addition to more common uses, insurers are integrating AI models in two notable ways: to personalize pricing and coverage options for consumers and to make decisions on specific claims. AI models used in these ways raise at least four broad areas of potential consumer harm: inaccuracy, algorithmic discrimination, transparency, and privacy and personalization.

Inaccuracy 

Neither federal nor state law requires insurers to test AI models for the accuracy of their decisions in either pricing or claims review, or to release the results of accuracy assessments. While the recently enacted Colorado Act Concerning Consumer Protections in Interactions with Artificial Intelligence Systems requires developers of a range of AI models to test for algorithmic discrimination, it does not require developers to test for accuracy.

However, following an NAIC model, many states require that insurers issue claim denials with an explanation, permitting some insight into how the insurer reached the adverse decision. Yet there is no requirement that insurers reveal the full scope of information or data that informed each decision. Many states also require that licensed human medical experts review all medical insurance claim denials.

Existing federal law and statutes in many states also require that insurers have some sort of appeal process for claim denials—although not for pricing determinations—and must inform customers of how to initiate the appeal process. The Affordable Care Act requires that most medical insurers have both internal (by human personnel) and external (by outside experts) appeal processes.

For all types of insurance, states generally require that humans review consumer appeals. But while licensed medical professionals generally must review appeals of health insurance utilization decisions, reviewers of other types of claims need not be experts.

Algorithmic Discrimination

Insurers have always charged different premiums depending on the different risks an individual presents. Yet state and federal laws prohibit some forms of intentional “unfair discrimination” in both pricing policies and deciding claims. A collection of federal laws prohibits some forms of unfair discrimination in different types of insurance, including homeowners, renters, and medical insurance. These laws have two aims. First, they ensure that two people with the same risk profile are treated the same; that is, insurers must justify differential treatment with objective evidence of differential risk. Second, current state law prohibits the use of certain protected characteristics to make insurance determinations. While all states have adopted a baseline prohibition on discrimination based on race, religion, and national origin, there is some variation in the other characteristics that are protected. Some states include gender identity, sex, sexual orientation, genetic information, health status, disability, marital status, age, or credit scores.

There is little reason to think that prohibitions on “unfair discrimination” should depend on the technology used, and so they likely cover AI systems. However, there are two ways in which AI systems may not be covered. First, while insurers cannot directly discriminate based on protected characteristics, most states do not prohibit proxy discrimination, in which insurers use other data that is correlated with protected characteristics. For example, while the law prohibits an insurer from discriminating based on religion, an AI system could use other data that accurately predicts religion to discriminate. Second, most laws generally prohibit only intentional discrimination on the basis of protected characteristics. Yet it is often not possible to know exactly how AI models make decisions. It is possible that existing law does not cover this type of algorithmic discrimination when the insurer does not explicitly intend to discriminate.

There have been some efforts to address these types of discrimination by prohibiting discrimination on the basis of “disparate impact.” The ACA explicitly prohibits proxy discrimination in utilization review and discrimination through disparate impact for covered health insurers.

A handful of states have also enacted laws or regulations that address proxy or unintentional discrimination. A law passed in Colorado in 2021 also prohibits proxy discrimination by all insurers. While insurance regulators in California, Connecticut, and New York have issued regulatory guidance prohibiting proxy discrimination, it remains legal in much of the country.

As noted above, President Trump’s recent executive order, “Restoring Equality of Opportunity and Meritocracy,” attempts to limit disparate-impact liability. The executive order changes neither the ACA nor state laws, though it may affect litigation strategy or future policy guidance.

Transparency

The Gramm-Leach-Bliley Act (GLBA) requires all insurers to disclose to consumers which personal information they collect and whether and how they share it. A collection of other federal laws, including the ACA, the Health Insurance Portability and Accountability Act (HIPAA), and the Genetic Information Nondiscrimination Act, requires certain insurers to provide additional information to consumers. Many states add to these federal requirements, requiring insurance companies to disclose a range of additional information to consumers. However, no state currently requires that insurers disclose to consumers when an AI system was involved in either setting policy costs or reviewing claims. Since 2021, Colorado has required companies to disclose certain information regarding algorithmic decision-making to regulators, but not to consumers. Similarly, about half the states have adopted NAIC’s Model Bulletin on the Use of AI Systems by Insurers, which requires disclosure of the use of AI to regulators, but not to consumers. In 2025, Utah enacted a law that requires regulated industries to include disclosures on generative AI content seen by consumers.

Similarly, no state currently requires that either model developers or deployers disclose information about model training data.

Privacy and Personalization

A mix of state and federal laws already hold insurers to higher privacy standards than other, less-regulated industries. Health and life insurers must also comply with HIPAA, which requires consumers to give affirmative consent before insurers share medical data with others. More broadly, the federal Gramm-Leach-Bliley Act establishes privacy protections for consumer financial data and also covers insurers.

The GLBA requires insurance companies to explain to consumers what data they collect and share, and to give consumers the option to opt out of sharing with third parties. Many states have adopted laws that more or less reflect existing GLBA requirements.

That being said, required annual notices must include only the categories of data companies collect and the categories of third parties with which they share it. Current law does not require disclosures about the specific data collected, how specific data are used in pricing or claim review, or the specific companies with which data are shared. And the GLBA, unlike HIPAA, does not require affirmative consent or opt-ins before insurers share data with third parties.

Over the past few years, more than 20 states have passed comprehensive data privacy laws. These laws cover insurers and generally impose a set of new restrictions and requirements on data controllers and processors. Specifically, these laws give consumers the right to correct, view, or delete personal data held by companies. However, they do not require disclosures regarding the use of AI models.

Financial Lending

AI is increasingly embedded in mortgage underwriting, credit card approvals, and loan decisions. This trend raises urgent concerns about bias and opacity in lending. Algorithmic bias can lead to disparate outcomes—for example, AI-driven credit scoring might inadvertently deny or overcharge certain groups due to correlations with protected characteristics. At the same time, the opacity of “black box” models means consumers have little visibility into how exactly decisions are made. Consumers often struggle to obtain clear explanations for negative decisions, which complicates their ability to contest denials and can mask discrimination. These harms are not hypothetical: Seemingly neutral data (from a consumer’s email domain to the time of day they shop) can skew lending algorithms and reinforce inequities.

Lenders’ use of AI is currently governed by long-standing anti-discrimination and consumer protection laws. The Equal Credit Opportunity Act (ECOA) and Fair Housing Act prohibit discriminatory credit decisions based on protected characteristics, whether those decisions are made by humans or by algorithms. The Fair Credit Reporting Act also requires accuracy and transparency in credit evaluations (such as the use of credit scores and the contents of adverse action notices), requirements that apply to automated credit decision tools just as they do to traditional methods. Regulators have made it clear that these laws fully apply to algorithmic decision-making, so AI is not a loophole to avoid compliance.

Federal agencies have reinforced these requirements in the AI context. For example, a 2023 Consumer Financial Protection Bureau guidance emphasized that creditors must provide specific reasons for credit denials, warning that there is “no special exemption for artificial intelligence” in meeting ECOA’s notice requirements. Likewise, the Department of Justice underscored in a 2023 court filing that FHA protections extend to algorithmic tenant screening tools, affirming that automated systems cannot be used to circumvent fair housing laws. In short, AI tools are held to the same standards under federal law as any human decision-maker when it comes to fair lending and consumer protection.

In the absence of a comprehensive federal AI law, state lawmakers and enforcers have also stepped in to fill the gaps with their own measures aimed at bias mitigation, transparency, and accountability in automated decision-making. Some states, such as California, Oregon, New Jersey, and Massachusetts, have even clarified that biased outcomes from AI models can be prosecuted under unfair or deceptive acts or practices (UDAP) statutes, effectively using general consumer protection authority to police discriminatory algorithms.

Even in states that have not issued explicit statements about AI in financial lending, general UDAP and financial consumer protection laws still likely apply to AI-driven misconduct—so long as the harm resembles a traditional consumer violation like deception, fraud, or discrimination. For example, statutes in states such as Arizona, North Carolina, and Minnesota use broad, technology-neutral language to prohibit “any deceptive or unfair method, act or practice in the conduct of trade or commerce,” which plausibly includes algorithmic lending decisions. However, these UDAP laws are typically reactive—they do not impose affirmative obligations such as algorithmic audits, dataset disclosures, or impact assessments. In other words, unless the use of AI causes consumer harm recognizable under existing statutory definitions, there may be no enforcement, and even then only after the harm occurs. That’s the key limitation: These general laws might cover AI harms retroactively, but they don’t require upfront risk mitigation. That is why new AI-specific laws have emerged—to fill the regulatory gap by mandating proactive transparency, bias prevention, and consumer disclosures not otherwise required under baseline UDAP statutes.

A growing patchwork of state laws now directly addresses AI in lending and credit services. California’s forthcoming generative AI training data transparency law (AB 2013) will soon require developers of generative AI to publicly disclose details about the datasets used to train and test their models, an attempt to shed light on the “black box” systems that could influence credit decisions. The comprehensive Colorado law also obliges AI developers to document how their systems were evaluated for performance and bias mitigation, aiming to reduce the risk of hidden discrimination. Illinois has similarly updated its consumer protection laws to keep algorithms in check. A 2024 Illinois law now bars the use of predictive analytics that assign risk based on a borrower’s race or ZIP code in credit scoring, making it an unlawful practice under the state’s UDAP statute. These are just a few examples—even beyond lending, states are targeting algorithmic bias in other domains as well. New York City, for example, now mandates independent bias audits of automated hiring tools, and Utah adopted a law requiring businesses to clearly disclose when consumers are interacting with an AI system (such as a chatbot) rather than a human. Meanwhile, Texas launched an initiative in 2024 dedicated to policing AI risks under its consumer protection law, focusing on preventing the misuse of Texans’ personal data by AI and tech firms. Across the country, state attorneys general are leveraging their broad enforcement powers (under UDAP and civil rights laws) to monitor AI-driven financial products. They are increasingly collaborating with state financial regulators to scrutinize AI credit scoring models and fintech lending practices, ensuring these technologies don’t produce unlawful bias or deception.

Information Environment 

Over the past 10 years, policymakers, scholars, and activists have devoted a great deal of effort to understanding and combating misinformation. More recently, amid advances in generative AI, many observers have expressed concern that generative AI models will “supercharge” misinformation, especially in elections. While some continue to argue that those fears are likely overstated, generative AI does alter the risk landscape.

Broadly speaking, AI doesn’t necessarily change the types of risks posed by misinformation but, rather, potentially increases the amount, quality, or persuasiveness of false content. As such, a very broad set of both direct and indirect harms has been linked to misinformation, ranging from persuasion and fraud to polarization and loss of social trust.

False or misleading speech is generally protected by the First Amendment. This constitutional constraint means that the use of AI to create, disseminate, and receive information likely also enjoys broad First Amendment protections. Only narrow exceptions—such as fraud, defamation, incitement, true threats, or other unlawful conduct—permit federal or state laws to limit or criminalize specific forms of misinformation. Therefore, any regulation targeting AI‑generated misinformation must be carefully tailored to fit within these exceptions; even a carefully drawn rule may still fail judicial scrutiny if it burdens too much protected speech. 

Given this, federal and state laws addressing misinformation are generally limited to a few narrow areas: First, fraud remains illegal, irrespective of whether it was perpetrated using AI. Notably, a small number of proposed state bills would further criminalize the use of AI to commit certain crimes, such as fraud. None of these bills has passed so far; if enacted, they would serve as an additional charge that prosecutors could apply in criminal proceedings.

Second, over the past several years, more than a dozen states have enacted laws criminalizing the production and distribution of AI-generated nonconsensual intimate imagery (NCII). Earlier this year, the federal government enacted the Take It Down Act, which also establishes a federal prohibition on such content. This bipartisan law, signed in May 2025, makes it a crime to knowingly publish intimate visual depictions or deepfakes of a person without consent, if done to cause harm. It’s the first major federal law specifically addressing AI-induced harm, and it requires online platforms to offer a takedown mechanism for such content. 

Third, as of mid-2025, 26 states have enacted laws that limit or restrict the use of AI to create false or misleading election content, aiming to curb AI-generated deepfakes in election communications. The majority of these laws require labels on certain false AI-generated political content. At least two laws outright prohibit such content; however, a federal judge blocked California’s ban on election deepfakes in 2024, finding that it “unconstitutionally stifles the free and unfettered exchange of ideas which is so vital to American democratic debate.”

Finally, while misinformation remains legal—even in elections—many states have narrow laws that prohibit falsehoods about the time, manner, or place of elections. Designed to prevent voter suppression and election interference, these statutes impose criminal penalties for knowingly spreading these types of false voting information. As of mid-2025, about a dozen states ban such election-related disinformation, and there is no clear reason they would not extend to AI-generated content. For example, a fabricated notice with an incorrect voting date could plausibly fall under their prohibitions.

Summing Up

Some of the most extreme risks of AI, such as the possibility of bad actors using AI to devise chemical, biological, radiological, and nuclear weapons, are beyond the scope of consumer protection. Yet concerns around the impact of AI on housing, employment, finance, insurance, and the information environment present some of the most immediate harms from AI and are the subject of a litany of state and federal laws and regulations.

Though federal law provides some general protections, consumer protection has long been a state-led effort. Accordingly, there are significant differences across states, especially thanks to state case law. The resulting patchwork means that sorting out AI’s place within state consumer protection laws will always be a messy affair. Consumers already enjoy a wide range of protections across sectors that—at least partially—cover AI systems. Yet a few ambiguities and gaps remain.

In the past few years, some state attorneys general have made efforts to clarify the scope of existing law, establishing the degree to which current law applies to AI. These efforts are an important first step for states in addressing AI risks. Yet, in the meantime, it is essential that we continue to map out exactly how current law does and does not apply to new AI tools and systems.


J. Scott Babwah Brennen is the director of the Center on Technology Policy at NYU.
Kevin Frazier is an AI Innovation and Law Fellow at UT Austin School of Law and Senior Editor at Lawfare.
Anna Vinals Musquera is an M.S. candidate in Global Security, Conflict, and Cybercrime at New York University and a Graduate Research Assistant at NYU’s Center on Tech Policy. She has more than six years of experience as an attorney specializing in regulatory, privacy, and cybersecurity law, and her work focuses on the intersection of AI, law, and emerging regulation.