Published by The Lawfare Institute
Last year saw significant technical advances in generative machine learning (ML). When trained on sexually explicit imagery, ML models can generate new, realistic-looking explicit content. ML models are now being used to create highly realistic child sex abuse material (CSAM). It will soon be feasible to generate images that are indistinguishable from photographs of real children. Computer-generated CSAM made with generative ML (CG-CSAM) will have major implications for the U.S. legal regime governing child sex abuse imagery. This paper reviews current law, discusses CG-CSAM’s constitutional and policy implications, and suggests some potential responses.
The First Amendment does not protect abuse material produced using real children. Thus, CG-CSAM may be criminalized if it either depicts an actual, identifiable child or was generated from training data that included actual abuse imagery. Otherwise, it is protected speech under existing Supreme Court precedent unless it is obscene. These distinctions will matter to prosecutors and courts, but less so to online platforms, which are required by law to report CSAM on their services. Platforms will report both CG-CSAM and CSAM depicting real children and let downstream authorities sort it out, adding to a reporting pipeline already overburdened by a high volume of reports.
Policymakers have an opportunity to ameliorate CG-CSAM’s effects by investing in technologies for authenticating image provenance and carefully crafting narrowly targeted new legislation. They should resist the easy temptation to propose unconstitutional responses and focus instead on measures that will meaningfully help society deal with this new variant on an old and thorny problem.