New Law Proposed to Combat AI-Generated Child Sexual Abuse Material
Protecting children online is set to become a priority under proposed legislation granting increased powers to organizations tackling the rapidly growing threat of AI-generated child sexual abuse material (CSAM). The new law aims to address the issue proactively by allowing designated bodies, including AI developers and child protection organizations such as the Internet Watch Foundation (IWF), to test AI models for their potential to create exploitative content before those models are released to the public.
The proposed changes, slated to be introduced today as an amendment to the Crime and Policing Bill, represent an important shift in strategy. Currently, efforts largely focus on removing illegal content after it appears online. This new approach, according to a senior official at the IWF, will enable authorities to “tackle the problem at the source.” The IWF currently removes hundreds of thousands of illegal images each year, a task that is becoming increasingly challenging with the advent of sophisticated AI technology.
Proactive Testing and Expanded Oversight
Under the proposed legislation, designated bodies will be able to assess the capabilities of AI models without fear of legal repercussions. This testing will not be limited to CSAM; the rules will also extend to preventing the creation of extreme pornography and non-consensual intimate images. The government has committed to assembling a panel of experts to ensure all testing is conducted “safely and securely.”
“These new laws will ensure AI systems can be made safe at the source, preventing vulnerabilities that could put children at risk,” stated Technology Secretary Liz Kendall. “By empowering trusted organizations to scrutinize their AI models, we are ensuring child safety is designed into AI systems, not bolted on as an afterthought.”
Dramatic Rise in AI-Generated Abuse
The urgency behind this legislation is underscored by alarming new data released by the IWF. Reports of AI-generated CSAM have more than doubled in the past year, and the severity of the material is escalating. The most serious category, designated “Category A” and encompassing images depicting penetrative sexual activity, bestiality, or sadism, has risen from 2,621 to 3,086 items. This now accounts for 56% of all illegal material identified, a significant increase from 41% last year.
The data also reveals a disturbing trend: girls are overwhelmingly the victims in these AI-generated images, representing 94% of all illegal AI images identified in 2025.
[Video: A recent report details the sentencing of an individual involved in creating AI-generated abuse material.]
Calls for Mandatory Safeguards
While the proposed law is being lauded as a “vital step” towards protecting children, some organizations are advocating for even stronger measures. The NSPCC is calling for mandatory testing for all AI companies, arguing that voluntary compliance is insufficient.
“It’s encouraging to see new legislation that pushes the AI industry to take greater responsibility for scrutinizing their models and preventing the creation of child sexual abuse material on their platforms,” said Rani Govender, policy manager for child safety online at the charity. “But to make a real difference for children, this cannot be optional. Government must ensure that there is a mandatory duty for AI developers to use this provision so that safeguarding against child sexual abuse is an essential part of product design.”
The debate highlights the complex challenge of balancing technological innovation with the paramount need to protect vulnerable children in an increasingly digital world. The coming months will be critical in determining whether this new legislation can effectively stem the tide of AI-generated CSAM and safeguard the future of online child safety.
