British Technology Companies and Child Protection Officials to Test AI's Capability to Generate Abuse Images
Technology companies and child safety organizations will be given the authority to test whether artificial intelligence systems can produce child exploitation images, under newly introduced UK laws.
Significant Increase in AI-Generated Harmful Content
The announcement came as a safety monitoring body published findings showing that reports of AI-generated child sexual abuse material (CSAM) have increased dramatically in the past year, from 199 in 2024 to 426 in 2025.
New Regulatory Structure
Under the amendments, the government will allow approved AI companies and child safety organizations to inspect AI models – the foundational systems behind chatbots and image-generation tools – to check that they have adequate safeguards against creating depictions of child sexual abuse.
"Fundamentally about stopping exploitation before it happens," declared Kanishka Narayan, noting: "Experts, under rigorous conditions, can now detect the risk in AI systems promptly."
Tackling Legal Obstacles
The changes address a legal obstacle: because creating and possessing CSAM is against the law, AI developers and other parties could not generate such content even as part of a testing regime. Previously, authorities could act only after AI-generated CSAM had been uploaded online.
The legislation aims to avert that problem by helping to stop the production of such material at its source.
Legislative Framework
The government is introducing the changes as amendments to criminal justice legislation, which will also prohibit possessing, creating or distributing AI systems designed to generate exploitative content.
Real-World Consequences
Recently, the minister toured the London headquarters of a children's helpline and listened to a simulated call to counsellors reporting AI-based exploitation. The scenario depicted a teenager seeking help after being blackmailed with a sexualised deepfake of themselves.
"When I hear about young people facing blackmail online, it is a cause of intense frustration in me and rightful concern amongst parents," he stated.
Concerning Statistics
A prominent online safety organization reported that instances of AI-generated abuse content – counted as web pages, each of which may contain multiple images – had more than doubled so far this year.
Cases of category A material – the most serious form of abuse – increased from 2,621 visual files to 3,086.
- Girls were overwhelmingly targeted, making up 94% of illegal AI images in 2025
- Depictions of infants and toddlers rose from five in 2024 to 92 in 2025
Industry Reaction
The chief executive of the internet monitoring foundation said the law change could "represent a vital step to guarantee AI products are safe before they are launched".
"Artificial intelligence systems have enabled so survivors can be victimised all over again with just a few clicks, giving criminals the capability to make possibly endless quantities of sophisticated, lifelike child sexual abuse material," she added. "Content which further commodifies survivors' trauma, and makes young people, particularly female children, less safe both online and offline."
Counseling Session Data
Childline also published details of counselling sessions in which AI was mentioned. AI-related risks raised in those sessions include:
- Using AI to assess body size and appearance
- AI chatbots discouraging young people from speaking to trusted adults about harm
- Being bullied online with AI-generated content
- Digital blackmail using AI-faked images
Between April and September this year, Childline delivered 367 counselling sessions in which AI, chatbots and related terms were mentioned – significantly more than in the equivalent period last year.
Half of the AI references in the 2025 sessions related to mental health and wellbeing, including the use of chatbots for support and AI therapy apps.