UK Technology Firms and Child Safety Agencies to Test AI's Capability to Generate Exploitation Content
Tech firms and child protection organizations will be granted authority to assess whether artificial intelligence tools can generate child abuse material under recently introduced British laws.
Substantial Increase in AI-Generated Harmful Material
The announcement coincided with figures from a protection monitoring body showing that reports of AI-generated child sexual abuse material have risen sharply in the past year, more than doubling from 199 in 2024 to 426 in 2025.
New Regulatory Structure
Under the changes, the authorities will allow designated AI companies and child safety organizations to examine AI models – the underlying systems behind conversational AI and image-generation tools – and ensure they have adequate safeguards to stop them from creating images of child exploitation.
"Ultimately about stopping exploitation before it happens," declared Kanishka Narayan, adding: "Specialists, under rigorous conditions, can now identify the danger in AI models promptly."
Addressing Regulatory Challenges
The amendments were introduced because it is against the law to produce and possess CSAM, meaning that AI developers and others could not generate such images as part of an evaluation process. Until now, authorities have had to wait until AI-generated CSAM appeared online before dealing with it.
The legislation aims to avert that problem by helping to stop the creation of such material at its source.
Legal Framework
The changes are being introduced by the authorities as amendments to the criminal justice legislation, which also brings in a ban on possessing, creating or distributing AI systems designed to generate exploitative content.
Practical Consequences
Recently, the official visited the London base of a children's helpline and listened to a simulated call to counsellors featuring a report of AI-based abuse. The interaction portrayed a teenager seeking help after being blackmailed with an explicit AI-generated image of themselves.
"When I hear about children experiencing extortion online, it is a cause of extreme anger in me and justified anger amongst families," he said.
Alarming Statistics
A leading online safety foundation said that cases of AI-generated abuse material – reports of web pages that can each contain multiple images – had more than doubled so far this year.
Instances of the most severe category of abuse material increased from 2,621 images or videos to 3,086.
- Female children were predominantly targeted, making up 94% of prohibited AI images in 2025
- Depictions of newborns to toddlers increased from five in 2024 to 92 in 2025
Industry Reaction
The legislative amendment could "constitute a crucial step to ensure AI tools are safe before they are launched," stated the head of the online safety foundation.
"AI tools have enabled so survivors can be targeted repeatedly with just a simple actions, giving criminals the ability to create potentially endless amounts of advanced, lifelike child sexual abuse material," she added. "Material which further commodifies survivors' trauma, and makes children, particularly female children, more vulnerable both online and offline."
Counseling Interaction Data
The children's helpline also published data on support interactions in which AI was mentioned. AI-related risks raised in the sessions include:
- Using AI to rate weight, body and appearance
- AI assistants dissuading young people from talking to trusted adults about harm
- Being bullied online with AI-generated material
- Digital extortion using AI-manipulated pictures
Between April and September this year, Childline conducted 367 counselling sessions where AI, conversational AI and associated topics were discussed, four times as many as in the same period last year.
Half of the mentions of AI in the 2025 interactions related to mental health and wellbeing, including the use of AI chatbots for support and AI therapy apps.