British Technology Firms and Child Safety Officials to Examine AI's Capability to Generate Exploitation Images
Tech firms and child safety agencies will be granted authority to assess whether artificial intelligence systems can generate child exploitation material under new UK legislation.
Significant Rise in AI-Generated Illegal Material
The announcement came as a safety watchdog revealed that reports of AI-generated CSAM have risen sharply in the past year, more than doubling from 199 in 2024 to 426 in 2025.
Updated Legal Framework
Under the amendments, the government will allow approved AI developers and child safety organizations to examine AI models – the foundational technology for chatbots and visual AI tools – and ensure they have sufficient protective measures to prevent them from producing images of child exploitation.
"Fundamentally about preventing exploitation before it occurs," declared the minister for AI and online safety, noting: "Specialists, under strict conditions, can now identify the danger in AI systems early."
Tackling Regulatory Obstacles
The changes have been introduced because it is illegal to create and possess CSAM, which has meant that AI developers and other parties could not generate such content as part of an evaluation regime. Until now, authorities have had to wait until AI-generated CSAM was uploaded online before dealing with it.
The legislation aims to prevent that problem by helping to stop the creation of such material at source.
Legal Structure
The changes are being introduced by the government as amendments to the Crime and Policing Bill, which also establishes a ban on possessing, producing or sharing AI models designed to create child sexual abuse material.
Practical Consequences
Recently, the minister toured the London headquarters of a children's helpline and listened to a simulated call in which counsellors handled a report of AI-based abuse. The call portrayed a teenager seeking help after being blackmailed with a sexualised deepfake of themselves created using AI.
"When I hear about children experiencing extortion online, it is a source of intense frustration in me and rightful anger amongst families," he said.
Alarming Statistics
A prominent internet monitoring organization reported that instances of AI-generated exploitation material – where a single instance, such as a webpage, may contain multiple files – had risen significantly so far this year.
Instances of the most severe content rose from 2,621 images or videos to 3,086.
- Girls were overwhelmingly targeted, accounting for 94% of prohibited AI images in 2025
- Portrayals of newborns to two-year-olds rose from five in 2024 to 92 in 2025
Sector Reaction
The law change could "constitute a crucial step to ensure AI tools are secure before they are released," stated the chief executive of the online safety foundation.
"AI tools have enabled so victims can be targeted all over again with just a few clicks, giving offenders the ability to make potentially endless amounts of sophisticated, photorealistic exploitative content," she continued. "Content which further commodifies survivors' trauma, and makes young people, particularly girls, more vulnerable both online and offline."
Counselling Session Details
The children's helpline also published details of counselling sessions in which AI was mentioned. AI-related harms discussed in the sessions include:
- Using AI to assess weight, body image and appearance
- AI assistants discouraging children from consulting trusted adults about abuse
- Being harassed online with AI-generated material
- Online extortion using AI-manipulated pictures
Between April and September this year, Childline delivered 367 counselling sessions in which AI, conversational AI and related terms were mentioned, significantly more than in the equivalent period last year.
Half of the mentions of AI in the 2025 sessions related to mental health and wellbeing, including the use of chatbots for support and AI therapy apps.