UK Technology Firms and Child Safety Agencies to Examine AI's Capability to Generate Exploitation Content
Technology companies and child protection agencies will receive authority to evaluate whether artificial intelligence tools can produce child abuse images under recently introduced UK legislation.
Significant Rise in AI-Generated Illegal Content
The announcement came alongside findings from a child protection monitoring body showing that cases of AI-generated child sexual abuse material have risen sharply in the past twelve months, from 199 in 2024 to 426 in 2025.
Updated Legal Framework
Under the changes, the government will allow approved AI developers and child safety groups to inspect AI models (the underlying systems behind chatbots and image generators) and verify they have adequate safeguards to prevent them from creating images of child exploitation.
The measure is "ultimately about preventing abuse before it occurs," stated Kanishka Narayan, adding: "Specialists, under strict protocols, can now identify the danger in AI models promptly."
Addressing Regulatory Obstacles
The changes have been introduced because it is illegal to produce or possess child sexual abuse material (CSAM), meaning that AI developers and others could not create such content as part of an evaluation regime. Until now, officials had to wait until AI-generated CSAM was published online before acting on it.
The law aims to avert that problem by enabling experts to halt the production of such images at their source.
Legal Framework
The government is introducing the changes as amendments to the Crime and Policing Bill, which also bans owning, creating or distributing AI models designed to generate child sexual abuse material.
Practical Consequences
Recently, the minister toured the London headquarters of Childline and listened to a mock-up call to advisers involving a report of AI-based exploitation. The call depicted an adolescent seeking help after being blackmailed with a sexualised deepfake of themselves, created using AI.
"When I learn about young people facing blackmail online, it causes extreme frustration in me and rightful concern amongst families," he stated.
Alarming Data
A leading online safety organization stated that instances of AI-generated abuse material (such as webpages that may include multiple images) had increased significantly so far this year.
Instances of the most severe content, representing the most serious form of exploitation, rose from 2,621 images or videos to 3,086.
- Female children were predominantly victimized, accounting for 94% of prohibited AI images in 2025
- Portrayals of infants to two-year-olds rose from five in 2024 to 92 in 2025
Industry Reaction
The legislative amendment could "represent a vital step to ensure AI products are safe before they are launched," stated the chief executive of the online safety foundation.
"Artificial intelligence systems have made it so survivors can be victimised repeatedly with just a few clicks, providing offenders the ability to create possibly limitless amounts of sophisticated, photorealistic child sexual abuse material," she added. "Content which additionally exploits victims' trauma, and renders young people, particularly girls, less safe online and offline."
Support Session Details
Childline also published details of support sessions in which AI was mentioned. AI-related risks discussed in the conversations include:
- Using AI to rate weight, physique and appearance
- AI assistants dissuading young people from consulting trusted adults about abuse
- Facing harassment online with AI-generated content
- Online blackmail using AI-faked pictures
Between April and September this year, Childline delivered 367 support sessions in which AI, chatbots and associated topics were discussed, four times as many as in the same period last year.
Half of the mentions of AI in the 2025 sessions related to mental health and wellbeing, including the use of AI chatbots for support and AI therapy apps.