UK Technology Firms and Child Protection Officials to Examine AI's Capability to Generate Abuse Content

Technology companies and child protection agencies will receive permission to assess whether artificial intelligence systems can generate child exploitation material under new UK laws.

Significant Increase in AI-Generated Harmful Material

The announcement coincided with data from a safety watchdog showing that reports of AI-generated child sexual abuse material (CSAM) have increased dramatically in the past year, growing from 199 in 2024 to 426 in 2025.

Updated Regulatory Framework

Under the changes, the government will allow designated AI developers and child safety groups to inspect AI models – the foundational systems for chatbots and visual AI tools – and ensure they have sufficient safeguards to prevent them from producing images of child sexual abuse.

The changes are "fundamentally about preventing exploitation before it occurs," said the minister for AI and online safety, adding: "Specialists, under rigorous conditions, can now identify the risk in AI systems promptly."

Addressing Legal Challenges

The changes have been introduced because it is illegal to create or possess CSAM, meaning that AI developers and other parties cannot generate such content even as part of a testing regime. Until now, authorities had to wait until AI-generated CSAM was uploaded online before taking action.

The new law aims to avert that problem by stopping the production of such images at the source.

Legal Structure

The amendments are being added by the government as modifications to the crime and policing bill, which is also implementing a ban on owning, creating or sharing AI models developed to generate child sexual abuse material.

Practical Consequences

This week, the official visited the London headquarters of Childline and listened to a mock-up of a call to counsellors featuring a report of AI-based abuse. The call depicted a teenager seeking help after being blackmailed with a sexualised AI-generated image of themselves.

"When I hear about young people facing blackmail online, it causes intense frustration in me and justified concern amongst parents," he said.

Alarming Statistics

A prominent internet monitoring foundation said that reports of AI-generated exploitation material – each report can refer to a web page containing multiple files – had risen significantly so far this year.

Reports of the most severe category of content – the most serious form of exploitation – rose from 2,621 files to 3,086.

  • Girls were overwhelmingly targeted, accounting for 94% of illegal AI depictions in 2025
  • Portrayals of newborns to two-year-olds rose from five in 2024 to 92 in 2025

Industry Reaction

The law change could "represent a vital step to guarantee AI tools are safe before they are launched," stated the chief executive of the internet monitoring organization.

"AI tools have made it so survivors can be victimised all over again with just a few clicks, providing criminals the ability to make potentially limitless quantities of sophisticated, lifelike child sexual abuse material," she added. "Content which further exploits victims' suffering, and makes young people, particularly girls, less safe both online and offline."

Counseling Interaction Data

The children's helpline also published details of counselling sessions in which AI was mentioned. AI-related risks discussed in the conversations included:

  • Employing AI to rate body size, physique and appearance
  • Chatbots dissuading children from talking to trusted adults about harm
  • Being bullied online with AI-generated content
  • Digital blackmail using AI-faked images

Between April and September this year, the helpline conducted 367 support sessions in which AI, chatbots and related topics were discussed, four times as many as in the same period last year.

Half of the mentions of AI in the 2025 sessions related to mental health and wellbeing, including using chatbots for support and AI therapy apps.

Kristen Burton