Technology companies and child protection agencies will be authorised under new UK laws to test whether artificial intelligence systems can generate child sexual abuse material.
The announcement coincided with revelations from a safety watchdog showing that cases of AI-generated CSAM have increased dramatically in the past year, growing from 199 in 2024 to 426 in 2025.
Under the changes, the government will allow designated AI developers and child safety groups to inspect AI models – the foundational systems behind chatbots and image-generation tools – and to ensure they have sufficient safeguards to prevent them from producing images of child sexual abuse.
The minister for AI and online safety said the measure was "fundamentally about preventing exploitation before it occurs", adding: "Specialists, under rigorous conditions, can now identify the risk in AI systems promptly."
The changes have been introduced because it is illegal to create and possess CSAM, meaning that AI creators and other parties cannot create such content as part of a testing regime. Until now, authorities had to wait until AI-generated CSAM was uploaded online before dealing with it.
The change aims to close that gap by stopping the production of such images at the source.
The amendments are being added by the government as modifications to the crime and policing bill, which is also implementing a ban on owning, creating or sharing AI models developed to generate child sexual abuse material.
This week, the minister visited the London headquarters of Childline and listened to a mock-up call to counsellors featuring a report of AI-based abuse. The call depicted a teenager seeking help after being blackmailed with a sexualised AI-generated image of themselves.
"When I hear about young people facing blackmail online, it is a cause of intense frustration in me and justified concern amongst parents," he said.
A prominent internet monitoring foundation said that instances of AI-generated exploitation material – recorded as online pages, each of which may contain multiple files – had risen sharply so far this year.
Instances of the most severe category of exploitation material rose from 2,621 visual files to 3,086.
The law change could "represent a vital step to guarantee AI tools are safe before they are launched," stated the chief executive of the internet monitoring organization.
"AI tools have made it so survivors can be victimised all over again with just a few clicks, providing criminals the ability to make potentially limitless quantities of sophisticated, lifelike child sexual abuse material," she added. "Content which further exploits victims' suffering, and makes young people, particularly girls, less safe both online and offline."
The children's helpline also published details of counselling sessions in which AI was mentioned.
Between April and September this year, the helpline delivered 367 counselling sessions in which AI, chatbots and related topics were discussed – four times as many as in the same period last year.
Half of the mentions of AI in the 2025 sessions related to mental health and wellbeing, including the use of chatbots for support and AI therapy apps.