UK Technology Firms and Child Safety Agencies to Examine AI's Ability to Create Exploitation Content

Tech firms and child protection agencies will be allowed to test whether AI systems can generate child sexual abuse material under new UK laws.

Significant Increase in AI-Generated Harmful Content

The announcement came as figures from a child protection watchdog showed that reports of AI-generated CSAM have risen sharply in the past year, from 199 in 2024 to 426 in 2025.

Updated Regulatory Framework

Under the amendments, the government will allow designated AI companies and child safety groups to inspect AI models – the foundational systems behind chatbots and image-generation tools – and verify that they have sufficient safeguards to prevent them from producing images of child exploitation.

"This is fundamentally about stopping abuse before it happens," stated the minister for AI and online safety, adding: "Specialists, under rigorous protocols, can now detect risk in AI systems promptly."

Tackling Legal Challenges

The amendments have been introduced because it is illegal to produce and possess CSAM, meaning that AI developers and others cannot generate such content even as part of an evaluation process. Previously, officials had to wait until AI-generated CSAM was published online before addressing it.

This law is designed to prevent that problem by enabling experts to halt the creation of such material at source.

Legal Structure

The changes are being introduced by the government as amendments to criminal justice legislation, which also implements a ban on possessing, creating or distributing AI systems designed to generate exploitative content.

Real-World Impact

This week, the official toured the London headquarters of Childline and listened to a simulated call to counsellors featuring an account of AI-based abuse. The interaction portrayed an adolescent seeking help after being blackmailed with an explicit AI-generated image of themselves.

"When I learn about young people facing blackmail online, it causes intense frustration in me and rightful anger among families," he said.

Concerning Data

A prominent online safety organization reported that cases of AI-generated exploitation material – such as webpages that may contain multiple images – had more than doubled so far this year.

Instances of category A material – the most serious form of exploitation – rose from 2,621 visual files to 3,086.

  • Female children were overwhelmingly targeted, making up 94% of prohibited AI depictions in 2025
  • Portrayals of newborns to two-year-olds rose from five in 2024 to 92 in 2025

Industry Response

The law change could "constitute a crucial step to ensure AI tools are safe before they are released," stated the head of the online safety foundation.

"Artificial intelligence systems have made it so survivors can be targeted all over again with just a few clicks, providing offenders the capability to create possibly limitless amounts of sophisticated, photorealistic child sexual abuse material," she added. "Content which further exploits survivors' trauma, and renders young people, especially girls, less safe on and off line."

Counseling Session Information

Childline also published details of counselling sessions in which AI was mentioned. AI-related risks raised in the sessions include:

  • Employing AI to evaluate body size, physique and appearance
  • Chatbots dissuading children from consulting trusted guardians about harm
  • Facing harassment online with AI-generated content
  • Digital extortion using AI-faked images

Between April and September this year, the helpline conducted 367 counselling sessions in which AI, chatbots and related topics were discussed, four times as many as in the same period last year.

Half of the mentions of AI in the 2025 sessions related to mental health and wellbeing, including the use of chatbots for support and AI therapy apps.

Martha Wright
