British Tech Companies and Child Protection Officials to Examine AI's Ability to Generate Abuse Content

Tech firms and child protection agencies will be granted authority to evaluate whether AI systems can produce child exploitation material under new UK legislation.

Substantial Increase in AI-Generated Harmful Material

The announcement coincided with findings from a protection monitoring body showing that reports of AI-generated child sexual abuse material have increased dramatically in the last twelve months, growing from 199 in 2024 to 426 in 2025.

Updated Regulatory Framework

Under the changes, the authorities will allow approved AI developers and child safety groups to inspect AI models – the underlying technology behind conversational AI and image generators – to ensure they have adequate safeguards preventing them from creating depictions of child exploitation.

"Ultimately, this is about preventing exploitation before it occurs," stated Kanishka Narayan, adding: "Experts, under rigorous protocols, can now identify the risk in AI systems promptly."

Addressing Regulatory Obstacles

The changes have been introduced because it is illegal to produce and possess CSAM, meaning that AI developers and other parties cannot create such images as part of an evaluation regime. Until now, officials had to wait until AI-generated CSAM was published online before addressing it.

This legislation is aimed at preventing that problem by enabling authorised testers to halt the production of those images at their origin.

Legal Framework

The changes are being introduced by the authorities as revisions to the Crime and Policing Bill, which is also establishing a ban on owning, producing or sharing AI models designed to generate child sexual abuse material.

Practical Consequences

This week, the official toured the London headquarters of Childline and listened to a simulated call to counsellors involving an account of AI-based abuse. The call portrayed a teenager requesting help after being blackmailed using a sexualised deepfake of himself, created with AI.

"When I learn about young people facing blackmail online, it causes intense frustration in me and rightful anger amongst parents," he stated.

Alarming Statistics

A prominent internet monitoring organization stated that cases of AI-generated exploitation content – such as online pages that may contain numerous images – had significantly increased so far this year.

Instances of the most severe category of material – the most serious form of abuse – increased from 2,621 images or videos to 3,086.

  • Girls were overwhelmingly targeted, making up 94% of prohibited AI depictions in 2025
  • Portrayals of infants to two-year-olds increased from five in 2024 to 92 in 2025

Sector Response

The law change could "constitute a crucial step to guarantee AI products are secure before they are released," stated the chief executive of the internet monitoring foundation.

"AI tools have made it so victims can be targeted all over again with just a few simple actions, giving criminals the ability to make potentially limitless quantities of sophisticated, photorealistic exploitative content," she continued. "Content which additionally exploits survivors' suffering, and renders young people, particularly girls, more vulnerable both online and offline."

Counseling Session Information

The children's helpline also published details of support sessions where AI had been mentioned. AI-related risks discussed in the conversations included:

  • Using AI to evaluate body size and appearance
  • Chatbots dissuading children from talking to safe adults about harm
  • Facing harassment online with AI-generated content
  • Online blackmail using AI-manipulated pictures

Between April and September this year, Childline delivered 367 counselling interactions where AI, conversational AI and related topics were mentioned, four times as many as in the equivalent timeframe last year.

Half of the references to AI in the 2025 interactions related to mental health and wellbeing, including using chatbots for support and AI therapy apps.

Derek Mccann