British Tech Companies and Child Safety Agencies to Test AI's Capability to Create Abuse Images

Tech firms and child safety agencies will be granted permission to evaluate whether artificial intelligence systems can generate child abuse images under new UK laws.

Substantial Increase in AI-Generated Illegal Material

The announcement coincided with revelations from a safety watchdog showing that cases of AI-generated child sexual abuse material have more than doubled in the past twelve months, rising from 199 in 2024 to 426 in 2025.

New Regulatory Framework

Under the amendments, authorities will permit designated AI developers and child safety organizations to inspect AI models – the foundational technology behind chatbots and visual AI tools – and verify that they have adequate safeguards to prevent them from producing depictions of child sexual abuse.

"Fundamentally, this is about stopping exploitation before it happens," declared Kanishka Narayan, adding: "Experts, under strict protocols, can now identify the risk in AI models promptly."

Addressing Legal Obstacles

The changes have been introduced because it is illegal to produce and possess CSAM, meaning that AI developers and other parties cannot generate such content as part of a testing process. Previously, officials had to wait until AI-generated CSAM was published online before they could act. The legislation aims to avert that problem by helping to stop the production of such images at their source.

Legal Structure

The government is introducing the amendments as modifications to criminal justice legislation, which also implements a ban on possessing, creating or sharing AI systems developed to create exploitative content.

Real-World Consequences

Recently, the minister toured the London base of a children's helpline and heard a mock-up call to advisers involving a report of AI-based exploitation.
The call portrayed a teenager seeking help after being blackmailed with an explicit AI-generated image of himself. "When I learn about young people facing blackmail online, it is a source of intense anger in me and rightful concern among families," he said.

Concerning Statistics

A leading online safety organization stated that cases of AI-generated exploitation material – such as online pages that may include numerous images – had more than doubled so far this year. Cases of the most severe content – the gravest category of exploitation – increased from 2,621 images or videos to 3,086.

- Girls were overwhelmingly targeted, making up 94% of illegal AI images in 2025
- Depictions of newborns to two-year-olds increased from five in 2024 to 92 in 2025

Industry Reaction

The legislative amendment could "represent a vital step to guarantee AI tools are secure before they are released," commented the chief executive of the internet monitoring organization. "AI tools have made it so survivors can be victimised repeatedly with just a few clicks, giving offenders the ability to create possibly endless quantities of sophisticated, photorealistic child sexual abuse material," she continued. "Material which further commodifies victims' suffering, and renders children, especially girls, more vulnerable both online and offline."

Support Session Data

Childline also released data on support sessions in which AI was mentioned. The AI-related harms discussed in these conversations include:

- Using AI to rate weight, physique and appearance
- Chatbots dissuading children from talking to trusted adults about harm
- Being bullied online with AI-generated material
- Digital blackmail using AI-manipulated pictures

Between April and September this year, Childline delivered 367 support sessions in which AI, conversational AI and related terms were mentioned, four times as many as in the same period last year.
Half of the references to AI in the 2025 sessions related to mental health and wellbeing, including using AI chatbots for support and AI therapy apps.