The internet is facing an unprecedented crisis. A sharp spike in online child sexual abuse imagery has been recorded, with experts raising alarm over the disturbing rise in AI-generated content, sextortion, and the deliberate spread of intimate images.
According to a damning report by the Internet Watch Foundation (IWF), nearly 300,000 cases of child sexual abuse material (CSAM) were flagged in 2024 alone.
The IWF, a UK-based charity dedicated to removing online abuse content, described the findings as a “watershed moment” in the fight to keep children safe online.
Sharp Rise in AI-Generated Abuse and Sextortion
The report paints a bleak picture. Children, particularly girls, are increasingly being targeted in a new wave of digital exploitation. The material is not just being shared; it is being created, using cutting-edge AI tools that can generate convincing synthetic abuse imagery.
“Young people are facing rising threats online where they risk sexual exploitation, and where images and videos of that exploitation can spread like wildfire,” said Derek Ray-Hill, interim chief executive of the IWF.
“New threats like AI and sexually coerced extortion are only making things more dangerous.” The IWF noted that in 97% of reports where the child’s sex was recorded, the victims were girls, a shocking increase on previous years.
To counter the crisis, the IWF has launched Image Intercept, a new tool designed to detect and block known CSAM from being uploaded to the web. Built with Home Office funding, the tool checks uploaded images against the IWF’s existing database of criminal content, which holds more than 2.8 million digital hashes of known abuse material (a sketch of the general hash-matching approach appears below).
This tool is now being offered free of charge to smaller platforms, many of which struggle to meet the standards required by the Online Safety Act—legislation that came into partial effect last month.
“Many well-intentioned and responsible platforms do not have the resources to protect their sites against people who deliberately upload child sexual abuse material,” Ray-Hill added. “That is why we have taken the initiative… to help these operators create safer online spaces.”
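The IWF has not published Image Intercept’s internals, and systems of this kind typically rely on perceptual hashes (such as PhotoDNA) that survive resizing and re-encoding, rather than the cryptographic hash used here. Still, the core pattern, matching each upload’s digest against a blocklist of known material, is simple to sketch. Everything below (`KNOWN_HASHES`, `screen_upload`) is illustrative and not part of any IWF or platform API.

```python
import hashlib

# Illustrative blocklist. In a real deployment this would be the
# provider-supplied database of hashes of known abuse imagery,
# not values hard-coded in source.
KNOWN_HASHES = {
    "9f86d081884c7d659a2feaa0c55ad015a3bf4f1b2b0b822cd15d6c15b0f00a08",
}

def screen_upload(file_bytes: bytes) -> bool:
    """Return True if the upload may proceed, False if its digest
    matches an entry in the known-material blocklist."""
    digest = hashlib.sha256(file_bytes).hexdigest()
    return digest not in KNOWN_HASHES

if __name__ == "__main__":
    # Example: screen a file before accepting it onto the platform.
    upload = b"example image bytes"
    if screen_upload(upload):
        print("upload accepted")
    else:
        print("upload blocked: matches known material")
```

One consequence of this design is that a platform never needs to hold or view the abusive material itself: it stores only the digests, and matching is a cheap set lookup at upload time.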
A Government-Backed Push for Online Safety
Technology Secretary Peter Kyle described his recent visit to the IWF as “one of the most shocking and moving days” in his role. He praised the IWF’s commitment, calling the new tool “a powerful example of how innovation can be part of the solution.”
Jess Phillips, Minister for Safeguarding and Violence Against Women and Girls, echoed the urgency, describing the IWF’s findings as “deeply disturbing.”
“Their Image Intercept initiative, funded by the Home Office, will be vital in helping to stop the re-victimisation of children who have been exploited online,” she said. “But we must also hold technology platforms accountable.”
The Online Safety Act demands stricter compliance from platforms, with the regulator Ofcom setting clear standards. It is no longer enough to react: platforms are now legally required to prevent harm before it happens.
The Government has also promised harsher penalties. New legal measures are in place to target anyone possessing AI tools designed to create illegal abuse material or manuals instructing others on how to do so.
“We will not hesitate to go further if necessary to keep our children safe,” Phillips stressed. The explosive growth of child sexual abuse material online signals an evolving and deeply concerning threat.
AI-driven abuse and sextortion are dragging the digital world into new, uncharted territory. But with tools like Image Intercept, backed by government support and new legislation, the UK is pushing back.
Online safety can no longer be a secondary concern. As the IWF puts it, “Together we can present a stone wall to those looking to spread child sexual abuse imagery.”