Online child sex abuse material, boosted by AI, is outpacing Big Tech's regulation

Watchdogs say it's a "stark vision of the future."
By Chase DiBenedetto
New reports document large numbers of real and AI child sex abuse materials shared online. Credit: dem10 / iStock / Getty Images Plus via Getty Images

Generative AI is exacerbating the problem of online child sexual abuse materials (CSAM), as watchdogs report a proliferation of deepfake content featuring real victims' imagery.

Published by the UK's Internet Watch Foundation (IWF), the report documents a significant increase in digitally altered or completely synthetic images featuring children in explicit scenarios, with one forum sharing 3,512 images and videos over a 30-day period. The majority were of young girls. Offenders were also documented sharing advice with one another, and even exchanging AI models trained on images of real victims.

"Without proper controls, generative AI tools provide a playground for online predators to realize their most perverse and sickening fantasies," wrote IWF CEO Susie Hargreaves OBE. "Even now, the IWF is starting to see more of this type of material being shared and sold on commercial child sexual abuse websites on the internet."

According to the snapshot study, there has been a 17 percent increase in online AI-altered CSAM since the fall of 2023, as well as a startling increase in materials showing extreme and explicit sex acts. Materials include adult pornography altered to show a child's face, as well as existing child sexual abuse content digitally edited with another child's likeness on top.

"The report also underscores how fast the technology is improving in its ability to generate fully synthetic AI videos of CSAM," the IWF writes. "While these types of videos are not yet sophisticated enough to pass for real videos of child sexual abuse, analysts say this is the ‘worst’ that fully synthetic video will ever be. Advances in AI will soon render more lifelike videos in the same way that still images have become photo-realistic."

In a review of 12,000 new AI-generated images posted to a dark web forum over a one-month period, 90 percent were realistic enough to be assessed under existing laws for real CSAM, according to IWF analysts.


Another UK watchdog report, published in the Guardian today, alleges that Apple is vastly underreporting the amount of child sexual abuse material shared via its products, prompting concern over how the company will manage content made with generative AI. In its investigation, the National Society for the Prevention of Cruelty to Children (NSPCC) compared official numbers published by Apple to numbers gathered through freedom of information requests.

While Apple made 267 worldwide reports of CSAM to the National Center for Missing and Exploited Children (NCMEC) in 2023, the NSPCC alleges that the company was implicated in 337 offenses involving child abuse images in England and Wales alone, and those figures cover only the period between April 2022 and March 2023.

Apple declined the Guardian's request for comment, pointing the publication to a previous company decision not to scan iCloud photo libraries for CSAM, in an effort to prioritize user security and privacy. Mashable has also reached out to Apple and will update this article if the company responds.

Under U.S. law, U.S.-based tech companies are required to report cases of CSAM to the NCMEC. Google reported more than 1.47 million cases to the NCMEC in 2023. Facebook, in another example, removed 14.4 million pieces of content for child sexual exploitation between January and March of this year. Over the last five years, the company has also reported a significant decline in the number of posts reported for child nudity and abuse, but watchdogs remain wary.

Online child exploitation is notoriously hard to fight, with child predators frequently exploiting social media platforms, and loopholes in their conduct policies, to continue engaging with minors online. Now, with the added power of generative AI in the hands of bad actors, the battle is only intensifying.


If you have had intimate images shared without your consent, call the Cyber Civil Rights Initiative’s 24/7 hotline at 844-878-2274 for free, confidential support. The CCRI website also includes helpful information as well as a list of international resources.

Chase DiBenedetto
Social Good Reporter

Chase joined Mashable's Social Good team in 2020, covering online stories about digital activism, climate justice, accessibility, and media representation. Her work also touches on how these conversations manifest in politics, popular culture, and fandom. Sometimes she's very funny.

