
Fake image, real suffering: the rise of AI-generated child abuse

Photo by Ron Lach: https://www.pexels.com/photo/mother-protecting-eyes-of-children-against-digital-content-9786320/


Just because the sexually explicit images of kids flooding the internet are fake doesn’t make it OK. Jurgita Lapienytė reports

AI-generated child sexual abuse material, or CSAM, has been flooding the internet, and law enforcement is increasingly frustrated that such material is nearly indistinguishable from real abuse.

But my question is this: just because it’s not a real video of a child suffering, should we dismiss it and avoid fighting those who distribute AI‑generated or deepfaked CSAM? Whether they’re pranksters or criminal scum, we need to send a loud message that this is neither acceptable nor legal.

The technology has offered abusers a cheaper way to profit from minors. And no matter how unpleasant this topic might be, or how overwhelmed law enforcement might feel, we need to talk about it and take action against AI paedophiles.

AI-Generated CSAM Epidemic

A couple of types of such material are circulating online. A video can be generated from start to finish, meaning all the abuser needs is a capable AI tool and a detailed prompt. The other prevalent type of such content is deepfake CSAM.

Unlike purely AI‑generated files, these contain real elements — for example, a real child’s face might be swapped onto a body in an existing pornographic video. While the end product isn’t real, it has a real effect on real people.

A report published by the Internet Watch Foundation on July 11th revealed a whopping 400% rise in AI-generated child sexual abuse imagery in the first six months of 2025.

In total, 1,286 AI videos were found, and 1,006 of them were assessed as the most extreme, meaning they depicted rape, sexual torture and bestiality.

In the US, the National Center for Missing & Exploited Children said it received 485,000 reports of AI-related CSAM. And that’s only what’s been reported, meaning it’s just the tip of the iceberg.

Just because this material isn’t real, should we care any less? I would suggest ditching the fake‑or‑real separation altogether. Even when those nasty videos are generated, they deeply harm children, and they promote paedophilia.

Easier to Look Away Than Fight

The technology is evolving at a breakneck speed, making it harder and harder to distinguish between what’s real and what’s fake. According to the IWF, “Full feature-length AI films of child sexual abuse will be ‘inevitable’ as synthetic videos make ‘huge leaps’ in sophistication in a year.”

The more such videos surface on the internet, the more work law enforcement has. And given the sophistication of such content, it takes longer to spot a (deep)fake. For the humans reviewing it, it probably also doesn’t matter whether it’s real or fake: having to examine such material over and over again, looking for clues, takes a toll on a person.

Law enforcement is not alone in this fight. There are many volunteers and hacktivist organizations, like Anonymous, that engage in so-called pedo-hunting. But the fight is exhausting: these people risk being investigated themselves, and many also report the effect on their mental health.

The Role of Tech Companies

Seemingly harmless and fun technology is causing pain to millions of victims and their parents. What can be done to make it harder for criminals to use these tools? Or will we keep telling ourselves that criminals abuse legitimate tools and platforms, and that the companies behind them aren’t to blame?

AI-powered “nudifying tools” are openly marketed on social media and elsewhere. Just as with pornography, such tools could be a fun and meaningful way for consenting adults to interact. However, they are fuelling blackmail, with abusers threatening to release explicit content online if their demands, financial or otherwise, are not met.

There have been calls to ban AI-powered nudifying apps. That might help to at least slow down the epidemic, but it is not the answer. If simply banning every app that abusers exploit to groom children were the answer, then we could start with Roblox, which is now being sued for allegedly turning a blind eye to child exploitation on its platform.

We as a society can put more pressure on such companies to react, and on governments to wake up and help protect their citizens online. To a large extent, it is working.

Roblox is now using AI to detect predatory language, and countries like the UK are banning AI-generated CSAM (apparently, many people think it’s legal) and introducing age verification laws.

Closing Word

In nature, when a baby is born, its mother becomes an invincible predator — ready to walk on water, move mountains, and hunt down anyone she perceives as a threat.

That’s how we need to fight to keep our kids safe online.

There’s a disturbing argument out there: if paedophiles tap into AI-generated material, they won’t need to snatch real kids off the streets to make such videos. But just as adult pornography can distort minds and fuel dangerous behaviour, so do AI‑generated sexualized images of children.

Instead of trying to find an excuse to turn a blind eye to AI paedophiles, we need to fight harder.

ABOUT THE AUTHOR

Jurgita Lapienytė is the Editor-in-Chief at Cybernews, where she leads a team of journalists and security experts.

