
“Will I Be Believed?” How deepfakes risk eroding kids’ confidence in the people around them

September 26, 2024


Imagine a 14-year-old boy sitting in his bedroom, staring at his computer screen, his heart pounding. He’s just received a message from someone he thought was a friend, who is now threatening to share a sexual photo of him unless he pays. Panicked, he exits the chat. But the alerts keep pinging on his phone. As he tries to process what’s happening, the demands come faster. They tell him his life will be ruined if the image is shared.

He’s so confused. He thinks to himself: “I never took that photo.” 

The image, it turns out, is a deepfake — a highly realistic but fake image created using generative AI. It looks so real he fears his parents will never believe him. So, he decides to pay to make the threats go away. Yet, doing so only sparks larger demands. Trapped and scared, he doesn’t know where to turn for help. 

Deepfakes are being used in financial sextortion

The boy’s experience is part of a surge in online financial sextortion — a form of blackmail where victims are coerced into paying money to prevent the release of intimate images or videos. Targeting primarily boys aged 14 to 17, it’s a crime that leaves victims feeling isolated and helpless. 

Bad actors disguised as flirtatious girls often coerce teen boys into sending nude images of themselves. Yet, with the emergence of generative AI, they no longer need to take that step. Instead, in a matter of minutes, they can manufacture an explicit image that appears to be of the victim.

Thorn’s research, in collaboration with the National Center for Missing and Exploited Children (NCMEC), found that about 10% of financial sextortion reports last year involved images that were not authentic.

Many minors don’t disclose their experience

Deepfakes compound an existing problem: many children already don’t disclose these types of experiences. Thorn’s research with youth found that 1 in 6 minors who experience an online sexual interaction never disclose it to anyone.

Boys, in particular, may be less likely to disclose being victims of sexual crimes, often due to societal expectations and gender norms that discourage them from speaking out. 

When deepfakes are involved, the fear of not being believed can intensify, creating an even bigger barrier to seeking help.

How do we mitigate this risk? 

It’s not the sole responsibility of young people to protect themselves from these threats — or to make the case that they’ve been harmed. Mitigating financial sextortion risks and encouraging youth to seek help if they’re victimized requires a multilayered approach that combines awareness, support resources, and technology:

Raise awareness

Both children and their caregivers must be made aware of these risks and of the full range of tactics bad actors use to sextort youth. Our resource guide on navigating deepfake nudes helps parents maintain an ongoing dialogue with their children about this online safety risk.

Understanding children’s real experiences, as well as trends in digital threats, is the first step for caregivers and the child safety ecosystem.

Thorn for Parents provides resources on online risks as well as conversation starters for having open, judgment-free dialogues with children. 

Diversify support resources

Open, ongoing conversations about online safety are essential, but we must also recognize that children might not always turn to parents and caregivers first. Therefore, we must reduce the barriers to disclosure. 

As a family, it’s important to create a strategy that includes other trusted adults, peer support, and familiarization with the safety tools available on the platforms young people use. Thorn’s NoFiltr youth prevention program allows youth to engage with their peers on these important topics.

Deploy technologies

While awareness and resources are key, these measures still place the burden on youth to avoid, rebuff, and endure these risks. More must be done at the technology level to mitigate these threats further upstream. Digital platforms must build safer online environments by deploying scalable technologies that proactively identify threats. This is why Thorn has developed solutions like Safer Predict, which allows teams to detect new child sexual abuse material (CSAM), as well as conversations that may indicate child sexual exploitation.

Detecting suspicious behaviors and signals can have effects that reach far beyond a single account. Take, for example, an account that’s been blocked by 100 different teen profiles in the last two weeks. By analyzing the networks associated with that account, platforms can potentially dismantle whole networks of offenders and reduce the threat to entire communities, not just individual users.
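For the technically curious, here is a minimal sketch of that kind of signal in Python. The block-event records, the minor_accounts set, and the connections map are all hypothetical stand-ins invented for illustration; real trust-and-safety systems weigh many more signals than this. The sketch only shows the core idea: count how many distinct minors have recently blocked an account, then widen review to the accounts linked to it.

```python
from collections import defaultdict
from datetime import datetime, timedelta, timezone

# Hypothetical inputs (not a real platform API):
#   block_events   - iterable of (blocked_account, blocking_account, timestamp)
#                    tuples, with timezone-aware timestamps
#   minor_accounts - set of account IDs belonging to verified minors
#   connections    - dict mapping an account to accounts linked to it
#                    (shared devices, payment details, contact lists, etc.)

def flag_suspicious_accounts(block_events, minor_accounts,
                             window_days=14, threshold=100):
    """Flag accounts blocked by many distinct minors within a time window."""
    cutoff = datetime.now(timezone.utc) - timedelta(days=window_days)
    blockers = defaultdict(set)
    for blocked, blocker, ts in block_events:
        if ts >= cutoff and blocker in minor_accounts:
            blockers[blocked].add(blocker)  # count distinct blockers only
    return {acct for acct, who in blockers.items() if len(who) >= threshold}

def expand_to_network(flagged, connections):
    """Widen review from flagged accounts to their associated networks."""
    review = set(flagged)
    for acct in flagged:
        review.update(connections.get(acct, ()))
    return review
```

The design choice the example illustrates is that one strong signal, such as being blocked by many minors, becomes a starting point for network analysis rather than a one-off enforcement action against a single account.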

As a result, law enforcement can be better equipped to pursue effective investigations and prosecutions of organized networks, rather than playing a game of whack-a-mole each time a new offender is identified.

Still, safety tools like blocking and reporting remain necessary. At Thorn, we know youth are twice as likely to disclose an unwanted sexual interaction through a safety tool as they are to tell a parent or guardian. Yet online environments should better ensure children aren’t put in these risky situations in the first place.

“Don’t share nudes” is insufficient advice 

The rise of deepfakes in financial sextortion cases underscores the urgency of this multilayered approach. Bad actors can take benign images from a victim’s social media and use AI to create fake nudes. The advice “Don’t share nudes” has always been insufficient: kids may share nudes out of curiosity or peer pressure, and a negative tone can isolate a child with feelings of shame. Deepfakes only magnify the shortcomings of this message. Children who have never taken or shared an explicit image of themselves can now be easily targeted by sextortionists using generative AI image creation.

Financial sextortion happens online every single day. Its effects have led to severe consequences among young victims, including self-harm. Understanding why a child might be reluctant to seek support, and taking action to reduce those barriers, can truly save lives. More powerful still is detecting and addressing the threat before a child is ever in the position of asking, “Will I be believed?”


