Deepfake Dangers: FBI Sounds Alarm on Exploitative Sextortion Schemes

NEW DELHI: Federal authorities have issued a warning about a recent increase in the use of sexualized deepfake images in a new wave of sextortion campaigns.

The Federal Bureau of Investigation (FBI) highlighted a rise in such attacks since April and urged individuals to be cautious about their online presence.

The FBI’s warning emphasized that victims of sextortion increasingly report the use of fake images or videos created from content posted on their social media accounts, provided to the attacker upon request, or captured during video chats.

In these schemes, targets are coerced into paying money under the threat of having deepfake images or videos shared with their family members or across their social media networks.

Sextortion, which involves blackmailing victims with the release of sexually compromising content, has been a prevalent cybercrime tactic.

However, threat actors have now taken it to the next level by utilizing deepfake technology to produce explicit images or videos that appear to feature the targeted individuals. Even though the content is fabricated, the fear of exposure can still cause significant harm or embarrassment to the victims.

According to the FBI, attackers typically begin by scraping content from victims’ social media profiles to create the deepfakes. Other victims are deceived into sharing personal images and videos, or discover that content captured during video calls has been maliciously repurposed.

The FBI’s Internet Crime Complaint Center (IC3) revealed that they continue to receive reports from victims, including minors and non-consenting adults, whose photos or videos were altered into explicit content. These manipulated media are often circulated on social media platforms or pornographic websites to harass the victims or facilitate sextortion schemes.

Even before the integration of deepfakes, sextortion was already a thriving industry for cybercriminals. In 2021 alone, the FBI reported that $8 million had been extorted from Americans within a span of seven months. Additionally, the FBI and partner agencies issued an alert earlier this year regarding a concerning increase in cases involving sextortion of children and teens, some of which tragically resulted in suicide.

Open-source software frameworks like DeepFaceLab have made deepfake creation widely accessible, largely to hobbyist communities that enforce their own ethical guidelines.

However, these same tools can be misused once they circulate on the dark web. Major technology companies and social media platforms have taken measures to curb non-consensual deepfake production and distribution, aiming to blunt their effectiveness as tools for sextortion attacks.

In response, Google banned the training of AI systems that generate deepfakes on its publicly available platform, Colaboratory, while Meta (formerly Facebook) has been developing deepfake detection technology to combat the circulation of harmful content on its platforms. Security researchers and vendors are also working on similar solutions to stay ahead of threat actors.

The issue of deepfake pornography has plagued the internet for years, victimizing celebrities and creating a market for deepfake adult videos. The problem further intensified with the advent of AI-generated deepfake images. Numerous platforms have been discovered hosting explicit deepfake content, easily accessible through regular search engines, and even accepting payments via major credit cards.

To mitigate the damage caused by sextortion attacks, the FBI advises caution when posting or sharing personal information online. The bureau emphasizes that seemingly innocuous images or videos can provide malicious actors with ample content to exploit for criminal activities. It also recommends running regular searches for one’s personal information and using reverse image search engines to identify any photos or videos that may be circulating without one’s knowledge.

KEY HIGHLIGHTS

• Federal authorities have observed an increase in sexualized deepfake images in sextortion campaigns since April.

• Deepfake technology is being used to generate explicit content that appears to feature the targeted individuals, intensifying the threats.

• Sextortion has become a growing industry, with millions of dollars extorted and numerous victims, including minors.

• Congressman Joe Morelle introduced legislation to criminalize non-consensual deepfakes.

• Tech companies and social media platforms are taking measures to combat the production and distribution of deepfakes.

• Deepfake pornography has victimized individuals, with websites easily accessible and accepting payments through major credit cards.

• The FBI advises caution regarding online presence and suggests running searches for personal information and reverse image searches to identify circulating content.
