Digital Deception: How Cyber Criminals Clone Celebrities to Promote Fake Investments

Former U.S. President Donald Trump posing with Black voters, Indian PM Narendra Modi in a fabricated romance with Italian PM Giorgia Meloni, President Joe Biden discouraging voting over the telephone, and the Pope sporting a puffy white jacket—these are just a few examples of deepfake videos, photos, and audio recordings that have proliferated across internet platforms. This surge in manipulated media has been aided by advances in generative AI tools such as Midjourney, Google’s Gemini, and OpenAI’s ChatGPT.

With the right prompt tuning, individuals can create seemingly authentic images or make prominent political, economic, and entertainment figures appear to say anything they desire. While creating a deepfake is not inherently illegal, governments worldwide are increasingly considering stronger regulations to mitigate potential harm to the people depicted.

Beyond entertainment, deepfake technology has been misused for nefarious activities such as creating non-consensual pornographic content, predominantly targeting female celebrities, and committing identity fraud by manufacturing fake IDs or impersonating others over the phone. According to a report by identity verification provider Sumsub, cases of deepfake-related identity fraud surged dramatically between 2022 and 2023 across numerous countries.

The Philippines, for instance, saw a staggering 4,500 percent increase in fraud attempts, followed by Vietnam, the United States, and Belgium. Pavel Goldman-Kalaydin, Sumsub’s Head of Artificial Intelligence and Machine Learning, warned of the growing sophistication of deepfakes and their potential to evolve into new forms of fraud, including voice manipulation. He stressed the importance of adopting multi-layered anti-fraud solutions to combat synthetic fraud effectively.

These concerns are shared by cybersecurity experts, as highlighted in a survey conducted during the World Economic Forum Annual Meeting on Cybersecurity in 2023. The survey revealed that 46 percent of respondents identified the advancement of adversarial capabilities, including deepfakes, as a significant risk to cybersecurity in the future.

The implications of deepfake technology extend beyond cybersecurity, posing challenges to privacy, misinformation, and the ethical use of AI. As governments and organizations grapple with these challenges, proactive measures and collaborative efforts will be crucial in addressing the growing threat of deepfakes.

This story is based on data and insights provided by Statista, a leading provider of market and consumer data.
