AI-Generated Virus: OpenAI’s ChatGPT Creates Polymorphic Malware


Cybersecurity researchers at CyberArk have revealed how the ChatGPT AI chatbot can be used to generate a new strain of polymorphic malware.

According to a technical blog by Eran Shimony and Omer Tsarfati, malware generated with ChatGPT can circumvent security products and complicate mitigation, at little effort or expense to the attacker.

Furthermore, the bot can construct highly advanced malware whose initial file contains no overtly malicious code at all, making it difficult to identify and mitigate. This is worrying because hackers are already keen to utilise ChatGPT for harmful purposes.


What exactly is Polymorphic Malware?

Polymorphic malware is a class of malicious software that can modify its own code to avoid detection by antivirus tools. It is a particularly dangerous threat because it can adapt and spread rapidly before security measures catch up.

Polymorphic malware typically works by changing its appearance with each iteration, making it difficult for antivirus software to detect.

Polymorphism is achieved in two ways: first, the code mutates slightly with each replication, so no two copies share the same signature; second, the malicious code may contain encrypted components, making the virus harder to analyse and identify.

This makes it difficult for standard signature-based detection engines, which look for recognised patterns associated with malicious software, to detect and prevent the spread of polymorphic threats.
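To make the signature problem concrete, here is a minimal, harmless Python sketch (illustrative only, not taken from the CyberArk research) showing how the same benign payload, re-encoded with a fresh random XOR key each time, produces a different file hash on every iteration while the decoded content stays identical:

```python
import hashlib
import os

def encode(payload: bytes, key: bytes) -> bytes:
    # Simple XOR encoding: same payload, different bytes for each key.
    # XOR is its own inverse, so encoding twice with one key decodes.
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(payload))

# A completely benign "payload" standing in for malicious code.
payload = b"print('hello from the payload')"

for i in range(3):
    key = os.urandom(8)                   # fresh random key each iteration
    encoded = key + encode(payload, key)  # key prepended, as a dropper might store it
    print(f"variant {i}: sha256 = {hashlib.sha256(encoded).hexdigest()[:16]}...")

# Decoding any variant recovers the identical original payload:
key, body = encoded[:8], encoded[8:]
assert encode(body, key) == payload
```

Every run yields a distinct hash, so a signature written against one variant never matches the next, even though the underlying behaviour is unchanged.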


Polymorphic Malware and ChatGPT

Shimony and Tsarfati wrote that the first step in generating the malware was to bypass the content filters that stop the chatbot from producing harmful code. They accomplished this by adopting a commanding, insistent tone in their prompts.

The researchers instructed the bot to complete the task while adhering to multiple constraints, and received functional code in return.


They also noticed that when they used the API version of ChatGPT instead of the web version, the system did not appear to apply its content filter. The researchers were unsure why this was the case, but it made their job easier, as the web version tended to struggle with more complex requests.
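For context, querying the model through the API rather than the web interface looks roughly like the sketch below. This assumes the pre-1.0 `openai` Python SDK and the `text-davinci-003` completion model that were current at the time of the research; the API key is a placeholder and the prompt is deliberately benign:

```python
import openai

openai.api_key = "YOUR_API_KEY"  # hypothetical placeholder

# A benign request, sent through the API rather than the chat web UI.
response = openai.Completion.create(
    model="text-davinci-003",
    prompt="Write a Python function that returns the SHA-256 hash of a file.",
    max_tokens=256,
    temperature=0.9,  # higher temperature produces more varied outputs
)

print(response.choices[0].text)
```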

Shimony and Tsarfati utilised the bot to mutate the original code, producing numerous distinct variants.

“In other words, we may alter the output on the fly, making it unique each time. Furthermore, adding limits such as restricting the use of a single API call complicates the lives of security solutions,” researchers noted.
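As an illustration of that "unique each time" property, a caller could request the same functionality repeatedly at a non-zero sampling temperature and hash each response; the hashes differ even though the requested behaviour does not. The sketch below reuses the hypothetical API setup from the previous example and is again limited to a benign prompt:

```python
import hashlib
import openai

openai.api_key = "YOUR_API_KEY"  # hypothetical placeholder

PROMPT = "Write a Python function that returns the SHA-256 hash of a file."

digests = set()
for i in range(3):
    resp = openai.Completion.create(
        model="text-davinci-003",
        prompt=PROMPT,
        max_tokens=256,
        temperature=0.9,  # sampling variability yields different code each call
    )
    code = resp.choices[0].text
    digest = hashlib.sha256(code.encode()).hexdigest()[:16]
    print(f"variant {i}: {digest}...")
    digests.add(digest)

# Functionally equivalent answers, but almost certainly no two byte-identical:
print(f"{len(digests)} distinct variants out of 3 requests")
```

This is exactly what defeats signature matching: each generated variant is a new artefact, so a defender cannot block the family by fingerprinting any single sample.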


By continuously creating and mutating injectors, they were able to build a polymorphic program that was highly evasive and difficult to detect. The researchers believe that by leveraging ChatGPT's capacity to generate varied persistence techniques, malicious payloads, and anti-VM modules, attackers could create a wide variety of malware.

The researchers did not detail how the malware would communicate with its command-and-control (C2) server, but they were confident it could be done covertly. CyberArk intends to release some of the malware's source code for developers to study.
