Criminals Leveraging AI Tools to Rewrite and Obfuscate Malware, Evading Detection

Cybersecurity researchers have identified a concerning trend: large language models (LLMs) are being leveraged to generate sophisticated variants of malicious JavaScript at scale, sharpening the malware's ability to bypass detection systems.

According to a report by Palo Alto Networks’ Unit 42, while LLMs struggle to create malware from scratch, they excel at transforming existing malicious code into harder-to-detect variants. “Criminals can use LLMs to rewrite or obfuscate malware in ways that appear more natural, making detection significantly more challenging,” the researchers noted.

Weaponizing LLMs for Malware Obfuscation
Unit 42 demonstrated how LLMs can iteratively modify malicious code, employing techniques like variable renaming, string splitting, junk code insertion, and even full reimplementation of the code. This process generates up to 10,000 unique JavaScript variants without altering the malware’s functionality.
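
To make the mechanics concrete, the toy Python sketch below applies the same kinds of transformations (variable renaming, string splitting, junk-code insertion) to a harmless JavaScript snippet. It illustrates why functionally identical variants are cheap to mass-produce; it is not Unit 42's LLM-driven pipeline, and the sample snippet and helper names are invented for illustration.

import random
import re

# Toy illustration of the transformation types named above (variable renaming,
# string splitting, junk-code insertion), applied to a harmless JavaScript
# snippet held in a Python string. It is not Unit 42's pipeline.

SNIPPET = 'var greeting = "hello world"; console.log(greeting);'

def rename_variables(js):
    # Replace every declared variable name with a random identifier.
    for name in re.findall(r'\bvar\s+(\w+)', js):
        new_name = '_' + ''.join(random.choices('abcdef0123456789', k=6))
        js = re.sub(r'\b' + re.escape(name) + r'\b', new_name, js)
    return js

def split_strings(js):
    # Split each string literal into two concatenated halves.
    def splitter(match):
        s = match.group(1)
        mid = len(s) // 2
        return '"' + s[:mid] + '" + "' + s[mid:] + '"'
    return re.sub(r'"([^"]{2,})"', splitter, js)

def insert_junk(js):
    # Prepend a do-nothing statement that never affects behaviour.
    junk = 'var _unused%d = %d; ' % (random.randint(0, 999), random.randint(0, 999))
    return junk + js

def make_variant(js):
    return insert_junk(split_strings(rename_variables(js)))

if __name__ == '__main__':
    for _ in range(3):
        print(make_variant(SNIPPET))

Each run prints scripts that look different but behave identically; an LLM performing the rewriting simply produces edits that read far more like code a human would write.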

The study revealed that such transformations could degrade the performance of machine learning (ML) malware classifiers, tricking systems into misclassifying harmful code as benign. In Unit 42’s experiments, the rewritten samples flipped their own classifier’s verdict from malicious to benign 88% of the time.

What’s more alarming is that these rewritten scripts also evade detection on platforms like VirusTotal. Unlike traditional obfuscation tools such as obfuscator.io, LLM-generated rewrites produce more natural-looking code, making them harder to fingerprint.

Generative AI: A Double-Edged Sword
While malicious actors exploit LLMs for obfuscation, the same technology can be used to bolster defenses. “We can leverage these methods to generate diverse training data for improving ML models, enhancing their ability to detect new malware variants,” Unit 42 added.
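
A hedged sketch of that defensive idea follows: rewritten variants inherit the label of the script they were derived from and are folded back into the training set, so a classifier sees many surface forms of the same behaviour. The rewriter, features, and model below are illustrative placeholders, not Unit 42's tooling.

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

def augment(scripts, labels, rewriter, n_variants=5):
    # Each rewritten variant keeps the label of the script it came from.
    aug_scripts, aug_labels = list(scripts), list(labels)
    for script, label in zip(scripts, labels):
        for _ in range(n_variants):
            aug_scripts.append(rewriter(script))
            aug_labels.append(label)
    return aug_scripts, aug_labels

def train(scripts, labels):
    # Character n-gram features stand in for whatever features a real
    # JavaScript malware classifier would use.
    model = make_pipeline(
        TfidfVectorizer(analyzer='char', ngram_range=(3, 5)),
        LogisticRegression(max_iter=1000),
    )
    model.fit(scripts, labels)
    return model

Any robustness gain would still need to be checked against held-out variants produced by a different rewriter, since training and evaluating on the same transformations proves little.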

Side-Channel Attacks on Google Edge TPUs
In another alarming development, researchers from North Carolina State University have uncovered a side-channel attack called TPUXtract, capable of stealing AI models running on Google Edge Tensor Processing Units (TPUs).

By analyzing electromagnetic signals emitted during neural network operations, the attack extracts hyperparameters such as layer configurations, filter sizes, and activation functions with 99.91% accuracy. This could enable attackers to recreate proprietary AI models or develop close surrogates.

However, TPUXtract requires physical access to the target device and expensive equipment, putting it within reach only of well-resourced adversaries.
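
Conceptually, this style of extraction compares the signal captured while each layer executes against precomputed signatures for candidate hyperparameter settings and keeps the best match, layer by layer. The toy sketch below shows that matching step on purely synthetic data; it illustrates the principle only, not the TPUXtract methodology, and every signature and configuration in it is invented.

import numpy as np

rng = np.random.default_rng(0)

# Invented per-layer electromagnetic "signatures" for a few candidate
# hyperparameter settings; a real attack would derive these from profiling.
CANDIDATES = {
    ('conv', 3, 'relu'): rng.normal(size=256),
    ('conv', 5, 'relu'): rng.normal(size=256),
    ('dense', 0, 'sigmoid'): rng.normal(size=256),
}

def best_match(observed):
    # Pick the candidate whose signature correlates most strongly with the
    # observed trace segment for this layer.
    scores = {
        config: abs(np.corrcoef(observed, signature)[0, 1])
        for config, signature in CANDIDATES.items()
    }
    return max(scores, key=scores.get)

# Simulate a capture: the true layer's signature plus measurement noise.
true_config = ('conv', 5, 'relu')
observed = CANDIDATES[true_config] + 0.3 * rng.normal(size=256)
print(best_match(observed))  # expected to recover ('conv', 5, 'relu')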

Manipulating Exploit Prediction Scoring Systems
Separately, Morphisec researchers have demonstrated vulnerabilities in AI-based risk assessment frameworks like the Exploit Prediction Scoring System (EPSS). The proof-of-concept attack manipulates EPSS by inflating external activity indicators such as social media mentions and GitHub repository activity.

For instance, creating random posts about a vulnerability on X (formerly Twitter) and uploading placeholder exploit files to GitHub increased the perceived risk score of a software flaw. The manipulated vulnerability’s ranking rose from the 41st percentile to the 51st, misleading organizations relying on EPSS for prioritizing threat management.
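
The underlying weakness is easy to demonstrate with a toy scoring model: if externally observable activity feeds the features, an attacker who can inflate that activity can move the output. The weights and features below are invented for illustration and are not the real EPSS model.

import math

# Invented feature weights for a logistic model; not the real EPSS model.
WEIGHTS = {'social_mentions': 0.05, 'github_exploit_repos': 0.8, 'cvss': 0.3}
BIAS = -6.0

def score(features):
    # Standard logistic regression: weighted sum of features squashed to (0, 1).
    z = BIAS + sum(WEIGHTS[name] * value for name, value in features.items())
    return 1.0 / (1.0 + math.exp(-z))

quiet = score({'social_mentions': 2, 'github_exploit_repos': 0, 'cvss': 5.0})
# The attacker posts about the CVE and uploads placeholder "exploit" repositories.
noisy = score({'social_mentions': 40, 'github_exploit_repos': 3, 'cvss': 5.0})
print('before: %.3f  after: %.3f' % (quiet, noisy))

Nothing about the flaw itself changes in that scenario; only the publicly visible chatter around it does.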

The Bigger Picture
These developments underscore the growing complexity of the cybersecurity landscape as AI tools become more integrated into both attack and defense strategies. While LLMs and other AI technologies offer immense potential, they also introduce new vulnerabilities that require proactive measures to mitigate.

As threat actors continue to innovate, the need for robust defenses and adaptive countermeasures has never been more critical.
