Cybercriminals are exploiting a powerful chatbot developed by artificial intelligence (AI) firm OpenAI, an Israeli cyber security company has claimed.
The revelation comes at a time when tech giant Microsoft is in talks for a $10 billion investment in OpenAI, which developed ChatGPT, a chatbot that has gained widespread popularity since its launch in November 2022.
“Hackers are using ChatGPT to develop powerful hacking tools and create new chatbots designed to mimic young girls to lure targets,” Israeli security firm Check Point has claimed in a new report.
What Is ChatGPT?
According to San Francisco-based OpenAI, ChatGPT is a model trained to interact in a conversational way. The dialogue format makes it possible for ChatGPT to answer follow-up questions, admit its mistakes, challenge incorrect premises, and reject inappropriate requests.
ChatGPT is a sibling model to InstructGPT, which is trained to follow an instruction in a prompt and provide a detailed response.
Hackers Abusing ChatGPT to Build Nefarious Online Tools
The Check Point report details how cybercriminals are abusing ChatGPT to build these hacking tools and deceptive chatbots.
“ChatGPT can also code malicious software that can monitor users’ keystrokes, and create ransomware,” the report claimed. ChatGPT was developed by OpenAI as an interface for its large language model (LLM).
However, cybercriminals have turned the tool into a threat: its code-generation capability lowers the barrier for threat actors looking to launch cyberattacks.
Separately, Hold Security’s founder Alex Holden claimed he has observed “dating scammers” exploiting ChatGPT to create convincing personas.
“Scammers are creating female personas to gain trust and have lengthier conversations with their targets,” he claimed.
Check Point’s researchers explained in their blog post that an attacker could use ChatGPT to craft an authentic-looking spear-phishing email and to generate a script that opens a reverse shell capable of accepting commands in plain English.
“Many underground hacking forums have posted about incidents where cybercriminals used OpenAI to develop malicious tools, even by those with no development skills. In one of the posts that Check Point reviewed, a hacker shared Android malware code written by ChatGPT, which could steal desired files, compress them, and leak them on the web,” it stated.
“One user shared how they abused ChatGPT by using it to code features of a Dark Web marketplace, like Silk Road and AlphaBay,” it noted.
The Israeli cyber security firm claimed that another tool posted on the forum could be used to install a backdoor on a device and upload more malware onto the compromised computer, while yet another user shared Python code, written with the OpenAI app, capable of encrypting files. Check Point researchers observed that this was the first tool that user had created.
“Such code could be used for malicious purposes and modified to encrypt a device without any user interaction, just like ransomware,” the researchers said.
“Moreover, scammers can also use ChatGPT to build bots and sites that trick users into sharing their information, and to launch highly targeted social engineering scams and phishing campaigns,” the researchers warned.