
ChatGPT Misuse: Uncovering The Hidden Threats In Cyberspace – FCRF & Bennett University Webinar


NEW DELHI: The Future Crime Research Foundation (FCRF) and Bennett University held a highly informative and engaging webinar on the potential risks of ChatGPT in cyberspace. The virtual event, held on January 23, featured a panel of experts who discussed various aspects of ChatGPT and its impact on cybersecurity.

The expert panel for the discussion included Air Vice Marshal (Dr) Devesh Vatsa, VSM (Retd), a renowned expert on cyber defense and information security; Dinesh Bareja, Founder & COO of Open Security Alliance; Samir Datt, founder and CEO of Forensics Guru.com; Prof Triveni Singh, SP, Cyber Crime, UP Police; Nuthan Gowda, Co-Founder of Expanz; and Dr. Tapas Badal, Associate Professor, School of Computer Science Engineering and Technology, Bennett University.


The webinar opened with Air Vice Marshal (Dr) Devesh Vatsa, VSM (Retd), who provided a comprehensive overview of the potential risks of ChatGPT, including misinformation, privacy concerns, bias, cybersecurity threats and job displacement.

“Users are currently using it for fun as well as for creating baking recipes, designing interiors with text-to-image generators, writing lyrics, writing content, writing essays, solving complex mathematical problems, doing assignments, writing malware and more. The technology is both awesome and terrifying. The situation will become alarming as the usage of ChatGPT increases in the near future. How many jobs will this technology kill? Will this empower nefarious actors and utterly corrupt our public discourse? How will this disrupt our education system? What is the point of learning to write an essay at school when AI, which is expected to get exponentially better in the near future, can do that?” said Air Vice Marshal (Dr) Devesh Vatsa.

Vatsa explained that ChatGPT can be used to write malware, meaning even novice cyber criminals can operate like professional hackers. Because the model can learn, the malware it helps create could become so sophisticated that it cannot be detected easily. ChatGPT can also be used for propaganda, as fake news can easily be created to change public narratives, and deepfakes could be produced as well.

The Air Force veteran said ChatGPT could be used to write phishing emails, as it can produce content free of grammatical errors. This makes an email look more authentic and increases the chances of the receiver opening it.

ChatGPT could also be used to help draft the scripts dating and romance scammers rely on when trying to convince their potential victims to part with their money or cryptocurrency.

He emphasized that many of us lost the will, and even the ability, to remember phone numbers when cell phones came along. By outsourcing memorization to machines, we have become dependent on them to call our friends and families. Now humanity faces the prospect of even greater dependence on machines. It is possible we are heading towards a world where an even larger swath of the population loses its ability to write well.

Air Vice Marshal (Dr) Devesh Vatsa.

Highlighting the positive aspect, Vatsa said, “Despite inherent cyber security concerns, this tool has greater potential to be used for good. It can be effective at spotting critical coding errors, describing complex technical concepts in simple language, and even developing scripts and resilient code, among other examples. Researchers, practitioners, academia, and businesses in the cybersecurity industry can harness the power of ChatGPT for innovation and collaboration. It will be interesting to follow this emerging battleground for computer-generated content as it enhances capabilities for both benign and malicious intent.”

The second panelist, Dinesh Bareja, explaining the potential threats, said novices are gaining easy access to a tool that can help create malware and also provide them with the tactics, techniques and procedures for using it successfully.

Dinesh Bareja, Founder & COO of Open Security Alliance

“Rampant use across all domains, where one may see research papers, presentations and knowledge sessions filled with ChatGPT content, with academia and students being the biggest losers, as their learning will increasingly depend on AI-based output that is bound to have built-in biases that will stifle (or erase) independent thought with time,” Bareja said.

Explaining the way forward, he said the country must develop a national-level strategy and policy for the adoption of AI solutions across the government and private sectors.

Bareja said it is important to define a code of conduct and ethics for the development, deployment and use of such solutions, including how their biases are managed. India, he added, must build national-level GPTs on its own data.

He said the country must provide funding for research, directing it to institutes and individuals who are actually doing the work, and ensure the output is not just an ornament for a journal but a workable solution.

Samir Datt, the founder and CEO of Forensics Guru.com, provided insights into the forensic challenges posed by ChatGPT in the field of cybercrime investigation. He discussed how ChatGPT can be used to automate or facilitate cybercrime activities and the steps that can be taken to prevent it.

Explaining how to stop the potential misuse of ChatGPT in cyberspace, Dr Tapas Badal said people should be educated about the potential dangers of GPT-3 and other AI-based technologies. This could include highlighting cases of misuse and the implications of misusing GPT-3.

Dr Tapas Badal

Dr Badal explained that governments can implement laws and regulations that restrict or prevent the misuse of GPT-3. This could include requiring proper authentication for accounts using GPT-3, or ensuring that GPT-3 is only used for its intended purpose.

“Companies and organizations should have proper monitoring systems and processes in place to detect and respond to any suspicious activity with GPT-3. Companies should be transparent about the use of GPT-3 and the risks associated with it. This could include providing clear guidelines for responsible use and warning users of potential misuse,” said Dr Tapas Badal.

He highlighted that technical solutions such as CAPTCHA codes, tokenized authentication and other measures can be used to help combat the potential misuse of GPT-3.
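As a rough illustration of the tokenized authentication Dr Badal pointed to, the sketch below shows a minimal, hypothetical gatekeeper in Python that accepts a request to a language-model service only if it carries a known API token and stays within that token's per-minute rate limit. The token values, limits and function names here are assumptions made purely for illustration and do not describe any real GPT-3 deployment.

import hmac
import time
from collections import defaultdict

# Hypothetical store of issued API tokens and their per-minute request limits.
# In a real deployment these would come from a secrets manager or a database.
ISSUED_TOKENS = {"demo-token-123": 30}   # token -> allowed requests per minute
_recent_requests = defaultdict(list)     # token -> timestamps of recent requests

def is_request_allowed(token):
    """Allow a request only if it carries a known token and stays within
    that token's per-minute rate limit."""
    now = time.time()

    # Compare against each issued token in constant time to avoid timing leaks.
    matched = next((t for t in ISSUED_TOKENS if hmac.compare_digest(t, token)), None)
    if matched is None:
        return False  # unknown or revoked token

    # Keep only timestamps from the last 60 seconds, then enforce the limit.
    window = [ts for ts in _recent_requests[matched] if now - ts < 60]
    if len(window) >= ISSUED_TOKENS[matched]:
        _recent_requests[matched] = window
        return False  # rate limit exceeded

    window.append(now)
    _recent_requests[matched] = window
    return True

print(is_request_allowed("demo-token-123"))  # True: issued token, under its limit
print(is_request_allowed("stolen-token"))    # False: token was never issued

Gating access in this way ties every request back to an accountable account and makes bulk abuse, such as mass-generating phishing text, easier to throttle and to investigate.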

The webinar was attended by a diverse audience of cybersecurity professionals, researchers, and the general public. The attendees had the opportunity to ask questions and engage in interactive discussions with the panel of experts. The experts provided valuable insights and answered the questions with great precision.

The FCRF, a non-profit organization dedicated to promoting cyber resilience and the responsible use of technology, believes that it is important to educate the public about the potential risks of ChatGPT in cyberspace. “As ChatGPT is becoming more prevalent in various industries and sectors, it is crucial that we understand the potential risks and take steps to mitigate them,” said Shashank Shekhar, co-founder of FCRF.

The event was well received by all attendees and has been praised for providing valuable insights into the potential risks of ChatGPT in cyberspace. The FCRF plans to hold more such events in the future to raise awareness about the responsible use of technology. For any queries or feedback, the FCRF team can be reached at research@futurecrime.org.
