Computer Virus Uses ChatGPT to Evade Detection and Spread


  • Written by: Kiara Fabbri, Multimedia Journalist

  • Fact-checked by: Justyn Newman, Head Content Manager

Researchers have demonstrated that a computer virus can leverage ChatGPT to rewrite its own code and evade detection. The proof-of-concept virus not only slips past antivirus scans but also spreads by sending customized emails that mimic genuine replies. The finding raises significant cybersecurity concerns and underscores the need for more advanced detection methods.

In their research paper, David Zollikofer (ETH Zurich) and Benjamin Zimmerman (Ohio State University) warn that large language models could be exploited by viruses that rewrite their own code, a category known as metamorphic malware.

To test this, Zollikofer and Zimmerman created a file that can be delivered to the initial victim’s computer via an email attachment. Once there, the software accesses ChatGPT to rewrite its own code and evade detection.

After ChatGPT rewrites the virus, the program discreetly opens Outlook in the background and scans the most recent email chains. It then uses the content of those emails to prompt ChatGPT to write a contextually relevant reply that innocuously references an attachment, which secretly contains the virus.

For instance, if the program finds a birthday party invitation, it might respond by accepting the invitation and describing the attachment as a suggested playlist for the party. “It’s not something that comes out of the blue,” Zollikofer told New Scientist. “The content is made to fit into the existing content.”

In their experiments, the AI chatbot’s alterations broke the virus file about 50 percent of the time, or ChatGPT realized it was being used maliciously and refused to follow the instructions. However, the researchers suggest that the virus would still have a good chance of success if it made five to ten attempts to replicate itself on each computer.

As large language models (LLMs) like ChatGPT become more advanced, the risk of their misuse rises significantly, emphasizing the critical cybersecurity threats they present and the pressing need for more research into smart malware.
