FraudGPT: AI-Equipped Cybercriminals Have Arrived


Large Language Models (LLMs) like ChatGPT have sparked a slew of articles and research into how AI will change the way we live and work. Already, there's a proliferation of AI assistants that use LLMs to facilitate the writing process, help with data analysis, and more, with the expectation that AI will be adopted even more widely across nearly every industry.

Unfortunately, like so many tools created with (and delivering on) positive intent, LLMs have become something cybercriminals can leverage to improve the quality of their attacks. Tools like FraudGPT and WormGPT are now sold on the dark web as a way for criminals to boost the efficacy of their attacks, leaving many cybersecurity professionals wondering what they can do to protect their organizations.

But First, What is FraudGPT?

FraudGPT and related LLMs like WormGPT are, at their core, unrestricted generative AI models. That means they can consume massive amounts of unstructured data to learn how to present an adaptive, interactive interface for criminals to use, without the safety protocols ("guardrails") built into OpenAI's ChatGPT, Microsoft's Copilot, or Google's Bard.
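To make the "guardrail" idea concrete, here is a minimal, purely illustrative Python sketch of the kind of input-side filter mainstream services place in front of a model and that black-hat models omit. Real guardrails rely on trained safety classifiers and output-side filtering rather than a keyword list; every name here is hypothetical:

```python
# Hypothetical sketch of an input-side LLM guardrail. Mainstream
# services use trained safety classifiers and output filtering;
# a keyword denylist is only a stand-in for the concept.
BLOCKED_TOPICS = ("write malware", "phishing email", "steal credentials")

def guarded_generate(prompt: str, generate) -> str:
    """Refuse clearly abusive prompts before they reach the model."""
    lowered = prompt.lower()
    if any(topic in lowered for topic in BLOCKED_TOPICS):
        return "This request violates the usage policy."
    # `generate` stands in for the underlying model call.
    return generate(prompt)
```

An unrestricted model like FraudGPT simply skips this screening step (and its far more sophisticated real-world equivalents), passing any request straight through to generation.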

In addition to lacking the guardrails that typically come with popular LLMs, there is some indication that malicious ("black hat") LLMs like WormGPT don't require human feedback the way ChatGPT does, which allows them to learn continuously from data across the internet and produce far subtler output than their white hat counterparts.

The lack of guardrails and the unrestricted data collection are dangerous because together they make these models a turnkey cyberattack tool. FraudGPT's technology is fairly rudimentary at the moment, but any criminal who subscribes to the service gets access to a library of phishing emails and malware code, allowing someone with no previous skill to spin up an attack campaign that lacks the clear hallmarks most employees are trained to spot.

How FraudGPT Changes the Cybersecurity Landscape

While LLMs like ChatGPT have created enormous opportunities for businesses and individuals, it's important to recognize that their very existence poses a risk as well. Even before the release of FraudGPT, fraudsters could leverage ChatGPT prompts to create an attack plan or a series of phishing emails aimed at organizations.

Now, with the release of FraudGPT and WormGPT, there has been a shift, though so far only a slight one from where we were before. These tools are more dangerous for the reasons cited above, but they're more an omen of what's to come. Technology of this type advances rapidly, which makes it likely that code and content generated by black hat LLMs will become increasingly difficult to detect, not just for employees but for cybersecurity measures that aren't kept up to date.

Is There a Way to Defend Against FraudGPT?

Knowing that a tool like FraudGPT exists can be frightening to anyone, because it makes it clear that no one can truly be 100% safe from a cyberattack. It's worth noting, though, that even before FraudGPT, the question was never if an organization could be breached, but how long it would take before a breach occurred. Even the most secure systems in the world can't prevent a cyberattack; they can only delay it.

So, how do you defend against FraudGPT-equipped attackers? Since the software is in its infancy, many of the current tools you'd use can be effective. Apart from standard defenses like firewalls and VPNs, it's critical that organizations have a few things in place:

- Updated security policies and procedures
- Modern cyber defense infrastructure
- A plan for what to do after an attack occurs

Together, these are the best defenses any organization has against cybercriminals of every stripe.
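Because AI-generated phishing strips away the language "tells" employees rely on, technical signals such as email authentication carry more weight in that defense infrastructure. As a minimal illustration (not part of any specific product, and with a purely hypothetical quarantine policy), here's a Python sketch that reads an inbound message's Authentication-Results header (RFC 8601) and flags failures:

```python
# Minimal sketch: surface SPF/DKIM/DMARC results as one technical
# layer against phishing that reads fluently. The quarantine policy
# below is a hypothetical example, not a complete filter.
import email
from email import policy

def authentication_verdict(raw_message: bytes) -> dict:
    """Extract SPF/DKIM/DMARC results from the Authentication-Results header."""
    msg = email.message_from_bytes(raw_message, policy=policy.default)
    header = msg.get("Authentication-Results", "")
    verdict = {}
    for mechanism in ("spf", "dkim", "dmarc"):
        # Results appear as tokens like "spf=pass" or "dkim=fail".
        for token in header.replace(";", " ").split():
            if token.startswith(mechanism + "="):
                verdict[mechanism] = token.split("=", 1)[1]
    return verdict

def should_quarantine(verdict: dict) -> bool:
    # Hypothetical policy: quarantine anything that fails DMARC,
    # or that fails both SPF and DKIM.
    if verdict.get("dmarc") == "fail":
        return True
    return verdict.get("spf") == "fail" and verdict.get("dkim") == "fail"
```

The point isn't this specific check; it's that when attackers can generate flawless prose, defenses need to lean on signals the attacker can't fake as easily.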

The AI Future of Cybersecurity

Whether you come down on the side of optimism or pessimism about their impact on society, generative AI tools are here to stay, and they are already changing the landscape for cybersecurity professionals. While the tools available today to protect your IT environment may be sufficient, cyber threats will continue to grow more sophisticated as time goes on, and cybersecurity professionals will need to keep pace.

With the introduction of AI into the cyber threat landscape, many security professionals are looking at ways of integrating AI into defensive measures to protect their data, including using AI-based defense systems. Microsoft Security Copilot and Cisco's AI-first Security Cloud are examples of the new era of cybersecurity tools that leverage the promise of AI to better protect organizations against attacks. These tools have been in development for years and have advanced rapidly over the past two. Over time, they will become a growing part of the toolbox used by cybersecurity experts everywhere.
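As a rough illustration of the idea behind such tools (scoring activity for anomalies rather than matching fixed attack signatures), here's a small sketch using scikit-learn's IsolationForest on made-up login features. It is not how Security Copilot or Cisco's products work internally, only a toy version of the underlying concept:

```python
# Illustrative sketch of AI-assisted defense: learn what "normal"
# activity looks like, then flag outliers, instead of matching
# known-bad signatures. Features and data are made up.
import numpy as np
from sklearn.ensemble import IsolationForest

# Hypothetical features per login: [hour_of_day, failed_attempts, new_device]
historical_logins = np.array([
    [9, 0, 0], [10, 1, 0], [14, 0, 0], [11, 0, 1], [16, 2, 0],
])
model = IsolationForest(contamination=0.1, random_state=0).fit(historical_logins)

suspicious = np.array([[3, 8, 1]])   # 3 a.m., many failures, new device
print(model.predict(suspicious))     # -1 flags an anomaly, 1 means normal
```

Signature-free approaches like this matter against LLM-assisted attackers precisely because generated phishing and malware won't match yesterday's signatures.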

Not sure how prepared your organization is for the future of cybersecurity? Talk with an expert about how to prepare.