
Dark AI is fueling cybercrime – and accelerating the cybersecurity arms race
By Mike Wehner | Published: 2025-10-24 13:00:00 | Source: The Future – Big Think
Sign up for Big Think on Substack
The most surprising and impactful new stories delivered to your inbox every week for free.
In June 2023, just seven months after OpenAI first invited curious tech enthusiasts to try out a “research preview” of the now-ubiquitous ChatGPT tool, a lesser-known chatbot called WormGPT It officially launched with a completely different target audience: hackers.
Its creator offered potential customers access to a large language model (LLM) with no built-in guardrails—one that wouldn’t back down when asked to do something nefarious, like create a phishing email, write malware code, or help with a phishing scheme. It was later claimed that more than 200 users paid upwards of €500 (about $540) per month for the tool, with some paying up to €5,000 ($5,400) for a private, full-featured installation.
WormGPT was officially shut down a few months after its launch, around the same time as security researcher Brian Krebs. He wrote a lengthy exposition Which revealed the identity of its creator, Rafael Moraes. Moraes, who said the majority of WormGPT was not specifically encrypted by him, claimed that the tool was meant to be neutral and uncensored, and not explicitly malicious. He has never been charged with a crime, and it’s not clear how much damage, if any, WormGPT users have done to the world.
In the more than two years since then, unfettered AI models and AI-driven cybercrime tools — loosely grouped together under the term “dark AI” — have increased in number and popularity, with creators primarily using the dark web to communicate with their target customers: people eager to cause harm, conduct scams, or steal information and identities for profit. FraudGPT, which launched just a month after WormGPT, has reportedly registered over 3,000 paid subscribers, while DarkGPT, XXXGPT, and Evil-GPT have all enjoyed varying levels of success. The WormGPT name itself has been taken over by other dark AI models as well, including keanu-WormGPT, which uses a jailbroken version of X’s Grok.
So, what do we know about dark AI, and what can we do about it?
Dark AI vs. Misused AI
Mainstream generative AI tools have guardrails against malicious uses, but they also contain vulnerabilities that allow people to bypass these guardrails.
If you ask the current version of ChatGPT to create a template for a phishing email, for example, it will politely decline. However, if you explain that you’re writing a fictional story about a scammer and want to include an example of an email that this not-so-real person might create, I’d be happy to create one for you. Interagent trust—an AI agent’s tendency to trust other AI agents by default—can also be exploited to get around guardrails built into common systems. A He studies Shared on a preprint server in July 2025, arXiv found that 14 out of 17 high-end LLM programs it tested were vulnerable to this type of exploit.
“Anyone with a computer…plus some technical knowledge can host the MBA and configure it for specific purposes.”
Crystal Morin
Tech-savvy criminals can also use many publicly available open source LLM programs as the basis for their unconstrained models, training them on broad sets of malicious code, data from malware attacks and phishing exploits, and other information that a bad actor (or cybersecurity researcher) would likely find valuable.
“Anyone with a computer, especially with a GPU, as well as some technical knowledge, can host an MBA and tune it for a specific purpose,” he says. Crystal Morina senior cybersecurity strategist Sisdig And a former intelligence analyst for the United States Air Force. “This is exactly how threat actors avoid the safeguards built into the most common generic models.”
“I know, for example, security practitioners who are experimenting with on-premises models, adapting them to different use cases,” she adds. “They couldn’t find a task where the AI couldn’t provide some sort of practical outcome.”
Tuning an open source model or bypassing a chatbot’s AI health checks requires at least a cursory understanding of how these systems work. The dark AI models available to online actors are even more dangerous because they completely lack guardrails — there is nothing to circumvent — and are already set up for malicious uses. A low-level criminal doesn’t need a lot of technical skills to take advantage of these AI tools, they just need to be able to write a direct claim for anything they want, and the dark AI will execute it.
Fight fire with fire
Security analysts have warned businesses, governments and the general public that hackers are stepping up their attacks in the wake of widespread adoption of artificial intelligence. Statistics support their claims: in the past two years, Ransomware attacks has risen, Exploits the cloud It has increased, and Average cost of a data breach It has reached an all-time high.
The simple fact that generative AI allows users to do more in less time means that hacking is now more efficient than ever, and the unfortunate truth is that we can’t undo the bell of AI.
“AI threats will continue, just as attackers will continue to innovate — and that’s exactly what they are doing,” Morin says. “Some cybercriminals even hold day jobs in cybersecurity, so they have the same skills and know defensive security inside and out. What matters is how defenders evolve and respond.”
But just as dark AI makes it easier for bad guys to launch attacks, other AI tools are helping security experts fight the battle head-on.
“Cybersecurity has always been an arms race, and AI has increased the risks.”
Crystal Morin
Microsoft, OpenAI, and Google are among the companies actively developing AI tools to prevent AI — in some cases, models they developed themselves — from being used for malicious purposes.
Microsoft Threat Intelligence Recently closed A massive phishing campaign believed to have been carried out with the help of artificial intelligence, and OpenAI has taken its fight to AI-generated images with a security tool that researchers can use. Detect fake photos. Google spent Much of this last summer Highlighting AI-based tools that developers can use to prevent AI from negatively impacting users. Meanwhile, Google DeepMind has proven that proactive defense against AI-based threats works in the real world The big sleepartificial intelligence that tests systems for vulnerabilities. It has already been determined Glaring loopholes in popular softwareincluding the Chrome web browser, and its success suggests that widespread automatic correction of security flaws may be just around the corner.
Red teaming — a practice in which ethical hackers test a system for vulnerabilities so they can be remedied before an actual attack — dates back to the Cold War, but it has taken on new meaning in generative AI circles. Cybersecurity experts are working now Complex simulations AI is tasked with the role of a toxic attacker, allowing organizations to test how their AI systems react to provocations.
AI is also adept at pattern recognition, which gives AI-based protection systems an advantage in detecting things like fraudulent email campaigns and phishing attacks, which can be frequent. Companies can deploy these tools to keep their employees safe from hacking attempts, while email and messaging providers can integrate them into their systems to prevent spam, malware, and other threats from ever reaching users.
“With attackers moving at the speed of AI, we have to adopt a real-time ‘assume compromise’ mentality to stay ahead – with our own trustworthy AI,” Morin says. “Cybersecurity has always been an arms race, and AI has increased the risks.”
Even the most sophisticated AI-based defenses face a fundamental and troubling challenge: the law itself. The torrent of new attacks can only be stopped by targeting the source, and legal frameworks are still keeping pace with the age of modern technology, especially with regard to artificial intelligence.
Stripping AI of safeguards and training it to help you (or someone else) write malware or create a convincing phishing email may be unethical, but it’s not necessarily illegal — AI researchers and cybersecurity analysts do it as part of their work. While the law attempts to make a clear distinction between “good faith” development and malicious end-use intent, it is not entirely clear.
Creating malware and sending phishing emails could land you in prison, but creating artificial intelligence that can do those things isn’t a crime, at least under federal law. It’s no different than buying a radar detector for your car. It’s not illegal to own the device, but in certain places, it’s a crime to be caught using it. This may be why there have been no high-profile convictions of dark AI creators and vendors to this point — law enforcement’s focus remains on those who use these tools for nefarious means.
The race continues
The emergence of dark AI is a worrying new development in cybersecurity, but it is not unprecedented. The history of digital security has been defined by the innovative defenses that have emerged to counter the most advanced threats. AI-powered attacks are the next chapter.
What makes this moment unique is the speed with which both sides are moving forward. Criminals can now launch large-scale attacks with virtually no due process, and defenders can use real-time AI to detect and shut down risks before they can impact users. Every time this happens – and it keeps happening – both sides get a little smarter.
We may not be able to ring the bell, but we can work toward a future where, for every nefarious AI tool or broken MBA that reaches a dark web forum, there is an innovative and timely defense to neutralize it.
Sign up for Big Think on Substack
The most surprising and impactful new stories delivered to your inbox every week for free.
ــــــــــــــــــــــــــــــــــــــــــــــــــــــــــــــــــــــــــــــــــــــــــــــــــــــــــــــــ
 





