The cyber threat landscape is continually evolving, with threat actors leveraging advanced technologies to carry out sophisticated attacks. One such development is the emergence of FraudGPT, a new artificial intelligence (AI) tool built for cybercrime. It is being advertised on dark web marketplaces and Telegram channels and is tailored for offensive purposes such as crafting spear-phishing emails, building cracking tools, and carding.

What is FraudGPT?

FraudGPT is a generative AI bot that follows in the footsteps of WormGPT. It is sold on a subscription basis for $200 a month, with discounted options of $1,000 for six months and $1,700 for a year. The actor behind the tool, who goes by the online alias CanadianKingpin, claims that FraudGPT can write malicious code, create undetectable malware, and find leaks and vulnerabilities, and boasts more than 3,000 confirmed sales and reviews.

Proposed Features

FraudGPT is designed with a wide range of capabilities to aid in cybercrime. Some of the proposed features include:

  1. Writing Malicious Code: The tool can generate harmful code that can be used to exploit vulnerabilities in systems or to create malware.
  2. Creating Undetectable Malware: The seller claims FraudGPT can produce malware that evades detection by traditional security systems.
  3. Finding Non-VBV BINs: Non-Verified by Visa (VBV) BINs are bank identification numbers (the leading digits of a card number) whose issuers do not enforce the additional VBV verification step, making cards in those ranges easier to exploit for fraudulent transactions.
  4. Creating Phishing Pages: The tool can generate convincing phishing pages to trick users into providing sensitive information.
  5. Creating Hacking Tools: FraudGPT can generate tools that can be used to exploit vulnerabilities or carry out attacks.
  6. Finding Groups, Sites, Markets: The tool can identify potential targets for cybercrime, including online groups, websites, and marketplaces.
  7. Writing Scam Pages/Letters: FraudGPT can craft convincing scam pages or letters to trick victims into falling for scams.
  8. Finding Leaks, Vulnerabilities: The tool can identify potential security vulnerabilities or data leaks that can be exploited.
  9. Learning to Code/Hack: FraudGPT can provide guidance on coding and hacking techniques.
  10. Finding Cardable Sites: The tool can identify websites that are vulnerable to carding, a form of credit card fraud.
  11. Escrow Available 24/7: The seller offers a round-the-clock escrow service to broker transactions between buyers and the vendor.

The Threat Landscape

The advent of AI tools like FraudGPT could take the phishing-as-a-service (PhaaS) model to the next level. They could act as a launchpad for novice actors looking to mount convincing phishing and business email compromise (BEC) attacks at scale, leading to the theft of sensitive information and unauthorized wire payments.

Generative AI models like FraudGPT are becoming increasingly attractive to cybercriminals. They can be exploited to launch sophisticated attacks, such as BEC attacks. While organizations can create AI tools with ethical safeguards, it isn’t a difficult feat to reimplement the same technology without those safeguards.

Information Logging and Its Implications

While the exact details of what information might be logged by the FraudGPT tool are not explicitly stated, it’s reasonable to assume that it could include data related to its usage. This could encompass details about the types of attacks carried out, the success rate of these attacks, and potentially even information about the targets. The data logged by the system could potentially provide a trail of digital breadcrumbs, aiding in attribution and prosecution.

The emergence of FraudGPT is a stark reminder of the evolving cyber threat landscape. As AI continues to advance, so too does the sophistication of cyber threats. Organizations must remain vigilant, employing robust security measures and staying abreast of the latest developments in cyber threats.
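On the defensive side, one of the "robust security measures" mentioned above can be as simple as automated triage of inbound mail for common BEC red flags. The sketch below is purely illustrative, not a vetted detection model: the cue list and the scoring weights are hypothetical assumptions chosen for the example, and a real deployment would rely on authentication checks (SPF/DKIM/DMARC) and trained classifiers rather than keyword heuristics.

```python
import re
from email import message_from_string

# Hypothetical BEC cue phrases -- illustrative only, not a vetted list.
URGENCY_CUES = [
    r"\bwire transfer\b",
    r"\burgent\b",
    r"\bconfidential\b",
    r"\bgift cards?\b",
]

def bec_risk_score(raw_email: str) -> int:
    """Crude risk score for a raw RFC 822 message.

    +1 per urgency cue found in the body, +2 if the Reply-To header
    points somewhere other than the From address (a classic BEC tell).
    """
    msg = message_from_string(raw_email)
    payload = msg.get_payload()
    body = payload if isinstance(payload, str) else ""
    score = sum(1 for cue in URGENCY_CUES if re.search(cue, body, re.I))
    reply_to = (msg.get("Reply-To") or "").strip().lower()
    sender = (msg.get("From") or "").strip().lower()
    if reply_to and reply_to != sender:
        score += 2
    return score
```

A message asking for an urgent, confidential wire transfer with a mismatched Reply-To would score high here, which is exactly the pattern AI-generated BEC lures tend to follow even when the prose itself reads cleanly.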

Further Reading

  1. The Hacker News: New AI Tool ‘FraudGPT’ Emerges, Tailored for Sophisticated Attacks
  2. Security Affairs: FraudGPT, a new malicious generative AI tool appears in the threat landscape