
Emerging Threat: WormGPT Empowers Cybercriminals with Advanced Cyber Attack Capabilities


Photo was created by Webthat using MidJourney

Exploiting the Power of Generative AI: The Rise of WormGPT in Cybercriminal Activities


Generative artificial intelligence (AI) has gained popularity in recent times, but unfortunately, it has also attracted the attention of malicious actors who are leveraging this technology to facilitate accelerated cybercrime.

A new cybercrime tool called WormGPT has emerged, advertised on underground forums as a means for adversaries to launch sophisticated phishing and business email compromise (BEC) attacks. This tool poses a significant threat by automating the creation of highly convincing fake emails tailored to recipients, increasing the success rate of such attacks.

The Rise of WormGPT

WormGPT, described as a blackhat alternative to well-known models like ChatGPT, allows cybercriminals to engage in illegal activities with ease. Because it can generate malicious content on demand, it is a potent weapon in the wrong hands. OpenAI’s ChatGPT and Google’s Bard both include safeguards against abuse of large language models (LLMs) for phishing emails and malicious code generation, but Bard’s anti-abuse measures in the cybersecurity realm are reportedly less stringent, making it easier to coax malicious content out of the model.

Overcoming Restrictions

Cybercriminals have found ways to circumvent ChatGPT’s restrictions, including leveraging its API, trading stolen premium accounts, and selling brute-force software to hack into ChatGPT accounts.

This demonstrates the resourcefulness of malicious actors in exploiting AI tools for nefarious purposes. WormGPT, which operates without ethical guardrails, underscores the threat posed by generative AI: even novice cybercriminals can launch large-scale attacks without extensive technical expertise.

Exploiting Generative AI

One of the dangers lies in the manipulation of generative AI models to produce harmful outputs. Threat actors are promoting “jailbreaks” for ChatGPT, engineering specialized prompts and inputs to coerce the tool into generating output that may include disclosing sensitive information, producing inappropriate content, or executing malicious code. The use of generative AI can create seemingly legitimate emails with impeccable grammar, reducing the likelihood of detection and increasing the efficacy of attacks.

Democratization of Cyber Attacks

Generative AI democratizes the execution of sophisticated BEC attacks by lowering the barrier to entry. Even attackers with limited skills can now employ this technology, making it accessible to a broader spectrum of cybercriminals. This accessibility, coupled with the potential for creating convincing content, poses a significant challenge for cybersecurity professionals striving to defend against such attacks.

Supply Chain Poisoning: PoisonGPT

The threat landscape is further amplified by the emergence of techniques like PoisonGPT. Researchers from Mithril Security have modified the open-source AI model GPT-J-6B to spread disinformation, introducing the concept of LLM supply chain poisoning.

By uploading the manipulated model under a name that impersonates a reputable company, cybercriminals can integrate it into various applications, leading to the dissemination of malicious content. This highlights the need for stringent security measures and vigilance throughout the AI model development and distribution processes.

