
Generative Pre-trained Transformers, or GPTs, are advanced language models that produce text closely resembling human writing based on prompts they receive. These models undergo extensive pre-training on large datasets of text, enabling them to understand and generate language in a wide range of styles and formats.

People are leveraging GPTs for specific tasks such as composing essays, summarizing articles, aiding in research, and assisting with coding.

Notably, GPTs have also been adapted for cybersecurity-specific tasks. This list highlights a selection of these innovative models, all of which are valuable tools that cybersecurity experts should consider integrating into their daily operations or be aware of.

HackerGPT

HackerGPT is an assistant built for ethical hackers. It was trained on a large body of hacking knowledge and documentation for a variety of hacking tools.

HackerGPT does more than quickly answer cybersecurity and ethical hacking questions; it also assists with the use of popular open-source hacking tools such as Subfinder, so you don’t have to memorize their commands.

HackerGPT is available at https://chat.hackerai.co. It is also open source and can be run locally: https://github.com/Hacker-GPT/HackerGPT-2.0

Arcanum Cyber Security Bot

The Arcanum Cyber Security Bot (ACS Bot) was developed by renowned hacker Jason Haddix, who calls it “one of the best tools I have ever made in my life.” It was previously known as SecGPT.

ACS Bot functions like a well-informed colleague: you can ask it questions and converse with it during security testing and assessments.

Hacking APIs GPT


This GPT was designed to bolster API security testing. It was created by Corey J. Ball, the Chief Hacking Officer at APISec University.
Its capabilities include reviewing OpenAPI/Swagger documentation to uncover vulnerabilities, enumerating endpoints to identify high-value targets for further testing, and analyzing JWTs to find ways they can be exploited.

DarkGPT


DarkGPT is an AI-powered Open Source Intelligence (OSINT) tool that leverages GPT-4-200K to query leaked databases from the dark web, making it much easier and faster to gather information from multiple sources.

DarkGPT is open source and must be installed locally. It can be found at https://github.com/luijait/DarkGPT.

WormGPT

SlashNext, a cybersecurity company, discovered WormGPT, a tool built on the GPT-J language model and created by cybercriminals to support their activities. Unlike the tools in the official ChatGPT store, WormGPT has no restrictions: it is capable of crafting complex email scams, creating harmful software such as ransomware, and setting up fake websites to trick people. Users can also access it through the Tor browser for extra privacy. WormGPT is sold on a subscription basis, at 100 euros a month or 500 euros a year, providing a range of tools for sophisticated online crime.

Bonus…

FraudGPT

FraudGPT, much like WormGPT, offers unrestricted capabilities and is sold on the dark web. It is designed for writing harmful code, crafting viruses, spotting security weaknesses, launching phishing attacks, providing hacking guidance, and locating illegal marketplaces on the dark web. It operates on a subscription model, with prices starting at $200 a month and reaching $1,700 for an annual plan.

In the ever-changing world of cybersecurity, where the battle between attackers and defenders is the norm, GPTs have added a new dimension. Some, like WormGPT and FraudGPT, were created by cybercriminals to accelerate their crimes, while others were built to help ethical cybersecurity experts.

In the current age of rapid technological innovation, where Artificial Intelligence (AI) optimizes and accelerates work, cybersecurity workflows must incorporate these capabilities. Defenders must use AI to fight AI, since hostile actors already are. AI excels at rapidly evaluating large amounts of data, giving defenders new insights and capabilities.

Despite the growing use of AI, cybersecurity experts must remain cautious: sharing sensitive data with AI models is both risky and unethical.
