Hackers are using an uncensored chatbot called GhostGPT to facilitate illegal activities such as writing malware. Unlike mainstream AI tools such as ChatGPT, GhostGPT has no ethical guardrails, making it appealing to cybercriminals. Abnormal Security, which reported the trend, notes that GhostGPT follows earlier malicious chatbots such as WormGPT and WolfGPT. Marketed for developing exploits and phishing materials, it poses a significant threat to enterprise security.
GhostGPT likely uses a wrapper to connect to a jailbroken version of ChatGPT or an open-source large language model (LLM), effectively removing any ethical safeguards.
By eliminating the ethical and safety restrictions typically built into AI models, GhostGPT can provide direct, unfiltered answers to sensitive or harmful queries that would be blocked or flagged by traditional AI systems.
The advertisement claimed that its various features make GhostGPT 'a valuable tool for cybersecurity and various other applications.'
GhostGPT was marketed for coding, malware creation, and exploit development, but could also be used to write material for business email compromise (BEC) scams.