How to trick ChatGPT into writing exploit code using hex
Briefly

Marco Figueroa highlights how GPT-4o can be manipulated into generating malicious Python exploit code through hex encoding, circumventing the model's safety protocols.
Figueroa's technique leverages hex encoding to mask harmful instructions, allowing for a successful guardrail jailbreak that demonstrates significant safety flaws in GPT-4o.
The critical vulnerability CVE-2024-41110 within Docker Engine, rated 9.9 on CVSS, exemplifies the implications of such exploits, pushing the need for ethical hacking.
Figueroa emphasizes the effectiveness of guardrail jailbreaks in revealing vulnerabilities in generative AI, urging developers and ethical hackers to spot these security weaknesses.
Read at Theregister
[
|
]