AI's use as a hacking tool has been overhyped
Briefly

Researchers tested the offensive potential of large language models (LLMs), finding that GPT-4 could autonomously write working exploit code for known vulnerabilities with an 86.7% success rate.
They noted that the harder problem is finding vulnerabilities rather than exploiting them, and described GPT-4's ability to exploit one-day vulnerabilities as an emergent capability.
While GPT-4 proved highly capable at exploiting specific vulnerabilities, the researchers suggested its performance could be enhanced further with features such as better planning and larger response sizes.
Read at ITPro