OpenAI warns of cyber risks posed by new AI models
Briefly

"OpenAI announced Wednesday that it is establishing a new advisory board, the Frontier Risk Council, as part of a broader strategy to better manage the risks surrounding advanced AI models. OpenAI warned that the next generation of models could pose a significantly higher cybersecurity risk, as they appear increasingly capable of identifying complex vulnerabilities and even developing new, working zero-day exploits. According to OpenAI, the systems could also contribute to digital intrusions in corporate or industrial environments, with physical effects in the real world."
"The Frontier Risk Council is to play a central role in mitigating such threats. This advisory group comprises experienced cybersecurity specialists and other experts in digital defense. They will work closely with OpenAI's technical teams to help assess risks, align model capabilities, and develop security frameworks for future releases. The council will start with a focus on cybersecurity, but will eventually be deployed more broadly for other risks arising from increasingly powerful AI models."
"The company is developing models and tools to support security teams with tasks such as code audits and vulnerability remediation. This defensive approach is complemented by stricter access control, infrastructure hardening, restrictions on outgoing data traffic, and more extensive monitoring to prevent misuse. In addition, OpenAI is preparing a program that will give certain users and organizations involved in cyber defense access to more advanced AI capabilities, depending on their role and level of trustworthiness."
OpenAI is establishing the Frontier Risk Council to manage risks from advanced AI models, initially prioritizing cybersecurity threats. The council, composed of experienced cybersecurity specialists and digital defense experts, will collaborate with OpenAI's technical teams to assess risks, align model capabilities, and design security frameworks for future releases. OpenAI is also investing in defensive AI tools to support security teams with tasks such as code audits and vulnerability remediation, while tightening access controls, hardening infrastructure, restricting outgoing data traffic, and expanding monitoring. A trust-based access program will grant more advanced capabilities to vetted cyber defense organizations while limiting broader availability.
Read at Techzine Global