
"The company first announced the creation of a preparedness team in 2023, saying it would be responsible for studying potential "catastrophic risks," whether they were more immediate, like phishing attacks, or more speculative, such as nuclear threats. Less than a year later, OpenAI reassigned Head of Preparedness Aleksander Madry to a job focused on AI reasoning. Other safety executives at OpenAI have also left the company or taken on new roles outside of preparedness and safety."
"If you want to help the world figure out how to enable cybersecurity defenders with cutting edge capabilities while ensuring attackers can't use them for harm, ideally by making all systems more secure, and similarly for how we release biological capabilities and even gain confidence in the safety of running systems that can self-improve, please consider applying,"
"In a post on X, CEO Sam Altman acknowledged that AI models are "starting to present some real challenges," including the "potential impact of models on mental health," as well as models that are "so good at computer security they are beginning to find critical vulnerabilities.""
OpenAI is hiring a Head of Preparedness to execute its framework for tracking and preparing for frontier AI capabilities that could cause severe harm. The role spans risks across computer security, mental health, biological capabilities, and systems that can self-improve. The company created a preparedness team in 2023 to study both immediate threats, such as phishing, and more speculative ones, such as nuclear risks. It has since reassigned its previous preparedness head and seen turnover among other safety executives. OpenAI also updated its Preparedness Framework and said it might adjust its safety requirements if competitors release high-risk models without similar protections.
Read at TechCrunch