OpenAI launched a safety fellowship
"The OpenAI Safety Fellowship will fund a cohort of external researchers to conduct independent work on AI safety and alignment, running from 14 September 2026 to 5 February 2027."
"Fellows will receive a monthly stipend, computing resources, and mentorship from OpenAI researchers, and are expected to produce a significant research output by the programme's end."
"Priority research areas include safety evaluation, robustness, scalable mitigation strategies, privacy-preserving methods, agentic oversight, and high-severity misuse domains."
"The announcement was made shortly after a New Yorker investigation reported that OpenAI had dissolved its superalignment and AGI-readiness teams, raising concerns about its focus on safety."
The OpenAI Safety Fellowship is a pilot programme funding external researchers to conduct independent work on AI safety and alignment from 14 September 2026 to 5 February 2027. Fellows receive a monthly stipend, computing resources, and mentorship from OpenAI researchers, and are expected to produce a significant research output by the programme's end. Applications close on 3 May; priority areas include safety evaluation, robustness, and scalable mitigation strategies. The announcement followed a New Yorker investigation reporting that OpenAI had dissolved its superalignment and AGI-readiness teams, raising concerns about its commitment to safety.
Read at TNW