OpenAI's ex-policy lead criticizes the company for 'rewriting' its AI safety history | TechCrunch
Briefly

Miles Brundage, a former OpenAI policy researcher, has publicly criticized the company for allegedly "rewriting history" around its approach to deploying potentially risky AI systems. In a recent document, OpenAI framed its philosophy as one of continuous learning and iteration in AI deployment, presenting its past caution as a departure from that approach. Brundage contends the opposite: the staged, incremental release of GPT-2 was consistent with, and in fact foreshadowed, the iterative deployment philosophy OpenAI now champions for future AI systems.
"OpenAI's release of GPT-2, which I was involved in, was 100% consistent [with and] foreshadowed OpenAI's current philosophy of iterative deployment."
"In a discontinuous world [...] safety lessons come from treating the systems of today with outsized caution relative to their apparent power, [which] is the approach we took for [our AI model] GPT‑2."
Read at TechCrunch