OpenAI's big idea to increase the safety of its tech is to have AI models police each other
Briefly

Putting two AI models into a dialogue forces the more powerful one to be open about its reasoning, making that reasoning easier for humans to follow.
OpenAI's new technique has AI models explain their reasoning to each other, making their problem-solving processes more transparent.
The initiative is part of OpenAI's mission to build safe and beneficial artificial general intelligence, a goal its recent releases are meant to advance.
OpenAI's safety team saw changes earlier this year, with key figures departing over 'overlapping concerns.'
Read at Business Insider