Researchers say AI labs have stopped openly sharing their safety research with the broader community, and they are demanding more transparency from AI companies.
OpenAI is criticized for prioritizing product development over safety, with growing concerns about the lack of safeguards for large AI models.
The letter calls on AI companies to be more transparent about potential harms and safety measures, and to refrain from using broad confidentiality agreements.
OpenAI defends the safety of its AI systems, while noting that current AI capabilities are too limited to enable truly dangerous scenarios, such as autonomously shutting down power grids.
#ai-safety-practices #transparency-in-ai-companies #concerns-about-large-ai-models #researchers-perspectives #corporate-transparency