
"The prompts target five categories of harm that AI systems can facilitate for younger users: graphic violence and sexual content, harmful body ideals and behaviours, dangerous activities and challenges, romantic or violent role play, and age-restricted goods and services."
"Robbie Torney, head of AI and digital assessments at Common Sense Media, said the prompt-based approach is designed to establish a baseline across the developer ecosystem, one that can be adapted and improved over time because the policies are open source."
"OpenAI itself framed the problem in pragmatic terms. Developers, the company wrote in a blog post accompanying the release, often struggle to translate safety goals into precise operational rules."
OpenAI is introducing open-source, prompt-based safety policies aimed at helping developers build safer AI applications for teenagers. The policies address five categories of potential harm: graphic violence and sexual content, harmful body ideals and behaviours, dangerous activities and challenges, romantic or violent role play, and age-restricted goods and services. Developed in collaboration with Common Sense Media and everyone.ai, they are intended to provide a safety baseline across the developer ecosystem. OpenAI acknowledges that developers often struggle to translate safety goals into precise operational rules, leading to inconsistent protection and a degraded user experience.
Read at TNW | Artificial-Intelligence