"The AI company is beginning a global rollout of an age-prediction tool to determine whether a user is a minor. 'The model looks at a combination of behavioral and account-level signals, including how long an account has existed, typical times of day when someone is active, usage patterns over time, and a user's stated age,' the company's announcement states."
"Most AI companies push new features first, then try to layer a patchwork of protections and safety guards on top after those features cause harm. OpenAI was named in a wrongful-death suit over a teen who allegedly used ChatGPT to plan his suicide, and only in the following months did it begin pondering automatic content restrictions for underage users and launch a mental health advisory council."
OpenAI is beginning a global rollout of an age-prediction tool that uses behavioral and account-level signals to determine whether a user is a minor. The signals include account age, typical active times, usage patterns, and a user's stated age. If ChatGPT incorrectly labels someone as underage, the user must submit a selfie via the Persona age-verification platform to correct the error. Many AI companies deploy features first and add protections later: OpenAI faced a wrongful-death lawsuit tied to a teen allegedly using ChatGPT to plan his suicide, and only afterward considered automatic underage content restrictions and a mental health advisory council. A planned 'adult mode' would allow NSFW creation and consumption, raising circumvention concerns given other platforms' histories with age gates.
Read at Engadget