UK campaigners raise alarm over report of Meta plan to use AI for risk checks
Briefly

Internet safety campaigners have raised concerns over reports that Meta, the parent company of Facebook, plans to automate the risk assessments required under the UK's Online Safety Act. They have urged Ofcom to ensure that these assessments, which are central to protecting users, retain human oversight. Organizations including the NSPCC and the Molly Rose Foundation warn that relying on AI for these assessments could undermine safeguarding measures intended to protect children from harm online. Ofcom is reviewing the concerns and has not yet reached a decision.
"We urge you to publicly assert that risk assessments will not normally be considered as suitable and sufficient, where these have been wholly or predominantly produced through automation."
"Ofcom said it was considering the concerns raised in this letter and will respond in due course, emphasizing the need for proper oversight in risk assessments."
Read at www.theguardian.com