Meta is transitioning to an AI-based system for conducting privacy reviews of its apps, aiming to speed up the process. Under a 2012 agreement with the FTC, the company has conducted human-led privacy evaluations. Under the new method, product teams fill out a questionnaire and receive an AI-generated risk assessment. While the change could improve efficiency, critics warn of heightened risk, since potential issues might not be identified before products ship. Meta says it will automate only low-risk evaluations and will still apply human judgment to more complex matters.
An AI-powered system could soon evaluate the potential harms and privacy risks of up to 90% of updates made to Meta apps.
Meta insists that only low-risk decisions will be automated and that human expertise will still be applied to novel and complex issues.