If our systems suspect that an ad may be a scam containing the image of a public figure at risk of celeb-bait, we will try to use facial recognition technology to compare faces in the ad against the public figure's Facebook and Instagram profile pictures. If we confirm a match and determine the ad is a scam, we'll block it. We immediately delete any facial data generated from ads for this one-time comparison, regardless of whether our system finds a match, and we never use it for any other purpose.
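The flow described above can be sketched in code. This is a hypothetical illustration, not Meta's implementation: faces are represented as toy feature vectors, the similarity threshold is an assumed value, and `screen_ad` is an invented helper name. The key property it demonstrates is that facial data is discarded unconditionally after the one-time comparison.

```python
import math

MATCH_THRESHOLD = 0.9  # assumed similarity cutoff, not Meta's actual value

def cosine_similarity(a, b):
    """Similarity between two face feature vectors (1.0 = identical direction)."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm if norm else 0.0

def screen_ad(ad_faces, profile_faces, looks_like_scam):
    """Compare faces in a flagged ad to a public figure's profile pictures.

    Returns True if the ad should be blocked. All facial data passed in
    is cleared before returning, whether or not a match is found.
    """
    try:
        matched = any(
            cosine_similarity(ad_face, profile_face) >= MATCH_THRESHOLD
            for ad_face in ad_faces
            for profile_face in profile_faces
        )
        return matched and looks_like_scam
    finally:
        # One-time comparison: facial data is deleted unconditionally,
        # match or no match.
        ad_faces.clear()
        profile_faces.clear()

ad = [[0.9, 0.1, 0.2]]
profile = [[0.88, 0.12, 0.21]]
print(screen_ad(ad, profile, looks_like_scam=True))  # True: near-identical vectors
print(ad)  # []: facial data cleared after the comparison
```

Note that the `finally` block guarantees deletion even if the comparison raises, mirroring the "regardless of whether our system finds a match" guarantee.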
Back in 2021, Meta shut down its facial recognition systems on Facebook entirely, amid a broader policy shift designed to distance the platform from the controversies of its past. Facial recognition tools are already being used for questionable purposes, including identifying people entering sports stadiums and matching faces to criminal or credit records in real time.
The user will upload a video selfie, and we'll use facial recognition technology to compare the selfie to the profile pictures on the account they're trying to access. This is similar to identity verification tools you might already use to unlock your phone or access other apps.
As soon as someone uploads a video selfie, it will be encrypted and stored securely. It will never be visible on their profile, to friends, or to anyone else on Facebook or Instagram. We immediately delete any facial data generated from this comparison, regardless of whether there's a match.
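The selfie-verification lifecycle described above can be sketched as follows. This is a minimal, hypothetical sketch: the XOR "encryption" and byte-equality "face match" are toy stand-ins for real encryption and learned face matching, and the `SelfieCheck` class is an invented name. What it illustrates is the lifecycle: encrypt on upload, compare once, delete immediately.

```python
from dataclasses import dataclass

def xor_encrypt(data: bytes, key: int) -> bytes:
    # Toy symmetric "encryption" purely for illustration; XOR with the
    # same key both encrypts and decrypts.
    return bytes(b ^ key for b in data)

@dataclass
class SelfieCheck:
    key: int
    ciphertext: bytes = b""

    def store(self, selfie: bytes) -> None:
        # The selfie is encrypted as soon as it is uploaded; it is never
        # attached to the profile or visible to other users.
        self.ciphertext = xor_encrypt(selfie, self.key)

    def verify(self, profile_face: bytes) -> bool:
        try:
            selfie = xor_encrypt(self.ciphertext, self.key)  # decrypt
            return selfie == profile_face  # stand-in for face matching
        finally:
            # Facial data is deleted right after the comparison,
            # match or no match.
            self.ciphertext = b""

check = SelfieCheck(key=0x5A)
check.store(b"face-frames")
print(check.verify(b"face-frames"))  # True
print(check.ciphertext)              # b'' -- deleted after the check
```

As in the ad-screening case, putting the deletion in a `finally` block makes the "delete regardless of the outcome" promise hold even if the comparison fails partway through.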