Why Facebook Is Shutting Down Its Face-Recognition System
The process will take place gradually over the coming weeks.
Meta, the newly christened corporate parent of Facebook and its many subsidiaries, announced on Tuesday that it would shut down Facebook's decade-old facial-recognition software. People who opted in will no longer be automatically recognized in photos and videos, and the platform will delete more than a billion facial-recognition templates.
According to an announcement from Jerome Pesenti, the company's VP of artificial intelligence, users will no longer be automatically notified when they appear in photos or videos posted by others, nor will they receive recommendations for whom to tag in photos. The change will also affect Automatic Alt Text (AAT), which creates image descriptions for visually impaired people. AAT will still recognize how many people are in a photo, but will no longer identify them.
However, the company will continue to use facial-recognition technology in a number of narrower, personal-use cases. "Looking ahead, we still see facial-recognition technology as a powerful tool, for example, for people needing to verify their identity or to prevent fraud and impersonation," Pesenti wrote. "We believe facial recognition can help for products like these with privacy, transparency and control in place, so you decide if and how your face is used. We will continue working on these technologies and engaging outside experts."
According to the post, the change is "a company-wide move away from this kind of broad identification, and toward narrower forms of personal authentication." Pesenti wrote that, given ongoing privacy concerns and the absence of clear regulation, the company determined that limiting its use of facial recognition was the appropriate next step.
But we had to weigh that against growing public concerns about facial recognition as a whole. Regulators are also still working to create clear rules governing use of the technology. So we're going to limit our use of it to a narrow set of cases.
— Jerome Pesenti (@an_open_mind) November 2, 2021
The announcement comes approximately a month after Facebook whistleblower Frances Haugen brought internal documents to the Wall Street Journal and Congress, revealing the platform's knowledge of its harmful effects. Facebook CEO Mark Zuckerberg later fired back, saying the accusation that the company deliberately pushes content that makes people angry for profit is "deeply illogical."