Facebook Ends Facial Recognition Software
Facebook plans to shut down its decade-old facial recognition system and delete the face scan data of more than one billion users, saying the decision is a response to criticisms of the software it uses to identify users in photos and videos.
There have been growing concerns about the ethics of facial recognition technology, with questions raised over privacy, racial bias, and accuracy. This action will effectively eliminate a feature that has fueled privacy concerns, government investigations and class-action lawsuits.
The social media network has come under increasing legal and regulatory pressure over its use of the software. Until now, users who opted in to the feature had their faces scanned in photos and videos and were notified when someone else on the platform posted a picture of them.
Jerome Pesenti, vice president of Artificial Intelligence at Meta, Facebook’s newly named parent company, said the social network was making the change because of “many concerns about the place of facial recognition technology in society.” He added that the company still saw the software as a powerful tool, but “every new technology brings with it potential for both benefit and concern, and we want to find the right balance.”
The company said regulators had not yet provided a clear set of rules governing how the technology should be used, following a barrage of criticism over its impact on users.
The move to both stop using the software and to wipe the data that is related to existing users of the feature marks an about-face for Facebook, which has been a major user and proponent of the technology. For years, the social network has allowed people to opt in to a facial-recognition setting that would automatically tag them in pictures and videos. Pesenti wrote that more than a third of the company’s daily active users had opted in to the setting, or more than 643 million people, as Facebook had 1.93 billion daily active users in the third quarter of 2021.
Facial recognition software has been fraught with controversy, as concerns mount about its accuracy and underlying racial bias. For example, the technology has been shown to be less accurate when identifying people of color, and at least several Black men have been wrongfully arrested as a result of facial recognition matches.
While there is no national legislation regulating the technology, a growing number of states and cities are passing their own rules to limit or ban its use.
In 2019, a US government study suggested facial recognition algorithms were far less accurate at identifying African-American and Asian faces than Caucasian faces. African-American women were even more likely to be misidentified, according to the study, which was conducted by the National Institute of Standards and Technology.
Last year, Facebook also settled a long-running legal dispute over the way it scans and tags photos. The case had been ongoing since 2015, and the firm agreed to pay $550m (£421m) to a group of users in Illinois who argued its facial recognition tool violated the state's privacy laws.
Researchers and privacy activists have spent years raising questions about the tech industry’s use of face-scanning software, citing studies that found it worked unevenly across boundaries of race, gender or age. One concern has been that the technology can incorrectly identify people with darker skin.
The change represents one of the largest shifts in facial recognition usage in the technology’s history. More than a third of Facebook’s daily active users have opted in to the Face Recognition setting and can be recognised, and its removal will result in the deletion of more than a billion people’s individual facial recognition templates.
Other tech firms, including Amazon and Microsoft, have suspended sales of facial recognition products to police as uses of the technology have become more controversial.