Facing Facts On Facial Recognition
Clearview AI’s multi-million pound fine from the British Information Commissioner’s Office (ICO) for breaching privacy law should mark a turning point in the public debate about surveillance.
Information can be gold in the modern business environment. But when Clearview AI built a database of 20 billion faces, it fell foul of regulators, earning a £7.5 million fine from the ICO this week.
The controversial facial recognition company had previously worked with the Metropolitan Police and Britain’s National Crime Agency but, after a thorough investigation, the ICO found it had broken UK data protection law. Breaches included failing to use data in a way that is fair and transparent: UK residents’ images were collected without their knowledge or consent. As well as the substantial fine, Clearview has been ordered to delete the images of UK residents it holds and to stop collecting more.
It’s not the first time Clearview AI has broken data protection laws. Similar orders and fines have recently been issued in Australia, France and Italy, and the company settled a case in the US under Illinois’ Biometric Information Privacy Act, agreeing not to sell its database to most private corporations.
It remains to be seen whether the company will pay the fine or comply with the order to delete UK residents’ images. But the ruling does highlight the creeping scope of surveillance technology, which campaigners argue threatens our fundamental human right to privacy in new, alarming ways.
Surveillance Sold As Security
At Amazon’s annual device launch in September 2021, the company unveiled its latest security robot, Astro, described as a “roving Alexa with a camera”, which scans the faces of people in your home, supported by a security drone circling your house. It’s no surprise from the tech giant that brought us the Ring doorbell. In the eight years since its release, Ring has evolved into a global CCTV network with the stated aim of giving homeowners a convenient way to answer the door. It has also given police a new way to fight crime.
By 2019, the UK’s Police National Database held images of around 20 million faces, a large number of them of people who had never been charged with or convicted of an offence. Errors are commonplace: when South Wales Police tested its facial recognition system over 55 hours, 95% of its positive matches were false positives.
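That lopsided error pattern is a base-rate effect, and the arithmetic is worth spelling out. Below is a minimal sketch in Python with entirely hypothetical numbers (the South Wales trial’s parameters aren’t given here): when almost nobody in a scanned crowd is actually on a watchlist, even a small per-face error rate produces far more false alarms than genuine matches.

```python
# Illustrative base-rate arithmetic with hypothetical numbers -- not the
# South Wales Police figures. The point: when the watchlist population is
# tiny, false alarms from everyone else swamp the genuine matches.

def positive_matches(crowd_size, watchlist_fraction, hit_rate, false_alarm_rate):
    """Return (true_positives, false_positives) for one scanning session."""
    on_watchlist = crowd_size * watchlist_fraction
    everyone_else = crowd_size - on_watchlist
    true_positives = on_watchlist * hit_rate            # genuine matches flagged
    false_positives = everyone_else * false_alarm_rate  # innocent people flagged
    return true_positives, false_positives

# Assume 100,000 faces scanned, 1 in 10,000 on a watchlist, a 90% chance of
# flagging a real match, and a 0.1% chance of wrongly flagging anyone else.
tp, fp = positive_matches(100_000, 1 / 10_000, 0.90, 0.001)
print(f"true positives: {tp:.0f}, false positives: {fp:.0f}")
# true positives: 9, false positives: 100 -- over 90% of alerts are wrong,
# even though the system is right about any individual face 99.9% of the time.
```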
So, while such technology might provide a convenient way for homeowners to let the postman know where to leave a parcel if they’re not home, there are more sinister ramifications of facial recognition software linked to large databases of personal information.
The Pandemic Aftermath
Covid-19 turbocharged the growth of surveillance technology. In France, facial recognition technology was used on public transport to monitor whether passengers were wearing masks. Australia trialled similar software to check people were at home during quarantine. Billions of people around the world have had their movements logged by various Covid-19 test and trace apps.
There has been some public support for these sorts of measures. More than three in five Brits (61%) said they were happy to share their health status data during the pandemic, and 54% were prepared to sacrifice some of their data privacy to shorten the length of lockdown.
But surveillance has slipped into other areas of our lives too. Workplace technology, from email and web browsing monitoring to video tracking and keystroke logging, has become commonplace with the rise of remote working. According to the trade union body the TUC, surveillance technology in the workplace is “spiralling out of control”.
Some 60% of workers say they are now subject to some form of monitoring by their employer, and three in 10 say that monitoring has increased since before the pandemic.
We Need A National Debate
The ICO was able to fine Clearview AI under the UK General Data Protection Regulation (UK GDPR), which protects UK residents’ data privacy rights. But a full national debate is needed around biometric technology and the wider repercussions for a world where this surveillance becomes normalised.
Just because this technology exists doesn’t mean it should be applied in all situations, regardless of how ‘convenient’ it claims to be.
The ICO stepped in last year, for example, when nine Scottish schools introduced facial recognition software to speed up the lunch queue. The regulator encouraged the headteachers to think of another solution that was “less intrusive” and more proportionate.
Announcing the decision, John Edwards, the UK’s Information Commissioner, condemned Clearview’s business model: “The company not only enables identification of [people all over the world] but effectively monitors their behaviour and offers it as a commercial service. That is unacceptable. That is why we have acted to protect the people in the UK by both fining the company and issuing an enforcement notice.” It’s the right call, of course. But let’s not stop there.
Nigel Jones is a former Head of Legal at Google EMEA and co-founder of the Privacy Compliance Hub, which aims to make data privacy compliance easy for all organisations to understand and commit to.