When tech recognises your face in the crowd

Facial recognition is gaining popularity, but it has serious potential to undermine civil rights

By Sandeep Gopalan


Published: Sun 26 Jan 2020, 6:00 PM

Last updated: Sun 26 Jan 2020, 8:06 PM

Have I seen him somewhere before? This question, typically asked inwardly as the human brain scans its memory of faces for recognition, is so ubiquitous in modern life that it is a staple of film plots from Bollywood to Hollywood. Facial recognition technology may make the question redundant very soon. And that's a bad thing not just because celluloid heroes are denied the opportunity to chat up heroines. Here's why.
The storage and retrieval of information is extremely inexpensive today thanks to the internet and related technologies. Even the most obscure facts can be searched and retrieved quickly thanks to search engines such as Google. Now imagine that there was a search engine for human faces - something that helps you to take a photo of anyone anywhere and learn more about them. In other words, you don't have to wonder if you've ever seen them somewhere before, or what they do for work, or whether they are employed or married. You can learn their name and other details through a simple search on the web. You could find out where they went to school, who they are married to, where they work, and what their interests and social activities are.
Creepy? You bet!
That's exactly the reality coming soon to a screen near you. The New York Times reported this week that an artificial intelligence company called Clearview is offering technology that allows users to identify people from images scraped from web platforms such as Facebook and LinkedIn. Clearview's system is reportedly accurate about 75 per cent of the time and is being used by law enforcement agencies in the United States to catch criminals. The system is robust enough to identify people from footage captured by surveillance cameras in public locations. The NYT's report notes that Clearview collected millions of images by scraping platforms containing troves of user images submitted for other purposes - in violation of those platforms' policies and without the consent of individual users.
Clearview is not an aberration. Other companies are using and selling facial recognition technologies to law enforcement and other entities. Whilst they are touted as tools for modern law enforcement designed to keep people safe, facial recognition-based tools present clear dangers. Imagine a Clearview-type app falling into the hands of criminals: the potential for robbery, stalking, extortion, and other crimes increases. Similarly, such technology could be misused by an array of otherwise benign actors, including employers, insurance companies, and credit providers. Some employers already scan employees' social media pages and intrude into their personal lives. This could get worse with facial recognition, because it erodes anonymity in a variety of everyday public settings.
And the technology has the serious potential to undermine civil rights in the hands of the wrong government agencies. If people believe they are constantly under surveillance, the freedoms of speech, expression, and association are chilled. A person attending a peaceful protest against government corruption may receive a visit from the tax authorities or the police as a means of intimidation after being identified through footage from the public square. A journalist speaking to a source in a café may be identified and targeted by the subject of an expose. The list of potential abuses is endless.
The basic guarantee of a free society is the ability to go about one's business unmolested as long as one is not breaking the law. This is underpinned by the expectation that one has the ability to control one's private life. Now that is being threatened by the ever-expanding use of surveillance cameras and the growing sophistication of facial recognition technology.
To be clear, there may be legitimate gains to be had from the deployment of facial recognition in narrowly tailored contexts. For instance, enabling specialised counter-terror agencies to deploy the technology in public spaces that are potential terrorist targets may help identify terrorists before they strike. Similarly, allowing immigration staff at airports to use facial recognition solely to process people expeditiously is efficient. However, absent such clear benefits, routine uses of facial recognition must be banned because of the significant potential for misuse.
The London Metropolitan Police announced this week that they will use live facial recognition to 'keep people safe.' The police claim this is being done in a narrowly targeted way, but such claims must be treated with scepticism. Privacy concerns and errors in identifying members of minority populations are already well documented in the UK. Similar concerns have prompted US cities such as San Francisco and Oakland to ban the use of facial recognition.
We need to put the clichéd genie back in the bottle before it is too late, and the following norms must be established urgently. First, facial recognition must be severely circumscribed and used only in narrow areas where there is an overwhelming public interest. The burden is on the user to demonstrate this public interest. Second, images must be destroyed immediately after the permitted use. Otherwise, the potential exists, for instance, for rogue airline or customs staff to transfer images and use them for nefarious purposes. To prevent such harms, destruction after use must be the norm.
Third, images must not be combined with other data to gain unrelated benefits for data processors. For example, immigration images must not be combined with shopping data and sold. Fourth, consent must be affirmatively established and restricted solely to the purpose for which the images are taken. As we see from the Clearview example, little did users of Twitter or Instagram know that when they uploaded their holiday photos they were submitting data to a creepy Orwellian database of images. Finally, privacy must be recognised as a basic human right.
Sandeep Gopalan is the Vice Chancellor of Piedmont International University, US