Facial Recognition: Are We Ready To ‘Face’ The Consequences?
An attendee uses his smartphone to record a facial-recognition demonstration on himself at an exhibition in Shanghai, China, on June 27, 2019. (Photographer: Qilai Shen/Bloomberg)

Given that the camera is arguably the most popular reason for choosing a smartphone over a feature phone, it shouldn’t be a surprise that the Google Play store is full of photo-editing applications. There are apps for face effects, touching up photos with make-up, swapping faces, changing hairstyles, converting photos into cartoons or pencil sketches, and even apps for removing acne from photos.

With over 10 crore downloads, photo-editing app FaceApp is in the midst of a controversy. A new feature added recently to this Russian-owned app allows users to change their photos to look younger or older. This led to the wildly popular #AgeChallenge, with celebrities from Hollywood to Bollywood using it, and with that surge in popularity came scrutiny of the app’s privacy policy.

According to its privacy policy, users grant FaceApp “a perpetual, irrevocable, non-exclusive, royalty-free, worldwide, fully-paid, transferable sub-licensable license to use, reproduce, modify, adapt, publish, translate, create derivative works from, distribute, publicly perform and display your User Content”.

Note that FaceApp seeks permissions to take pictures and also to read, modify, or delete the content stored on the phone. This means it can access stored photos for editing, but it also means it has access to all of the user’s photos.

A few points to consider here when it comes to the FaceApp controversy:

  • First, many other popular applications have similar terms and conditions. The controversy was more likely due to FaceApp’s popularity, and the fact that it is owned by a Russian company.
  • Second, apps are able to access all this data from devices because mobile operating systems – especially Google-owned Android – allow them to. Permissions tend to be broad and can be requested even when they are not essential to the functioning of the application. This shouldn’t be a surprise, given that Google’s mission is to collect all our data, or as they put it, “organise the world’s information”.
  • Third, what is clear – from Facebook’s Cambridge Analytica scandal to, more recently, Google and Amazon listening in on voice information being collected by their devices – is that companies cannot be trusted to protect user privacy. They are competing with each other to capture market share and building new and more-personalised services through profiling of users by collecting millions of our data points.
  • Lastly, consent is clearly broken. Users do not, and will not, read verbose and complex terms and conditions and privacy policies full of legalese. They would probably inadvertently sign off on anything, perhaps even parting with their first-born child, if those were the terms of playing, say, a game on their mobile phones. In the absence of users understanding what they are signing up for, consent is neither informed nor meaningful, and thus needs to be fixed.
The real privacy challenge with facial data is that we are continuously leaking it. The only thing that prevents the capture, collection and profiling of users at any time is the absence of devices constantly pointed at their faces.

Consent By Mere Presence

Compared to other biometric factors, it’s probable that we ‘leak’ more facial data than any other type. What also sets facial data apart is that, in most cases, consent is implicit in your mere presence. Often it is only when we walk into a room or a zone that we realise CCTV cameras are watching or recording us, giving us no opportunity to decide whether we want to be recorded. Facial recognition goes beyond the mere recording of information: it uses algorithms to compare an image with stored facial data and identify people. However, just because we are fine with making our faces (and thus facial data) public doesn’t mean that we consent to always being identified or profiled. The usage of facial recognition, like most biometric authentication, presumes that there is a need to identify and profile each and every individual. This is why the bar for the usage of facial recognition, if indeed it is allowed to be used, needs to be substantially higher than for other data.
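To make the comparison step concrete, here is a minimal sketch of the matching logic such systems rest on. Everything in it is illustrative: real systems derive high-dimensional embeddings from a face image with a neural network, whereas the enrolled vectors, names, and the 0.95 threshold below are invented for the example.

```python
import math

# Toy "embeddings": real systems compute such vectors from a face image
# using a deep-learning model; these fixed lists are purely illustrative.
ENROLLED = {
    "person_a": [0.9, 0.1, 0.3],
    "person_b": [0.2, 0.8, 0.5],
}

def cosine_similarity(u, v):
    """Similarity of two vectors: 1.0 means identical direction."""
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norm

def identify(probe, threshold=0.95):
    """Return the enrolled identity closest to the probe embedding,
    or None if no match clears the (illustrative) threshold."""
    best_name, best_score = None, threshold
    for name, embedding in ENROLLED.items():
        score = cosine_similarity(probe, embedding)
        if score >= best_score:
            best_name, best_score = name, score
    return best_name

print(identify([0.88, 0.12, 0.28]))  # → person_a
print(identify([0.0, 1.0, 0.0]))     # → None (no confident match)
```

The point the sketch makes is that once facial data is enrolled, identification of anyone who walks past a camera reduces to a cheap nearest-match lookup; no further consent is involved at match time.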

Surveillance cameras are mounted on a post at a testing station in Hangzhou, China, on May 28, 2019. (Photographer: Qilai Shen/Bloomberg)

Also read: Don’t Ban Facial-Recognition Technology. Regulate It.

Facial recognition data is not like a mobile number. Your face is unique to you: it is a permanent identifier, and for access to current and potential future services, it will act as a combination of both your username and password.

It’s a one-factor authentication process that links together all the activities that occur after you’ve authenticated, enabling granular profiling.
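The linking described above can be sketched in a few lines. The event log, service names, and face-IDs below are all hypothetical; the point is that because the same permanent identifier keys every record, joining logs from unrelated services into a per-person profile is trivial.

```python
from collections import defaultdict

# Hypothetical event log: each service records the face-ID it matched.
# Service names and IDs are invented for illustration.
events = [
    {"face_id": "f41c", "service": "metro_gate", "action": "entry"},
    {"face_id": "f41c", "service": "payment_app", "action": "purchase"},
    {"face_id": "9b02", "service": "metro_gate", "action": "entry"},
    {"face_id": "f41c", "service": "office_door", "action": "unlock"},
]

# A permanent, shared identifier turns profiling into a one-pass join:
profiles = defaultdict(list)
for event in events:
    profiles[event["face_id"]].append((event["service"], event["action"]))

print(profiles["f41c"])
# → [('metro_gate', 'entry'), ('payment_app', 'purchase'), ('office_door', 'unlock')]
```

Unlike a password or a mobile number, the key here cannot be rotated: a face that leaks once links records forever.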

How Your Face Can Be Misused

If imagery of you (captured by apps, CCTV cameras, webcams, or your own phone while authenticating you) or even your stored photos fall into the wrong hands, what could possibly go wrong?

Firstly, the idea of CCTV cameras continuously tracking everyone – identifying people and documenting their movements without their consent – is not outlandish anymore. Captured data can also be easily manipulated: deep fakes are now being created using artificial intelligence to produce alarmingly realistic fake video footage of people. ‘Deep nudes’ are similar: artificially created nude imagery of individuals, generated from publicly available photos. Photographic visuals are already being misused: fake profiles on matrimonial and dating apps, allegedly built with images sourced from social media, have been used to run escort services.

In contrast, earlier this year, San Francisco became the first city in the United States to ban the use of facial recognition technology by government agencies.

A screen shows a demonstration of the SenseVideo pedestrian and vehicle recognition system at the company’s showroom in Beijing, China, on June 15, 2018. (Photographer: Gilles Sabrie/Bloomberg)

Also read: FaceApp Might Have Your Picture. Facebook and Google Have a Lot More

What’s Happening In India?

The Delhi government is rolling out CCTVs in schools, presuming that it is safe to stream visuals of minors in classrooms over the internet – even as a protected stream – and to store that data on servers. The Delhi Police already have CCTVs deployed across the city, and expanding CCTV coverage has been a major push from the state’s chief minister.

Aadhaar is the largest repository of facial data in the country, holding a facial photograph for each enrolled individual. On top of this data generation, facial recognition systems are being put in place: the National Crime Records Bureau has put out a Request For Proposal document for the creation of a centralised Automated Facial Recognition System. According to the RFP, this system will be a centralised service that allows the police to send pictures of suspects, including those captured from CCTV footage, to a centralised repository of images for identification. While details are scant, it wouldn’t be surprising if it eventually relied on Aadhaar captures. Facial recognition was already suggested as a means of Aadhaar authentication last year. Airports are rolling out DigiYatra facial recognition systems on a trial basis. Law enforcement agencies in Punjab and Chennai also use facial recognition technologies.

What is particularly worrying is that both facial data generation and facial recognition systems are being rolled out in India without a privacy law in place, and without laws governing CCTVs that would ensure adequate privacy protections. Instead, going by the draft data protection law, government agencies are being excluded from the privacy protections the law seeks to put in place, and no oversight of state surveillance is being considered.

Nikhil Pahwa is the Founder of MediaNama.com, and a proponent of digital rights.

The views expressed here are those of the author and do not necessarily represent the views of Bloomberg Quint or its editorial team.
