
Amazon Can’t Fix Facial Recognition

Like most algorithms being deployed these days, facial recognition is largely a black box.

ArcSoft Inc.’s Simplicam monitoring camera with facial recognition, right, is displayed alongside an Apple Inc. iPhone during the 2015 Consumer Electronics Show (CES) in Las Vegas, Nevada, U.S. (Photographer: Patrick T. Fallon/Bloomberg)

(Bloomberg Opinion) -- A group of Amazon.com shareholders has added a new twist to the concept of corporate social responsibility, asking the company to stop selling its facial recognition service for purposes that might violate people’s civil rights. In doing so, they have raised an important question: Could this be the way to curb the creepy use of new algorithms? By appealing to the enlightened self-interest of their makers?

Sadly, I think not. Relying on companies is a flawed approach, because they typically don’t know — and don’t want to know — how the technology really works.

Like most algorithms being deployed these days, facial recognition is largely a black box. Trained on vast databases of faces, a computer learns which features matter most and uses them to identify a person as, say, your aunt Freda, a suspected criminal, or a target for a drone strike. Users rarely know exactly how it does this: licensing agreements often stipulate that they don’t have access to the source code. Vendors also prefer to remain in the dark. They’re focused on profits, and cluelessness insulates them from responsibility for anything unethical, illegal, or otherwise bad.
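To make the black-box point concrete, here is a minimal sketch, in Python, of how a typical recognition pipeline behaves once a deep network has already reduced each face to an embedding vector. Every name and number below is hypothetical; the point is that the only part a user ever sees or tunes is a similarity threshold, while everything that actually decides who matches whom is buried inside the model that produced the vectors.

```python
import numpy as np

# Hypothetical pre-computed face embeddings. In a real system these come
# from a deep network whose inner workings neither the customer nor,
# often, the vendor can fully explain.
database = {
    "aunt_freda":   np.array([0.12, 0.87, 0.45, 0.33]),
    "suspect_0421": np.array([0.91, 0.02, 0.11, 0.58]),
}

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Standard cosine similarity between two embedding vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def identify(probe: np.ndarray, threshold: float = 0.9):
    """Return the best-matching identity, or None if below threshold.

    The threshold is the only visible knob; everything that determines
    who resembles whom is fixed by the black-box embedding model.
    """
    best_name, best_score = None, -1.0
    for name, emb in database.items():
        score = cosine_similarity(probe, emb)
        if score > best_score:
            best_name, best_score = name, score
    return best_name if best_score >= threshold else None

# A probe image, already reduced to an embedding by the model.
probe = np.array([0.14, 0.85, 0.47, 0.30])
print(identify(probe))  # -> "aunt_freda" at this threshold
```

Whether that printed name triggers an airline upgrade or an arrest depends entirely on context the code never sees.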

In other words, the whole ecosystem of artificial intelligence is optimized for a lack of accountability. Neither the builders nor the users need think too much about the potential consequences of its application, or of mistakes in the code.

This is particularly troubling in the realm of facial recognition, which can easily cross the line between useful and creepy. Airlines can use it to identify frequent flyers or members of terrorist watch lists, retailers for favored customers or known shoplifters, casinos to help gambling addicts or to nab card counters, schools to save time on taking attendance or to monitor students’ whereabouts. It plays an integral role in China’s social credit system.

The creepiness is highly context-dependent. I might like getting offered an upgrade at the airline counter. I wouldn’t enjoy being identified as a shoplifter, particularly if I’d done my time, transgressed as a child or was mistaken for my twin sister. The consequences can be particularly dire for certain groups of people: One recent MIT study of commercially available facial-analysis systems found error rates as high as 34.7 percent for dark-skinned women, compared with less than 1 percent for white men.
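The disparity the MIT team documented is the kind of thing a simple disaggregated audit can surface. The sketch below uses made-up records rather than the study’s data, but it shows the basic computation: group the system’s decisions by demographic and compare error rates.

```python
from collections import defaultdict

# Hypothetical audit records: (group, was it truly a match?, did the
# system say match?). Illustrative made-up rows, not the MIT dataset.
records = [
    ("dark-skinned women", False, True),   # a false match
    ("dark-skinned women", True,  True),
    ("white men",          False, False),
    ("white men",          True,  True),
    # ... a real audit would have thousands of rows per group
]

def error_rate_by_group(rows):
    """Fraction of cases in each group where the system was wrong."""
    totals, errors = defaultdict(int), defaultdict(int)
    for group, truth, predicted in rows:
        totals[group] += 1
        if truth != predicted:
            errors[group] += 1
    return {g: errors[g] / totals[g] for g in totals}

print(error_rate_by_group(records))
# e.g. {'dark-skinned women': 0.5, 'white men': 0.0} on this toy sample
```

Nothing in the computation is exotic; what is scarce is a vendor with the incentive to run it and publish the result.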

Even if accuracy improves, issues will remain. Black women tend to live closer to urban centers with a lot of cameras, so they’re more likely to be tagged. Black people are also more likely to have been arrested, and thus to have their mugshots in police databases. Equal error rates across groups therefore still translate into more false matches for the people who are scanned, and searched against, most often. In other words, even if the technology can be made “fair” across groups, that doesn’t guarantee it will be applied fairly.
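A bit of back-of-the-envelope arithmetic, with illustrative numbers of my own rather than figures from any real deployment, shows why equal accuracy doesn’t mean equal burden: the group that passes more cameras accumulates proportionally more false matches.

```python
# Illustrative arithmetic (made-up numbers): a system with an identical
# 1% false-match rate for every group, i.e. "fair" accuracy, still
# produces very different burdens when groups are scanned at
# different rates.
false_match_rate = 0.01          # the same for every group

scans_per_year = {
    "group_a": 200,              # lives near a heavily surveilled area
    "group_b": 20,               # rarely passes a camera
}

for group, scans in scans_per_year.items():
    expected_false_matches = scans * false_match_rate
    print(f"{group}: ~{expected_false_matches:.1f} expected false matches/year")
# group_a: ~2.0, group_b: ~0.2: equal error rates, tenfold unequal harm
```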

This is a tricky business, and far more responsibility than a company such as Amazon is equipped to take on. My guess is that if shareholders apply enough pressure, the company will sooner exit the market than police its clients’ use of the software. That’s no solution, because other companies — probably with smaller public profiles — will take its place.

What to do? Most likely, the government will have to step in with targeted, context-specific regulation. An initiative called the Safe Face Pledge, started by MIT researcher Joy Buolamwini, has begun to sketch out what that might look like. For example, it calls for banning drone strikes based on facial recognition. Similarly, any algorithms that play a role in high-stakes decisions — such as criminal convictions — should be held to a very high standard.

We’ll probably have to go through some iterations to get it right, but we have to start somewhere. Ignorance is certainly no solution.

To contact the editor responsible for this story: Mark Whitehouse at mwhitehouse1@bloomberg.net

This column does not necessarily reflect the opinion of the editorial board or Bloomberg LP and its owners.

Cathy O’Neil is a Bloomberg Opinion columnist. She is a mathematician who has worked as a professor, hedge-fund analyst and data scientist. She founded ORCAA, an algorithmic auditing company, and is the author of “Weapons of Math Destruction.”

©2019 Bloomberg L.P.