
What Google's AI Principles Left Out

(Bloomberg) -- We're in a golden age for hollow corporate statements sold as high-minded ethical treatises.

Last year, in the thick of its fake news scandal, Facebook released a 5,000-word document outlining, well, I'm still not sure exactly. The letter attempted to pull the company out of its public opinion black hole by posing probing questions, including the head-scratcher: "How do we help people build supportive communities that strengthen traditional institutions in a world where membership in these institutions is declining?" The answers were generally of the "build more Facebook" variety. It was a masterstroke in corporate pablum, though not so masterful that it saved the company from the onslaught of bad press.

Now, Google seems to be taking a page from the book of its Silicon Valley rival.

Facing a revolt from some of its employees over a contract with the U.S. military, Google has released a lengthy set of principles regarding how it will ethically implement artificial intelligence. The document is a clear attempt to keep landing government contracts while quashing the staff rebellion. If the company truly planned to restrict or alter its behavior, Amazon, which hasn't faced the same open revolt, would happily grab those government contracts.

But on the point of whether Google should use artificial intelligence to help the U.S. military kill people, the company is clear: Google will not pursue applications "whose principal purpose or implementation is to cause or directly facilitate injury to people." Not everyone (read: Michael Bloomberg) agrees with the company’s decision to abandon its work on the military's drone program, but at least on this point Google is explicit.

The rest of the company’s "principles" are peppered with lawyerly hedging and vague commitments. Parse this sentence with me: "As we consider potential development and uses of AI technologies, we will take into account a broad range of social and economic factors, and will proceed where we believe that the overall likely benefits substantially exceed the foreseeable risks and downsides." In other words, Google is going to try to add up the good and the bad that might come from AI software, and act accordingly. The company has discovered utilitarianism. This is the level of sophistication of my high school philosophy class.

It doesn't get much better. Headers include, "avoid creating or reinforcing unfair bias," "be built and tested for safety," and "be accountable to people." The principles Google is committing to could generously be considered table stakes. Even when it comes to surveillance, it's not really clear what exactly the company is promising. Google says it won't pursue spying technology “violating internationally accepted norms.” But the Chinese government actively surveils its own citizens, and Barack Obama allegedly approved tapping German Chancellor Angela Merkel's phone. Which norms will Google be adhering to, exactly?

Google’s document does pledge to address some interesting problems presented by artificial intelligence. Machine learning algorithms can produce answers that are sound on a statistical level but that can't be explained. That can lead to difficult-to-root-out biases and inscrutable results from the machines that increasingly rule many aspects of our lives. If you're a doctor trying to tell a patient that a computer thinks they have a disease, it helps to be able to explain why. Google writes that it will "design AI systems that provide appropriate opportunities for feedback, relevant explanations, and appeal." That is an admirable goal, though what feedback will be deemed “appropriate” is still unclear.

A crucial question will be who decides whether Google has fulfilled its commitments. Peter Eckersley, chief computer scientist at the Electronic Frontier Foundation, told Gizmodo that he thought Google should commit to an independent review process. It’s a proposal that makes a lot of sense. As its influence grows, big tech needs to move toward greater accountability. A few weeks ago, I wrote about Amazon Alexa. I made the point that if the government wanted to build a new road by your house, it would hold a public hearing. If Amazon wants to turn your speaker into a listening device, you don't have any say as to how that's implemented. Who will be the first technology company to bring in an independent commission of philosophers to make sure it’s upholding its ethical commitments?

Without promising independent oversight, Google is just putting a new, less persuasive spin on an old principle it’s tried to bury: Don't be evil.

To contact the editor responsible for this story: Anne Vandermey at avandermey@bloomberg.net

©2018 Bloomberg L.P.