
Silicon Valley Must Consider Tech Ethics, DeepMind Chief Says

Silicon Valley must put ethical considerations at the forefront when developing products, DeepMind co-founder says

Mustafa Suleyman, co-founder and head of artificial intelligence (AI) at DeepMind Technologies Ltd., gestures while speaking during Bloomberg’s Sooner Than You Think technology conference in Paris, France. (Photographer: Marlene Awaad/Bloomberg)

(Bloomberg) -- Big technology companies must rethink the way they develop products and services to put ethical considerations at the forefront, DeepMind co-founder Mustafa Suleyman said.

"Silicon Valley has to change its culture from tossing half-baked things over the fence," Suleyman, who leads DeepMind’s applied artificial intelligence division, said while speaking with Caroline Hyde at Bloomberg’s Sooner Than You Think technology conference in Paris.


Companies should more thoroughly test their products to understand how they could be used and misused, and keep safety in mind before releasing new technology, he said. Government also has a role to play in creating sandboxes -- contained, safe environments or data sets -- in which artificial intelligence algorithms could be tested, he said.

Amid the backlash against technology companies in the past year over data privacy, election manipulation and fake news, Suleyman said that Silicon Valley would begin to change its ways. "I think there has been a shock to the system," he said. "I think a lot of well-intentioned people are waking up to the fact that they’ve been operating in a bubble for a little while."

London-based DeepMind, which is owned by Alphabet Inc., is best known for creating software capable of beating the world’s best players at the ancient strategy game Go.

Suleyman said that when Alphabet purchased his company in 2014, DeepMind received assurances that its technology would never be used for weapons or surveillance systems. "That is the position we continue to hold," he said.

Google, DeepMind’s sister company, has recently reversed a policy of not working on military applications and has begun helping the U.S. Department of Defense analyze drone imagery. The move has proved controversial: thousands of Google employees signed a petition objecting to it, and some resigned in protest.

Suleyman also criticized Elon Musk, who heads Tesla Inc. and Space Exploration Technologies Corp. and was an early investor in DeepMind. Musk has become a leading Cassandra on the potential existential risks from artificial intelligence, warning that if not carefully designed such systems could end up obliterating humanity.

"Elon’s predictions are far-fetched to say the least," Suleyman said. He said the tech billionaire’s scare-mongering was doing a disservice by distracting people from much more mundane dangers of today’s existing AI systems, such as mapping systems that try to help people avoid traffic congestion but which wind up increasing usage of roads that can’t handle the increased traffic flow.

--With assistance from Caroline Hyde.

To contact the reporter on this story: Jeremy Kahn in London at jkahn21@bloomberg.net

To contact the editors responsible for this story: Giles Turner at gturner35@bloomberg.net, Molly Schuetz, Robin Ajello

©2018 Bloomberg L.P.