Relax, Google, the Robot Army Isn’t Here Yet

Google’s panicking about the rise of AI. 

An attendee holds a shopping bag during the Google I/O Developers Conference in Mountain View, California, U.S. (Photographer: David Paul Morris/Bloomberg)

(Bloomberg Opinion) -- People can differ on their perceptions of "evil." People can also change their minds. Still, it's hard to wrap one's head around how Google, famous for its "don't be evil" company motto, dealt with a small Defense Department contract involving artificial intelligence.

Facing a backlash from employees, including an open letter insisting the company "should not be in the business of war," Google in April grandly defended involvement in a project "intended to save lives and save people from having to do highly tedious work."

Less than two months later, chief executive officer Sundar Pichai announced that the contract would not be renewed, writing equally grandly that Google would shun AI applications for "weapons or other technologies whose principal purpose or implementation is to cause or directly facilitate injury to people."

To the surprise of exactly nobody familiar with Silicon Valley's flexible ethics, he was quick to add that Google "will continue our work with governments and the military in many other areas" including cybersecurity, training and military recruitment. Because we all know that military training has nothing whatsoever to do with facilitating injuries to people.

Google's moral posturing aside, the brouhaha over Project Maven does raise a whole lot of important questions over what defense, national security and law-enforcement applications of artificial intelligence will mean for humanity in the near and distant futures. So I decided to pose some of them to somebody who's been giving the whole thing deep thought: Paul Scharre, author of a new book, "Army of None: Autonomous Weapons and the Future of War."

Scharre, a former Army Ranger who was deployed to Afghanistan and Iraq, is now the director of the technology and national security program at the Center for a New American Security, a Washington think tank founded by some heavy hitters from the Obama administration's Defense and State Departments. Here is a lightly edited transcript of our discussion:

Tobin Harshaw: Let's start with the specific then move to the general. Many people know that Google decided not to renew its contract with the Pentagon on Project Maven. Very few people probably know what Project Maven is. Can you briefly describe it, and explain how AI -- machine learning -- factors into it?

Paul Scharre: The essence is using artificial intelligence to better process drone imagery so that people can understand it. In the public imagination, drones are often synonymous with drone strikes. For the military, the real value that drones bring to the table is their ability to do persistent surveillance. Most of the time they're doing reconnaissance missions -- just watching -- following people, mapping terrorist networks and scooping up volumes of data that are very hard for humans to process.

TH: So how is AI better than we are?

PS: The real burden for the military in operating drones is not the pilots or the cost of the drones themselves, it's all of the people that it takes to monitor this deluge of data that's coming off the drones, and process it and analyze it. So for several years the military has been very interested in what they refer to as automated processing, exploitation and dissemination of information, or automated PED.

AI technology can do that now, because of advances in machine learning. Using large data inputs and neural networks, we can train AI systems to do things like identify images. They've been able to beat humans at benchmark tests for image recognition. So it's sort of a no-brainer as an application of AI tools. It's actually taking tools that are fairly well understood across the AI industry and are used in a variety of applications -- from medical imagery to self-driving cars -- and applying them to processing data coming off of drones as well.
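What Scharre describes -- training a neural network on labeled images so software can tag new imagery automatically -- is the standard transfer-learning recipe used across the industry. As a rough illustration only (the aerial-imagery folder, class labels and model choice below are hypothetical assumptions, not anything from Project Maven or Google), a minimal sketch in PyTorch might look like this:

```python
# Illustrative sketch only: a small image classifier of the kind described
# above. The dataset path and classes are hypothetical, not Maven's pipeline.
import torch
import torch.nn as nn
from torch.utils.data import DataLoader
from torchvision import datasets, models, transforms

# Standard preprocessing for an ImageNet-pretrained backbone.
preprocess = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

# Hypothetical folder of labeled aerial frames, one subfolder per class.
train_data = datasets.ImageFolder("aerial_frames/train", transform=preprocess)
train_loader = DataLoader(train_data, batch_size=32, shuffle=True)

# Reuse a pretrained network; retrain only the final layer for our classes.
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.fc = nn.Linear(model.fc.in_features, len(train_data.classes))

optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

model.train()
for epoch in range(3):  # a few passes over the data, for illustration
    for images, labels in train_loader:
        optimizer.zero_grad()
        loss = loss_fn(model(images), labels)
        loss.backward()
        optimizer.step()
```

The point of the sketch is simply that the underlying technique is generic: the same few dozen lines could classify X-rays or street scenes, which is why Scharre calls applying it to drone footage a "no-brainer."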

TH: Do you have strong feelings -- agree or disagree -- with Google's final decision?

PS: I was really disappointed by the decision not to renew the contract. Google also released some principles for AI, setting limits on what it might do when it comes to weapons, and I think the principles make a lot of sense. But Maven is well within the bounds of things that they could work on. It doesn't involve weapons and isn't directly tied to the use of force.

It's really vital that the military uses AI technology to better understand the battle space. It's vital for catching terrorists and for doing things like avoiding civilian casualties. The better we understand the battlefield and can correctly identify people and objects, the more discriminating the military can be when it does have to use force.

TH: OK, here is the fun one, some science fiction: What are one or two of the most cutting-edge autonomous weapons systems either in use by the military already or on the cusp of being put into service?

PS: There's a whole lot of super-interesting ones. We're seeing real advances in robotic vehicles being fielded in a variety of different settings. Russia recently deployed into Syria the Uran-9, a large, robotic ground combat vehicle that has a heavy machine gun and rockets. It still has humans in control of it, but there are likely some elements of autonomy as well.

The U.S. Navy has put out to sea the Sea Hunter, an entirely uninhabited, robotic ship. There are humans controlling some of its functions from offboard, but it will also have autonomous functions.

TH: What's on the drawing board?

PS: I think one of the really fascinating projects that I cover in my book is Darpa's Collaborative Operations in Denied Environments, or CODE. It involves designing a fleet of air vehicles that can work together cooperatively, much as wolves hunt in packs. The idea is to move beyond individual systems that have some autonomy and toward a group or swarm of systems working together. A human directs the swarm to conduct the task, but the vehicles themselves autonomously cooperate on how to perform that task and accomplish the mission.

TH: One of the great fears of many people in terms of military AI is that it will take the human "out of the loop" in a potentially fatal decision. Do you think that removing humans, with all our failings and biases, is necessarily a bad thing?

PS: This issue is usually portrayed in a really binary way: humans or automation. But humans and automation both have advantages and weaknesses, so the best systems are those that merge the two and leverage the strengths of each. I'm very encouraged by what we've heard from the Defense Department over the past several years -- that their interest is really in building what they refer to as "centaur" systems, like the mythical half-man, half-horse creature. These would combine humans and artificial intelligence into joint cognitive systems that merge human decision-making with automation.

TH: Let me bring up one concrete hypothetical: Let's say we develop a drone technology in which humans would tell the drone where to go for a military strike, but the drone itself actually makes the life-or-death decision as to whether to fire a missile. Is that a direction the military is headed in?

PS: There's no question that technology is taking us right up to the point where that will be technologically possible -- it already is in some narrow cases today. Militaries need to confront that question. My inclination is that humans add a lot of value in these situations even if the machine can correctly identify an object, like what type of jet or ship or radar it is. Humans can add context -- are we at war? Is there a fear of heightened tensions? -- and that is super valuable in military operations.

TH: I was at a forum on AI the other day at the Brookings Institution, led by General John Allen, former head of the anti-ISIS coalition. One of the big concerns of the panelists was that autonomous weapons will be uniquely vulnerable to hacking. Do you agree that this is the Achilles' heel, and what can be done about it?

PS: There's no evidence to suggest that autonomous weapons would be more vulnerable to hacking than other types of military systems. Anything that is digital is hackable, and having a human in the loop doesn't change that. Certainly if somebody hacked a Joint Strike Fighter or any modern military system, there's a lot of ways they could cause damage.

One of the things that's really different with autonomous systems is that autonomy allows you to embed functionality in the machine itself. So if someone were to hack in and take control of the system, they could potentially do much more damage. For example, in the automobile world, people have demonstrated how to hack into cars and disable the brakes, disable the steering, make the car run off the road. With an autonomous car, an attacker could take complete control of the vehicle, program in a new destination and let it drive itself. Or take over an entire fleet, and simply redirect all of the autonomous vehicles in a metropolitan area.

I think it's a major concern, and it's a really compelling reason for militaries themselves to be skeptical of the technology and to want to put checks and balances in place -- human safeguards or human circuit breakers to limit the damage that could occur if the system is hacked or simply goes awry.

TH: There have been some fledgling efforts to forge a worldwide ban on autonomous weapons, including at last year's UN Convention on Certain Conventional Weapons. Have these been mostly NGO efforts so far?

PS: There are roughly 20 or so countries right now that have said they support a ban. But none of them are major military powers or leaders in robotics. The political momentum just doesn't seem to exist right now.

The CCW is consensus-based: it requires every single country to sign on to whatever they're going to do, which makes it a very ineffective forum in many ways for getting things done. It's quite good for having conversations, because all of the key players are at the table. But it's very hard to pass a resolution, much less a treaty, when every single country effectively has veto power.

TH: Last question: This debate is pretty abstract for those of us who have never been in uniform. You're a former Army Ranger and have seen combat. Did writing this book change your views on the upsides and downsides of the future "Army of None"?

PS: One of the things I was really struck by when I researched the history of attempts to regulate new weapons is just what a mixed bag it is. I try very hard in the book to walk through that history, dating back to ancient India, and look at some common threads. And one of the things that really stuck out for me was just how hard it is in general to regulate new weapons. It looks particularly challenging for autonomous weapons, because some of their features are embedded in software, they are very difficult to verify, and the technology is quite ubiquitous internationally.

Having said that, I don't think it's entirely hopeless. I close the book with some examples of narrower forms of regulation that might potentially have a higher chance of success.

To contact the editor responsible for this story: Philip Gray at philipgray@bloomberg.net

©2018 Bloomberg L.P.