You Might Be a Robot. This Is Not a Joke.
(Bloomberg Opinion) -- There’s a lot of loose talk about convergence between people and robots. As some see it, the rise of artificial intelligence will make humans obsolete — or at least force people to think more like machines.

Whether this prophecy should be viewed as auspicious or menacing is the subject of much speculation and controversy. But there’s no denying that in significant ways, it’s already happening.

As the Stanford University legal scholars Bryan Casey and Mark Lemley put it provocatively in the title of a new paper forthcoming in the Cornell Law Review, “You Might Be a Robot.” They don’t mean that literally, but they’re completely serious. “If you’re reading this you’re (probably) not a robot,” they write, “but certain laws might already treat you as one.”

The problem, Casey and Lemley explain, is that defining what it means to be a “robot” run by “artificial intelligence” is fiendishly difficult. It might be easy to identify Optimus Prime when he’s in battle mode, but suppose he transforms into a truck and you hop in the front seat. Are you now the driver? You might be — U.S. laws sometimes define drivers by where people sit, rather than by whether they’re actually guiding the car. And that could make you liable when Prime runs over some Decepticons.

The Transformers example isn’t far from what’s already happening in the world of self-driving cars. There are real questions, say, about whether someone who’s asleep in a fully automated vehicle is responsible if the car runs into a pedestrian.

Lines blur like this all the time. Is a drone piloted by a human a robot for legal purposes? What about a human driver who relies on route guidance from a smartphone?

Even if legal scholars could agree on how to distinguish humans from machines, precise definitions are unlikely to stand up to the pace of technical change. Some states, for example, have laws requiring all cars to have drivers. That presumably made sense when only humans could drive, but now it means that companies have to put humans in the driver’s seats of any self-driving cars they test.

A legal definition of artificial intelligence that perfectly captured the AI field as it exists today would probably be rendered useless by the next technological advance. If laws defined AI as it existed a decade ago, they’d probably have failed to cover today’s neural networks, which are used to teach machines to recognize speech and images, to identify candidate pharmaceuticals, and to play games like Go.

If scholars and lawmakers instead resort to broad definitions meant to cover unanticipated developments, there could be unpleasant unintended consequences. As Casey and Lemley note, “a surprisingly large number of refrigerators” are technically “federal interest computers” under a 1980s cybersecurity statute that didn’t anticipate how widespread computers would become. This sort of over-coverage can strangle innovation, as it forces people developing new technologies to worry about a web of regulations that probably shouldn't actually apply to their work, but nevertheless do.

So what’s the way out? Casey and Lemley suggest looking to one of the earliest scholars of artificial intelligence, Alan Turing, for advice. Turing suggested a functional test: A machine is “intelligent” whenever its behavior is indistinguishable from a human’s.

Similarly, laws could regulate robots based on actions, rather than the way they’re constructed. Rather than trying to figure out just how “robotic” a car is, for example, rules of the road could depend on functional measures like safety.

That’s a variant of the strategy the U.S. Congress took with the 2016 Better Online Ticket Sales Act, which fought scalpers’ ticket-buying bots by restricting efforts to get around captchas and other bot-busting protocols on ticketing websites, rather than by trying to define what a “scalper bot” actually is.
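To make the behavior-versus-identity distinction concrete, here is a minimal sketch of what a purely functional rule might look like in code. Everything in it — the thresholds, the field names, the idea of flagging checkouts that finish faster than a human plausibly could — is a hypothetical illustration, not the actual mechanism of the BOTS Act or of any real ticketing system.

    # Illustrative sketch only: a hypothetical, behavior-based purchase filter.
    # Thresholds and field names are assumptions made up for this example.

    from dataclasses import dataclass

    @dataclass
    class CheckoutAttempt:
        session_id: str
        seconds_to_complete: float   # time from page load to purchase
        tickets_requested: int
        captcha_passed: bool

    def violates_functional_rules(attempt: CheckoutAttempt,
                                  min_human_seconds: float = 3.0,
                                  max_tickets: int = 8) -> bool:
        """Flag conduct, not identity: the rule never asks 'is this a bot?'
        It only asks whether the observable behavior breaks the stated limits."""
        if not attempt.captcha_passed:
            return True
        if attempt.seconds_to_complete < min_human_seconds:
            return True
        if attempt.tickets_requested > max_tickets:
            return True
        return False

    # A checkout completed in half a second for 40 tickets gets flagged,
    # regardless of whether a human or a script was behind it.
    print(violates_functional_rules(
        CheckoutAttempt("abc123", seconds_to_complete=0.5,
                        tickets_requested=40, captcha_passed=True)))

The point of the sketch is that nothing in it depends on deciding what counts as a “robot”; it regulates the action, which is exactly the move Casey and Lemley recommend.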

It’s also important to embrace the difficulty of defining robots, rather than trying to work around it. Whenever possible, laws of robotics should be defined inductively, using case-by-case approaches. Enforcement should be delegated to regulators, who can adapt implementation more flexibly than legislators can.

Otherwise, we might find ourselves stuck with regulations that can’t tell humans from Cylons. Those aren’t the laws we’re looking for.

And what about a supposedly path-breaking robot that's actually just a human in a costume?

To make the point that their self-driving cars are actually fully autonomous, some researchers have dressed up these legally required “drivers” as car seats.

To contact the editor responsible for this story: Jonathan Landman at jlandman4@bloomberg.net

This column does not necessarily reflect the opinion of the editorial board or Bloomberg LP and its owners.

Scott Duke Kominers is the MBA Class of 1960 Associate Professor of Business Administration at Harvard Business School, and a faculty affiliate of the Harvard Department of Economics. Previously, he was a junior fellow at the Harvard Society of Fellows and the inaugural research scholar at the Becker Friedman Institute for Research in Economics at the University of Chicago.

©2019 Bloomberg L.P.