Friday, February 20, 2015

Self-Driving-Ambulance Chasers

Two robots walk into a bar...

It won't be long. Or maybe the bartender is a robot. With a breathalyzer built into his nose, which is cute but not so human that he triggers your uncanny valley aversion response. He'll call the self-driving Uber for you if he sniffs more than .08.

Meg and I want to buy a new electric car. She wants to wait to get one that's self-driving. Me too, but I think that's going to have to be in the next round, maybe five to ten years down the road (so to speak), perhaps much longer. I think the technology will get there before the legal liability questions are sorted out. Nothing like the uncertainty raised by a few big class action suits to slow you down if you're a pioneer. 

I'm a lawyer by training, but living in Silicon Valley and having a son who is a programmer have rubbed off on me. I'm much more interested in the technology than in the legal questions. The technology is futuristic and fun, and evolving quickly, whereas the law is (sometimes rightly) stuck in the past and evolves at about the rate at which fish sprouted legs. Still, we need to know who is on the hook when that self-driving car runs over someone. We need some new rules of the road.

I was invited to a lecture at Stanford on this very subject, so I went. It was interesting. It also reminded me why so many of us are not partial to lawyers. Most of us just want to plunge ahead. Lawyers are cautious. Their favorite question is "what if." They can scare you out of doing almost anything. "What if the personal robot that you just sent out for coffee sees what it mistakenly thinks is a robbery in progress and grabs someone and won't let him go? Is the robot your agent, and are you guilty of kidnapping?" That was a question the professor asked me yesterday. Seriously.

He had just finished a thoughtful lecture that applied traditional legal theories to the autonomous activities of robots. He called them APs, for Autonomous Persons. In the law, apparently everyone, even a corporation or a robot, has to be a person. This is of course because the law was developed for persons, and court-developed law, which is much of it, is bound by precedent. Not too many robot precedents yet, so we look to how the law treats people who misbehave and try to apply those rules to robots.

Was the robot a mere tool (think a simple algorithm that performs one function), or did it have enough capacity and independence to be classified as an agent? And when it did the deed that brought it to court, was it acting on behalf of its principal (its owner), or had it wandered off the reservation of its job description? What did it intend? What did its owner intend?

It doesn't take much imagination to see that you can wade pretty deep into the metaphorical weeds chasing these kinds of questions. Right on up to "I'm sorry, Dave, I'm afraid I can't do that."

I asked the professor why we shouldn't, at least in the initial stages of big autonomous applications like self-driving cars, which will certainly result in accidents and damage, simply apply strict liability. This would mean that if you manufacture an autonomous car and it runs over somebody, you are on the hook. No ifs, ands, or buts. No questions about who sold you the software you used in the car, no questions about whether you were careful enough when you decided to use it, no questions about what you could reasonably have foreseen (all of which come into play in garden-variety product liability cases); just pay up. You would of course buy insurance to cover this risk. The insurance company would cover you as long as your safety record was acceptable, but if you got careless, it would drop you and you'd convert your assembly lines to making toasters.
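
To put rough numbers on that, here's a toy sketch in Python. Every figure in it is invented for illustration; the point is only the shape of the arithmetic: under strict liability, the manufacturer's legal exposure collapses into an insurance premium folded into the price of each car.

    # Toy model of strict liability priced as insurance (all numbers invented).
    CARS_ON_ROAD = 1_000_000          # hypothetical fleet size
    ACCIDENTS_PER_CAR_YEAR = 0.002    # assumed rate of at-fault accidents
    AVG_DAMAGES = 250_000             # assumed average payout per accident
    INSURER_LOAD = 1.25               # insurer's overhead and profit margin

    expected_payouts = CARS_ON_ROAD * ACCIDENTS_PER_CAR_YEAR * AVG_DAMAGES
    premium_per_car = INSURER_LOAD * expected_payouts / CARS_ON_ROAD

    print(f"expected payouts per year: ${expected_payouts:,.0f}")
    print(f"premium folded into each car's price: ${premium_per_car:,.2f}")

    # Nothing here turns on foreseeability or on who wrote the software.
    # A worse safety record raises ACCIDENTS_PER_CAR_YEAR, which raises the
    # premium and the sticker price. The market, not the courtroom, applies
    # the pressure.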

The professor said this would change the questions raised from ones of law to ones of economics. Good, I thought. But I don't think he thought that would be good. In his defense, he's a computer scientist, not an economist, and he was lecturing on legal theories at the law school.

I'm not an economist either, but it seems to me this would be a sensible way to allocate the risks of exploring this brave new world. It was the pooling of risk, which is the essence of insurance, that made the Dutch East India Company a success. Ships were getting lost on the long journey to trade with Asia. If your ship made it, great. If not, you were wiped out. So the shipping companies banded together. After that, each shipper bore only its share of the risk of one of the fleet going down. They and their British counterparts prospered and opened up not only trade but the world. (Also, abhorrent monopoly abuses and, in the case of the English, the slave trade, but that's for another essay.)
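
The same arithmetic can be sketched in a few lines of Python. The fleet size, ship value, and loss rate below are all invented; the point is that pooling leaves the expected loss unchanged while taming the worst case:

    import random

    random.seed(1)  # fixed seed so the illustration is reproducible

    N_SHIPPERS = 10        # hypothetical fleet, one ship per shipper
    SHIP_VALUE = 100_000   # invented value of each ship and cargo
    P_LOSS = 0.2           # invented chance a given ship goes down

    # One trading season: which ships are lost at sea?
    sank = [random.random() < P_LOSS for _ in range(N_SHIPPERS)]

    # Without pooling, each shipper bears the whole loss of his own ship.
    solo = [SHIP_VALUE if s else 0 for s in sank]

    # With pooling, each bears an equal share of the fleet's total losses.
    pooled = sum(solo) / N_SHIPPERS

    print("losses without pooling:", solo)
    print("loss per shipper with pooling:", pooled)
    # Expected loss is P_LOSS * SHIP_VALUE = 20,000 either way, but pooling
    # turns "ruined or unscathed" into a predictable, survivable cost.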

The law is elegant and sometimes almost magical. It strives gallantly to equitably allocate responsibility for the way we conduct ourselves. But it is slow, slow, slow. Cases take years to come to fruition, and even then they may decide only a small part of the legal question. One piece of the puzzle. It can take decades for the picture in the mosaic to come into focus.

The law, by its nature, follows. Technology, by its nature, leads. I suggest we let the risk-pooling model of the East India Company set the sails of technology explorers. The law will catch up eventually. That will be a good thing. But it doesn't seem to me to be a good idea to put lawyers at the helm of the ship that sets out to discover the new world.

1 comment:

  1. Nice thoughts. I have to wonder if modern conveniences and safety always equal a higher quality of life. Your last comment about the law eventually catching up with technology isn't always true, or maybe it's just semantics. Today, the law has a tendency to hold technology back until it understands. But maybe the end result is the same.
