Science fiction writers have been following – or deliberately rejecting – Isaac Asimov’s Three Laws of Robotics since he first wrote about them in the 1940s. But now that we live in a world where robots are vacuuming our floors – not to mention operating on our bodies – lawyers are getting into the act.
Law Professor Ryan Calo is working on a major paper (still in draft stage) arguing that — as with the cyberlaw that has accompanied the explosion of the Internet — robotics is going to require new law, in no small part because robots have both software coding and a physical presence and can take unanticipated actions. (A summary of what he’s doing is available here.)
Calo isn’t talking about the potential for artificial intelligence (though it’s easy to argue that Asimov’s robots crossed the line into sentience); he’s strictly dealing with robots that are programmed with variable responses. But he does come to one rather surprising conclusion:
[W]e may be on the cusp of creating a new category of legal subject, halfway between person and res [thing].
Robots may not be sentient, much less human, but Calo thinks we’re going to have to deal with them differently than we deal with the other physical objects in our lives – even differently from the way we deal with computers and the Internet.
He raises a lot of interesting questions. Manufacturers are already using 3-D printers to create small parts for machinery – that is, using design software to create items that were once made by machinists. It won’t be that long before we’re buying the right to download a product straight to our personal printer. Who’s going to be responsible if that product turns out to be harmful?
Right now, manufacturers and product distributors are strictly liable for defective products, meaning that you can sue the company that made your toaster if it sets your house on fire due to a defect in the wiring. But it could be argued that you’re the manufacturer of the toaster if you “built” it yourself by downloading plans to your printer. Of course, you didn’t do the design that caused the defect.
What about drones? If the Amazon drone delivering a package to your house hits the neighbor’s kid playing in your front yard, is that your responsibility or Amazon’s?
Then there are privacy and security issues. The physical presence of robots changes our sense of being spied on. We may know our phones are tracking us, but that happens out of sight. The police drone flying around our neighborhood is much more tangible.
Calo says people don’t think robots are human – at least, not exactly. But we do feel concern for them. Soldiers have risked their lives to save a robot. People might run into a burning building to save one. (I might run into a burning building to save my computer, but mostly because re-creating everything on it would be a hell of a chore.)
Should there be laws prohibiting robot abuse? Would robot abuse be a sign that a person was more likely to abuse people?
Calo convinced me that robots are going to require some legal changes, and that while it may be beneficial to build those changes on the cyberlaw we have now, the court decisions and legislation we’ve made about the Internet are not going to be enough to cover all the issues.
However, I’m not sure he’s right when he draws a bright-line distinction between robots programmed to do different things depending on circumstances – which is inherent in all of them and perhaps most obvious in self-driving cars – and artificial intelligence. I’m beginning to think we won’t know objects have crossed the line into sentience until after it happens.
I’m not advocating votes for robots just yet, but the technologists I know are starting to convince me that sentient objects are just a matter of time. A lot of good SF has addressed this by dealing with whether it’s legal to shut down a sentient computer, but I haven’t seen much that addresses who’s at fault legally if the AI goes rogue. Is the AI alone responsible for criminal actions?
We certainly need to start figuring out the new rules for robots, but as we do this, we’d better keep in the back of our minds how we’re going to deal with it when our self-driving cars start deciding they’d rather go to Tucson than Minneapolis.