The Serious Side of Robotics

Two unrelated events involving robotic systems this summer highlight the serious side of robotics. The first is the traffic fatality associated with the semi-autonomous self-driving feature of the Tesla. The second is the use of a bomb-carrying robot to kill a civilian suspected of killing several policemen in Dallas. These two events, both the first of their kind, clearly fall within the purview of what robotics is designed to handle: the dull, dirty, and dangerous.

At the time of this writing, the blame for the driver fatality hasn’t been placed on either the driver or the Tesla, and that isn’t the point. What matters is how the accident foreshadows our increasing dependence on and interaction with automated systems. At some point in the future, fatalities associated with automated trains, planes, and cars will no longer make front-page news. While hopefully rare, these events will simply be expected because machines, programmers, and human users aren’t perfect. Even if they were, there’s no avoiding someone intent on causing an incident. As far as I know, a smart car capable of avoiding a human driver intent on causing a head-on collision has yet to be developed, or even contemplated.

The use of bomb-carrying robots is certainly nothing new in the military. Guided missiles of all sorts have been around for decades. What’s new is their use against a civilian on US soil, and that makes the event more real, at least to me. I can’t help thinking of Skynet in the Terminator movies or of the RoboCop series. I’m not saying that robotics shouldn’t be used to save lives in a situation like that in Texas, just that the implications for future armed robotic systems deployed domestically are worth considering. For example, I’m all for developing drones to support our troops overseas, but I’m not ready for armed drones to be circling overhead in Boston.

Clearly, gone are the Three Laws of Robotics as proposed by the sci-fi writer Isaac Asimov:

  • A robot may not injure a human being or, through inaction, allow a human being to come to harm.
  • A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.
  • A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws.

If I may offer a more pragmatic, modern set of laws that follows current use patterns, it would read as follows:

  • A robot may not injure a second human being or, through inaction, allow a second human being to come to harm, unless directed to do so by a first human being, either through software instruction or direct command.
  • A robot must obey the orders given it by the first human being, with direct command taking priority over software instructions.
  • A robot must protect its own existence as long as such protection does not endanger the first human being.

In my set of laws, there’s clearly an “us versus them” backdrop, with the second humans, who are deemed to deserve harm, kept separate from the first humans, who control the robots.
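To make the priority ordering in these laws concrete, here is a minimal sketch in Python of how the decision logic might look. It is purely illustrative: the function names and flags are hypothetical, and no real robotics framework is implied.

    # Illustrative only: encodes the priority ordering of the modified laws.
    # All names here are hypothetical; no real robotics API is implied.

    def choose_order(direct_command, software_instruction):
        """Second law: a direct command from the first human takes
        priority over software instructions."""
        return direct_command if direct_command is not None else software_instruction

    def may_execute(order, endangers_first_human, harms_second_human,
                    directed_by_first_human):
        """First and third laws: harming a second human requires direction
        from the first human; nothing may endanger the first human."""
        if order is None:
            return False
        if endangers_first_human:        # third law: protect the first human
            return False
        if harms_second_human and not directed_by_first_human:  # first law
            return False
        return True

    # Example: even a direct command is refused if it would endanger
    # the first human (third law).
    order = choose_order("detonate_here", None)
    print(may_execute(order, endangers_first_human=True,
                      harms_second_human=False,
                      directed_by_first_human=True))     # False

Note that the “unless directed” clause of the first law is the only thing separating this logic from Asimov’s original formulation.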

I’m sure we’ll work out the technical issues in self-driving cars, weaponized robots, and the like. I’m much less certain that the humans deciding when and how to rely on these technologies will consistently make the right choices. I’m much more comfortable with Asimov’s proposed laws than I am with my own set. I’d love to hear your comments on the serious side of robotics.  SV


Posted by Larry Lemieux on 08/25 at 01:44 PM

