Thursday, June 11, 2020

AI and the "3 Laws of Robotics": Could it be done?


     For those of us who crawled under the covers with a flashlight as we were growing up, Isaac Asimov will probably always be the first thing we think of when the word "robot" turns up. Of course, in Asimov's world robots were always humanoid in appearance. In today's world, with robots becoming more and more integrated into society, we recognize that a human-appearing robot will fill only specialized niches.
     A robot (from the Czech word robota, meaning "forced labor") only earns that name if, once given a directive, it can work toward a variety of goals without further instruction. That ability rests on Artificial Intelligence (AI), though I doubt there is a consensus on the definition of AI. The greater the number and variety of autonomous actions that can be performed, the "higher" the level of AI. Some (such as Alan Turing) believe that AI has been achieved only if one cannot distinguish between the "artificial" responder and a human one.
     Humans have the distinction of being able to do a huge variety of activities, including activities that have never been done before. So, for AI to approach human capabilities, the mechanism would need to be programmed to do them: hundreds, even thousands, of tasks. Often such tasks are really compositions of "subtasks," so that those thousand tasks might be built from, perhaps, 75 subtasks put together in different specific sequences.
     This is rather a lot of work, however, and it would be so much "easier" if the robot could determine how to do new tasks by itself. If a robot is pre-programmed, it can only do what it has precisely been told how to do. "Scratch the right side of the nose" is a different task from "Scratch the left side of the nose". (Sensors could be installed that activate a signal called "itch" and then the program could say "scratch the place that itches".) This self-programming is often called machine learning, which is considered to be a subset of AI.
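     To make the distinction concrete, here is a minimal sketch (in Python, with every name hypothetical) of the pre-programmed approach: tasks are fixed sequences drawn from a small library of subtasks, and a sensor reading such as "itch" parameterizes a single routine so that scratching the left and the right side of the nose need not be two separate tasks.

  # Pre-programmed tasks as fixed sequences of subtasks (all names invented).
  SUBTASKS = {
      "raise_arm":     lambda ctx: print("raising arm"),
      "extend_finger": lambda ctx: print("extending finger"),
      "scratch":       lambda ctx: print(f"scratching at {ctx['target']}"),
      "lower_arm":     lambda ctx: print("lowering arm"),
  }

  # Thousands of tasks could be built from a few dozen subtasks recombined.
  TASKS = {
      "scratch_itch": ["raise_arm", "extend_finger", "scratch", "lower_arm"],
  }

  def run_task(name, context):
      """Execute a pre-programmed task: each subtask, in its fixed order."""
      for step in TASKS[name]:
          SUBTASKS[step](context)

  # The "itch" sensor supplies the parameter, so one program covers every
  # location that can itch.
  itch_location = "left side of nose"   # stand-in for a real sensor reading
  run_task("scratch_itch", {"target": itch_location})

     Anything not in the task library, though, the robot simply cannot do -- which is exactly the limitation that machine learning tries to escape.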
     Many people are afraid of robots taking over from humans -- whether by developing contempt for humans, as in the "Terminator" series, or by being used as supersoldiers to control, hurt, or kill people. To help people hold a more compassionate and trusting view of robots, Isaac Asimov invented the 3 Laws of Robotics, which are as follows:
  1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
  2. A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.
  3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws.
This is a hierarchy of importance: human life, then following orders, then self-preservation. For humans, self-preservation may sometimes take the top position; for a robot it never does -- the idea being, once again, that human life is of more importance than robotic life.
     We will deal only with the "First Law" in this blog. It breaks into two categories -- actively causing harm, and failing to prevent harm (when prevention is known and possible). For pre-programmed robots, the first is easy to avoid: just don't program that task. For self-programmed robots, there needs to be an "override" program that blocks any action that would cause harm, along the lines of the sketch below.
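     In barest outline, such an override could be a gate that every candidate action must pass through before execution. This is only a sketch under generous assumptions: the predicted_harm function below is a hypothetical placeholder for the genuinely hard part, estimating whether an action will injure a human.

  # A First Law "override" sketch (all names hypothetical): refuse to
  # execute any action whose predicted harm is non-negligible.
  def predicted_harm(action, world_state):
      """Placeholder for the hard problem: estimate the probability that
      `action`, taken in `world_state`, injures a human."""
      return 0.5 * world_state.get("humans_in_path", 0)  # toy estimate

  def first_law_override(action, world_state, execute, threshold=0.01):
      """Gate between the robot's planner and its actuators."""
      if predicted_harm(action, world_state) > threshold:
          print(f"OVERRIDE: refusing {action!r} (predicted harm too high)")
          return
      execute(action)

  first_law_override("drive_forward", {"humans_in_path": 1},
                     execute=lambda a: print(f"executing {a!r}"))

     Note that the gate itself is trivial; everything difficult is hidden inside predicted_harm.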
     The second, prevention of harm, is much, much harder, as it requires "judgement" -- deciding how dangerous a situation is and what contingencies might arise that put people in danger. This is the situation currently facing those who are trying to program self-driving cars. The problem is similar for pre-programmed and self-programmed robots/mechanisms. For pre-programmed robots, every contingency must be thought of in advance -- good luck; that isn't likely to be possible. One can envision a self-programmed robot "learning" enough to match human judgement -- but we currently don't know how to do that. Maybe we will learn.
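     One way to picture that "judgement": prevention of harm means enumerating foreseeable contingencies and deciding which are dangerous enough to act on. The sketch below (hazards, probabilities, and severities all invented for illustration) intervenes when expected harm -- probability times severity -- crosses a threshold. The trouble, of course, is that no one yet knows how to generate the hazard list or the probabilities reliably.

  # Judging which contingencies warrant intervention (all numbers invented).
  HAZARDS = [
      # (description, probability, severity on a 0-10 scale)
      ("child near open stairwell", 0.30, 8),
      ("kettle left on",            0.10, 4),
      ("loose rug in hallway",      0.05, 3),
  ]

  def interventions_needed(hazards, threshold=1.0):
      """Return hazards whose expected harm (p * severity) exceeds the bar."""
      return [desc for desc, p, sev in hazards if p * sev > threshold]

  for hazard in interventions_needed(HAZARDS):
      print(f"intervene: {hazard}")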
     The "other hand" to this, however, is that implementation of the "First Law" is purposeful. It must be done deliberately. There are too many people, and too many groups, that currently have people deliberately hurting other people. It is not reasonable to think that usage of robots will be any better than usage of people. This leads us to the world of the "Terminator", programmed face-recognition assault drones, supersoldiers and all of the nightmares of technologic fear.
     In Asimov's world, "U.S. Robots and Mechanical Men" had a monopoly on building robots and was a benevolent entity that insisted on the 3 Laws being built into each "positronic" brain. In our world, there is no effective control of technology in a preventative mode (we can make laws that penalize use after the fact). I cannot imagine how we can prevent some of this harm from occurring, but there are a LOT of very creative, imaginative, intelligent people out there. Please start working on the problem while it is still theoretical.
