Saturday, February 11, 2017

Artificial Intelligence: Beyond the Turing Test


     In 1950, the British mathematician Alan Turing gave an answer to the question: how can you tell if a machine is intelligent? His (paraphrased) response was "if you cannot tell the difference between a human answering questions and a machine answering questions, then the machine has achieved intelligence". This Turing Test is not universally accepted, but it is probably the most widely used starting point for answering the question of what Artificial Intelligence (AI) is.
     Alan Turing's test was based on the idea of an interviewer and a responder: someone asks a question and someone answers it. This led to a series of experiments with computer programs that simulated (or imitated) "normal" human interviewer/responder situations -- perhaps between a therapist and a patient, a doctor and a patient, or a student and a professor/teacher. Naturally, there had to be a way to make it physically impossible to tell whether the responder was a machine or a human. The test also had, built into it, the requirement of equivalent skill in understanding and speaking/responding in a human language.
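     To make the protocol concrete, here is a minimal sketch in Python (not from the original post; all function names and canned replies are hypothetical) of one blinded session: the judge sees answers labeled only A and B and must guess which label hides the machine.

```python
import random

def human_respondent(question: str) -> str:
    # Placeholder: in a real test a hidden person would type the reply.
    return input(f"(hidden human) {question}\n> ")

def machine_respondent(question: str) -> str:
    # Placeholder: any conversational program could sit behind this function.
    canned = {"How are you?": "Fine, thanks. And you?"}
    return canned.get(question, "Could you rephrase that?")

def imitation_game(questions):
    """One blinded session: the judge sees answers labeled A and B and
    must guess which label hides the machine."""
    respondents = [human_respondent, machine_respondent]
    random.shuffle(respondents)                  # hide which is which
    labels = dict(zip("AB", respondents))
    for question in questions:
        for label, respond in labels.items():
            print(f"{label}: {respond(question)}")
    guess = input("Which respondent is the machine, A or B? ").strip().upper()
    actual = next(l for l, r in labels.items() if r is machine_respondent)
    print("Correct!" if guess == actual else "The machine passed this round.")

imitation_game(["How are you?", "What is your favorite memory?"])
```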
     In today's world, computer programs have advanced beyond simple questions and answers. We have computer programs beating humans at Chess and Go (and other games). The Turing Test might not be considered to apply to these situations, but many people would consider this a form of AI. We have computer programs/systems that use pattern recognition to identify potential suspects or targets of drones. So far, the final decision is still made by humans, but stories/films such as The Minority Report suggest the possibility of machines making final decisions -- even about what might happen.
     That is the "line in the sand" for people thinking about AI. Who makes the final decisions? Is it a human (with all of her, or his, faults and experience) or a machine (which, at heart, is still the result of a programmer's abilities and recognition of exceptions)? Isaac Asimov, in his Three Laws of Robotics, had the AI programming include self-restraints as to what the program/robot could, or could not, do without undergoing self-destruction.
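     One loose way to read Asimov's idea is as a guard that every proposed action must pass before execution. The sketch below is only an illustration under that reading -- the rule names, flags, and action format are all hypothetical, and real systems would be far more involved.

```python
# A loose sketch of Asimov-style self-restraint: every proposed action is
# checked against ranked rules before the program/robot may act. All names
# and flags here are hypothetical.

RULES = [
    ("would harm a human",          lambda a: a.get("harms_human", False)),
    ("would disobey a human order", lambda a: a.get("disobeys_order", False)),
    ("would destroy the robot",     lambda a: a.get("self_destructive", False)),
]

def permitted(action: dict) -> bool:
    """Allow the action only if no rule, checked in priority order, forbids it."""
    for reason, violates in RULES:
        if violates(action):
            print(f"Refused: {action['name']!r} {reason}.")
            return False
    return True

if permitted({"name": "open coolant valve"}):
    print("Executing action.")
```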
     Speed and safety. The primary reason for computer programs is NOT that they can do things humans cannot do; the primary reason is that they do things much, much faster (and reproducibly). So, if you design an AI that handles the coordination and operation of a nuclear reactor, you want the program to be able to respond very quickly. Putting a human into the decision path slows everything down. Who has the final responsibility?
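     The latency argument can be made concrete with a toy comparison (all numbers, names, and the trip threshold below are hypothetical, chosen only to illustrate the gap): the program decides within one control-loop iteration, while routing the same decision through an operator adds seconds at best.

```python
import time

TRIP_THRESHOLD = 15.0   # hypothetical reactor pressure limit, MPa

def automated_decision(pressure: float) -> bool:
    # The program decides within one control-loop iteration: microseconds.
    return pressure > TRIP_THRESHOLD

def human_in_the_loop_decision(pressure: float) -> bool:
    # Simulate an operator noticing the alarm, reading the gauge, and
    # pressing the button: optimistically ~2 seconds of added latency.
    time.sleep(2.0)
    return pressure > TRIP_THRESHOLD

for decide in (automated_decision, human_in_the_loop_decision):
    start = time.perf_counter()
    decide(15.7)
    print(f"{decide.__name__}: {time.perf_counter() - start:.4f} s")
```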
     The same question exists with the possibility of self-driving automobiles and trucks. It is likely that AI programs can already drive as well as an average driver -- assuming that all of their sensors work properly (they can detect objects, highway lines, sounds, bouncing balls, and the cars and buildings around them, ...). Certainly, in another five or ten years, AI self-driving programs will be able to control a vehicle much more safely (and more rationally -- no road rage potential) than humans. But they would be making the final decision.
     If a self-driving AI makes a mistake, or a necessary decision that costs lives, who has the responsibility? The programmer? The company that built the vehicle? The owner of the vehicle? What happens if the self-driving car is involved in an accident with a human-driven car? Is there a presumption of innocence on the part of the self-driving car?
     In all these cases, the program and machine are taking the place of the human. If you keep them "behind the curtain", there may be no way to identify whether they are human or machine. They PASS the Turing Test. But, when the curtain is removed, what is the final verdict? Who/what has the responsibility? Who/what makes the final decision?
