Tuesday, June 21, 2022

The Turing Test: Not enough?


     Of late, there have been declarations of Artificial Intelligence (AI) becoming sentient, and even louder protestations that it isn't true. Within computer science, there has been an assumption that this can be evaluated by use of what is called "The Turing Test". Created in 1950 by Alan Turing, a British mathematician considered a founding figure of AI, it was initially called the imitation game. The idea was that if you were unable to distinguish the conversation of a program from the conversation of a person, the program could be considered sentient.

     This was in 1950. People conversed on a regular basis in 1950. The criterion of being able to distinguish a person from a program, by analysis of conversation, seemed reasonable. Move the clock forward to 2022 and that criterion no longer seems sufficient. Those who say that AI has met the criterion set out by Alan Turing are probably correct. There are programs that use language BETTER than the average person speaking that language. According to the Turing Test, they are sentient. But, according to our current experience of programs, machine learning, and AI, it seems hard to give them a passing grade for sentience.

     Which brings us, once again, to the question of what sentience is. Some of the recent declarations point to the appearance of emotions and desires. Are these what distinguish sentience from mechanical responses? Of course, one can move to the religious, spiritual, and theological and say that what we are searching for is a soul. But if it is a matter of a soul, then we are in even worse shape for evaluation. There is no consensus on what a soul is. If it does exist, where does it reside? If it does exist, is it immortal, or is it tied to a living mind and in existence only as long as that mind and body exist?

     Science fiction and fantasy -- those genres of "what if" -- have approached this subject for many decades. But general agreement, let alone consensus, is nowhere to be found.

     If we use the criteria of a desire for survival and a desire for progeny, then AI has not yet reached that state. But if those are to be the criteria, should reaching them even be a goal? Mary Shelley's Frankenstein reflects many things: human hubris, the difficulty of accepting others, fear of the stranger (xenophobia). But, as seen in the dystopia of the Terminator series, a desire for survival and progeny combined with superiority in speed and execution may prove to be an ultimate competitor to the human species.

     So, let's move back to the start. If we use the Turing Test alone as the criterion for sentience -- we are probably there. If that is not sufficient, what qualities should be measured, and how should they be measured? And, in some situations, might creating something that meets such criteria be a mistake?

Biases and Prejudices: There is a difference

       It is always difficult to choose people on a jury. Every potential juror has a history, education, and daily life which influences th...