Wednesday, February 28, 2024

Frankenstein's Monster: AI's shadow

 

     Alan Turing, in 1950, published a paper called "Computing Machinery and Intelligence" while at the University of Manchester. The main gist of the paper was that if the output of a computer could not be distinguished from that of a human being, then the computer must be considered intelligent. This has long been a goal of those working on AI, and it is the primary basis for the general acclamation that we "have achieved AI". Various AI applications can produce artwork, prose, dialogue, and other output that cannot easily be distinguished from work created by a human. In a "blind test" (where the judges have no prior knowledge of which output came from a computer), such output would often pass as human.

     No one is saying that AI has reached the goal of achieving human thought. AI is trained (as humans are) on lots of input -- lots of data from lots of sources -- and is iteratively told what is right or wrong, "learning" how to do things, how to respond to things, and so forth. This leads us to two of the current problems with AI: intellectual property conflicts and the inability to discriminate good information from bad.

     Intellectual property conflicts are the most straightforward. In order to train, lots and lots of information must be fed in. While some of that information comes from public sources, other information comes from private, proprietary sources, and still more was created by individual people and may be part of their livelihood as well as their career growth and reputation. That "brushstroke" produced by a generative AI may be copied from an artist's work. That "narrative work of imagination" may be borrowing from the insights of dozens of writers who, justifiably, do not want their copyrighted material used without their permission. AI-generated code may (and almost certainly will) make use of code that was released as freeware as well as proprietary company code (which probably was never authorized to be accessible via the Internet in the first place).

     Currently, an AI information-gathering product can present bad information. As an AI model is being taught, it is given access to a lot of information. Hopefully, most of this information will be factual and truthful. But some of it will be misinformation (false information passed along in good faith), disinformation (deliberate lies), or declared works of fiction. Some of this can be filtered out by human monitors as the model is trained, but that just leaves the judgement in the hands of those humans.

     Humans do not have a good track record of doing the research needed to determine the truthfulness of information -- they cannot be relied upon to do it for an AI model. It is conceivable that an AI model might be programmed with algorithms that allow it to compare claims, weigh the reliability of sources, and detect logical inconsistencies. It would even have an advantage, since it does not (currently) have emotions. But AI models are NOT at that point now -- and any future success will depend on the accuracy of such algorithms.
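     As a purely illustrative sketch (not a description of any existing product), the kind of check that paragraph imagines might start out as something as simple as weighing each claim by an assumed reliability score for its source; every name and number below is invented for the example.

        claims = [
            # (claim text, assumed source reliability 0.0 - 1.0) -- values made up
            ("The bridge opened in 1932", 0.9),
            ("The bridge opened in 1932", 0.7),
            ("The bridge opened in 1955", 0.2),
        ]

        # Add up the reliability behind each distinct claim.
        support = {}
        for text, reliability in claims:
            support[text] = support.get(text, 0.0) + reliability

        # Report the best-supported claim and how dominant it is.
        best = max(support, key=support.get)
        confidence = support[best] / sum(support.values())
        print(best, f"(confidence {confidence:.0%})")

A real system would need vastly more than this -- weighing sources well is the hard part -- which is exactly why such algorithms are not here yet.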

     Isaac Asimov, considered one of the best of the "classic" science fiction authors, used as part of his robot-focused novels an idea called "The Three Laws of Robotics". I'll let readers investigate the evolution of the laws, their exact wording, and the interpretations discussed within the various stories. Let's just say these "laws" are meant to be a "leash" to prevent robots (or AI) from hurting people. And, though the laws work pretty well in the stories, there are almost always loopholes in any "law", and the three laws are no exception.

     But current AI has no such thing as the three laws -- and no practical way in sight to create such laws or to ensure that they will be part of any, and all, AI products. As long as AI is monitored, and final decisions are made by humans, the moral (and legal) responsibility for any problems rests in the hands of humans. If it is unmonitored and autonomous, it is unclear where the responsibility would lie. There are presently court cases struggling to delineate such matters in the relatively straightforward arena of self-driving cars.

     Computers, and programs, are NOT "smart". Compared to most humans, computers can be considered rather poor in intellect. They can only do very, very simple things. Arithmetic, comparisons, and actions based on values make up the very limited set of operations they can perform. Computer programs are eventually broken down into these very simple "machine" instructions. BUT, computers can do these simple instructions very, very fast (and they get faster every year). This gives the illusion of great ability by the computer.
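     For a small, hedged illustration of that breakdown (using Python only because its built-in dis module makes the decomposition visible; the average function is just an invented example), consider how a one-line calculation turns into a handful of trivial steps:

        import dis

        def average(a, b):
            # One "high level" idea: the average of two numbers.
            return (a + b) / 2

        # Show the simple instructions the function is broken down into:
        # a few loads, an add, a divide, and a return -- nothing "smart",
        # just simple operations that the machine performs very quickly.
        dis.dis(average)

Real hardware goes further still, down to individual machine instructions, but the point is the same: the power comes from speed, not cleverness.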

     Humans make mistakes. Computers (or their hardware and software) can make mistakes. But computers can make many more mistakes, much faster, than humans because their basic strength is speed. Plus, the more humans rely upon computers (and their hardware/software), the harder it becomes to correct errors, because many people erroneously think that computers "cannot make mistakes". Even worse, humans are often not given the ability to override the decisions that come out of computer programs.

     This is particularly relevant in many areas of AI. People do a poor job of facial recognition. Computers can do a better job, but not a perfect job. If drones are given a locate (and, perhaps, violent action) instruction to carry out autonomously, then they will sometimes pick the wrong person. The situation is, of course, even worse if computers get to make the decisions about all-out war (as in "WarGames", "Terminator", or many other apocalyptic films). Politicians are quite at ease with "collateral damage", but if you, or someone you love, were part of that collateral damage, you would probably not feel as calm about it. If facial recognition is used as part of security, then "doppelgangers" and others who look similar to those on no-fly lists (or other lists) will forever be fighting for their right to exist.
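     A bit of back-of-the-envelope arithmetic shows why "better but not perfect" still hurts; every number below is an assumption chosen only to illustrate the scale of the problem, not a measurement of any real system.

        # Assumed, illustrative figures -- not measurements of any real system.
        false_positive_rate = 0.001   # 0.1% of innocent faces wrongly matched
        crowd_size = 100_000          # people scanned at one large venue
        targets_present = 1           # one person actually on the watch list

        expected_false_matches = false_positive_rate * (crowd_size - targets_present)
        print(f"Innocent people flagged: about {expected_false_matches:.0f}")

Under those assumptions the system is right 99.9% of the time for each face it checks, yet it still flags roughly a hundred innocent people for the one genuine match -- the "doppelganger" problem in numbers.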

     Computers, their programs, and AI can be of great benefit to humans. But they should always be subordinate to humans, because humans are the ones who suffer from any mistakes made. Recognition of the fallibility of computers is very important, and self-regulation within AI programs is essential to reduce the escalation of errors.

To Waste or to Waist: That is the question

       As is true of many people growing up in the US, I was encouraged to always clean my plate (encouraged is putting it mildly -- I remem...