I took an AI course at the University of Idaho in the fall of 1976. Forty-nine years later, companies are releasing products with that same name attached to them. Is this the same AI that was talked about in 1976? No, not really. If you search for the kinds of AI that exist, you will find various lists organized by capability, functionality, and technique/approach. Current AI implementations sit at the beginning levels, often called "Narrow" AI.
The course that I took in 1976 mostly surveyed early programs such as ELIZA and early approaches to Machine Learning (ML). Memory capacity and CPU power were very, very small compared to those of today, and that limited implementations of various approaches. As is true of all computer programs, the main advantage of having the computer do the work is that it can do things very, very fast. Today's computing power can achieve results only dreamed of in 1976. (Although, if reality is desired, it still takes a lot of electricity and hardware to provide those capabilities even today.)
GIGO
Current Narrow AI implementations such as ChatGPT, Siri, Google Maps, or facial recognition are focused on a single type of task. Even the task of answering questions (ChatGPT, etc.) is a single focus. Responses are made according to the data that the program has had access to, and the training (yes/correct, no/incorrect) about appropriate responses. Two problems (among many) arise: are the data being examined valid (do they meet definitions of fact and correctness), and is there bias (usually introduced by the trainer)? What happens if the system is given proper data but the trainer tells it to reject those data in favor of dubious, or clearly incorrect, data?
Whether it is because of faulty data sources or deliberate subversion by the trainer, these are instances of "Garbage In, Garbage Out" (GIGO). This is an old term used by early programmers, but it is still valid today. If bad data are part (or all) of the input, then the output cannot be trusted. A search-response collator (such as ChatGPT) must be treated the same as individual searches that bring back various "hits" from different websites. In other words, even responses from "AI" bots need to be fact-checked.
Even though you should not blindly trust the output, a Narrow AI can still be very useful. It can very quickly gather, compare, and present results similar to what you would get if you had done a search (or multiple searches), read through the contents, and correlated them yourself. The difference is that the Narrow AI can do it much faster; but, since it does not have true creative/interpolating intelligence, it comes with the caveats mentioned above.
Creativity
When a Narrow AI presents you with a result in a form that you might submit somewhere or use directly, recognize that it is an "average" of many sets of example data. Although the definition of "average" here is a bit vague, you will be presented with a result that is roughly the middle ground of many possible results. It is very unlikely to be of the very highest quality, but it is likely to be adequate for your purposes. Unless you have absolutely no skills in the area of the query, I would suggest using the AI result only as a starting template. Change it, emphasize what matters to you, and make it your own.
Permissions
When a Narrow AI is trained on data, it simply grabs data from wherever it has access. When results are presented, the sources are hidden from the user. Unlike what would happen if you did the same research yourself, it is difficult (perhaps impossible) to know how much came from one source and how much from another. There is no explicit caution about taking the source into consideration or respecting intellectual property rights. This is a legal issue that needs to be pursued (soon) by various governments and legal bodies.
Timeliness
When you do a search through the Internet, you can filter results by age; one of those options is to retrieve only the very latest (say, past month) data. When you use a collating "AI" search bot, you do not know how old the data are. During the first beta-testing periods of the various collating search bots, they displayed an explicit warning that data were obtained only from sources older than a specific date; current data were not included. Lately, such warnings have disappeared, but that does not mean the latest data are incorporated. Given the way these programs must be trained, it is very unlikely that the latest data are used.
Summary
Use of an AI program can be beneficial. It can help you create a good, if average, example of your desired result. It can speed up searches and correlations of large amounts of data. Results will not be as good as those from expert human intelligence, and there are potential permissions conflicts. Do not treat the results of an AI collating bot as firm, objective truth; treat them the same as you would the results of any other search of data throughout the Internet: with care and a need to fact-check potentially subjective results.