Tuesday, May 21, 2024

Ethics: Always in process

 

     There is a lot of talk about ethics currently -- especially in relation to developments in AI. According to the dictionary, ethics are the moral principles that govern behavior. Note that there is nothing about "universal" or "unchanging" in that definition. Ethics are grounded in a specific society and its rules of behavior. Societies vary widely in their viewpoints on many issues -- sex, gender roles, religion, death, expected behaviors, taboos, and so forth. A society that expects resurrection of the individual will look at death very differently from a society that expects only one life (whether or not it believes in an "afterlife").

     Ethics also change along with societal norms. Fifty years ago, there was a set of behaviors that were normal, and expected, from members of the community. Some of those behaviors are no longer acceptable in the present time. The ethics tied to current societal norms may reasonably be applied to behaviors that are happening now. They can also be applied to past societies, writings, and other memorabilia -- but ONLY as if one were using a microscope. The differences between the ethics of that time and the ethics of the current time can be examined as documentation of change. But the past behavior WAS acceptable at the time that it happened, because the rule set -- the ethics of the time -- was different. Each period of time and each distinct society has its own rule book, which should not be used to judge behavior within other times/societies.

     The above is true -- but only in the abstract. In real life, everyone believes that their current ethics are THE correct ethics and that ethics that differ are WRONG. People with different ethics -- whether of the past or of a different society or culture -- are seen as BAD. This works in the opposite direction as well, of course: people of the past would consider OUR behavior BAD when judged according to the ethics of their time. This almost always leads to conflict and even to wars.

     A cultural anthropologist must always be very careful upon entering a different culture/society, because they are there to examine, analyze, and document. Since they do not yet know that society's ethics, it is easy to violate them in some manner, which would make their work much more difficult or even impossible.

     We have been talking about the flux of ethics. Within the current time and current society there is a set of ethics rules that applies to everyone within that time/society. Those who do not follow them must face social, and possibly legal or martial, consequences. This is what is typically called an ethics problem.

     AI can face issues similar to those of human individuals. One is the ownership of intellectual property. For humans, taking such property is called plagiarism. Generative AI systems need to be "trained" by being given access to vast amounts of information -- not all of which is legal for general use. In that way, may AI be considered to be plagiarizing, or even stealing, from others working within the same subject matter? May it also diminish the value of those systems, or of the people's valuable intellectual property, from which it absorbs information? I phrase these as questions because they are part of the questions posed in defining ethics for AI systems.

     Humans can (and do -- much too often) lie. They do so deliberately (called lying "with malice") and accidentally (by not verifying information before passing it along). If AI is trained with bad information (deliberately bad or non-verified), then IT will use that bad information and pass it along as "good". For humans, this is called slander or libel -- though it is often not subject to legal ramifications when done within social media. What is it when done by AI? Can this distortion of reality, and of worldview, by AI systems be considered a dangerous crime? This is another area of ethics in connection with AI systems.

     Humans can, and do, commit crimes as defined by the legal systems of their society. AI systems can be trained to do so faster, and possibly less noticeably, than humans can. AI systems can be trained to "phish", steal private (supposedly inaccessible) data, forge accounts, and perform other non-physical actions. If given access to "waldos" (physical systems capable of being controlled online), they can even commit physical crimes. What are the legal aspects, and the ethics, of using AI systems for such actions?

     Humans can, and do, violate ethics systems. AI can be trained, or designed, to commit similar violations, and to do so faster, and more effectively, than humans. These are the areas that badly need to be addressed before the problems that will occur become too unwieldy to deal with.

