Showing posts with label memory. Show all posts

Friday, January 26, 2024

Mission Impossible: concurrent multitasking for individuals

 

    There is always a temptation to try to work on more than one thing at a time. Back in the long ago, it might take a half hour or more for a program to finish compiling. During that time, one did something else -- possibly even something useful (isn't playing games useful?). But that was not truly multitasking. I was either working on the program, submitting it, playing games, or testing the program after compilation completed. Compiling was a background task that required no active effort.

     It is certainly possible to work on more than one thing at a time if you have a group of people. One plus one does not quite equal two -- but it certainly allows more to be done. Reducing the need to interact improves efficiency (but not necessarily quality). As is seen in "The Mythical Man Month", projects can easily reach a size and complexity such that adding additional people is counter-productive. Teams can still add productivity if each task is delineated sufficiently. Consider a building crew for a house: two people working together on framing, one person doing insulation, one person doing materials preparation. When tasks can be cleanly split, productivity from multitasking reaches its best -- but each person (or "processor") is working as an individual unit.

     As implied, a computer can make use of multiple processors -- or cores -- to allow simultaneous task performance. Four cores, six cores, eight cores (usually in multiples of two -- does anyone know why?). Once again, the scheduling of the software being performed must be coordinated across the cores.

     This blog, however, is primarily aimed at multitasking for individuals (or, more properly, the inability of a person to truly multitask). It may be easier to explain this by going outside ourselves and using a single-processor system as an example. A processor is moving along, performing the actions required by a particular program. Then, for whatever reason (operating systems, timers, and scheduling algorithms are not current topics), another program needs to run. The processor (actually part of yet another program called the operating system -- or done explicitly by each program) needs to "write down" the current "context". This context is an image of the situation at the moment of moving to the other program. What is the next line of code to be executed? What are all the current (temporary and permanent) results from the program? All of these need to be written down so that, when processing resumes on the interrupted program, it can continue as if nothing had ever interrupted it.
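A saved context is essentially a snapshot of everything a task needs in order to resume. As a rough sketch (the field names here are illustrative -- a real operating system saves hardware registers in a few lines of assembly, not Python fields):

```python
from dataclasses import dataclass, field

@dataclass
class Context:
    """Illustrative snapshot of a task's state at the moment of a switch."""
    program_counter: int                            # which instruction runs next
    registers: dict = field(default_factory=dict)   # temporary results in flight
    stack_pointer: int = 0                          # where local data lives

def switch(current: Context, saved: Context) -> Context:
    """Store the running task's context and hand back a previously
    saved one, so the other task resumes exactly where it left off."""
    # In a real kernel, 'current' would be written to the task table here;
    # in this sketch we simply return the snapshot to be restored.
    return saved
```

The point of the sketch is only that the snapshot must be complete: miss one register and the resumed task does not continue "as if nothing had ever interrupted it."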

     This process -- context switching -- takes a certain fixed amount of time. So, if you are running two programs in "multitasking" (true but not concurrent), the sequence looks like: time in program one, time to store context, time in program two, store context, restore program one's context, time in program one, ... The more often programs need to swap with each other, the greater the percentage of overhead -- which implies that the more programs being swapped between, the smaller the active time available per task and the greater the percentage of overhead. For example, if dividing up 150 secs of activity among the tasks:

2 tasks present; 75 secs task 1, 25 secs save/swap, 75 secs task 2, 25 secs save/swap: 25/100 = 25% overhead

10 tasks present; 15 secs task 1, 25 secs save/swap, 15 secs task 2, 25 secs save/swap, 15 secs task 3,
25 secs save/swap, 15 secs task 4, 25 secs save/swap, ... : 25/40 = 62.5% overhead

These numbers are greatly simplified (real figures would probably be in nanoseconds, for example) but the principle holds -- the more tasks, the greater the overhead. Note that storage and retrieval of context requires space in addition to time. Too many tasks, too few resources, and you have a system unable to do useful work.
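The arithmetic above generalizes: with a fixed budget of active time split evenly among the tasks and a fixed cost per swap, the overhead fraction is swap_cost / (time_slice + swap_cost). A minimal sketch, using the article's simplified numbers (150 seconds of activity, 25 seconds per save/swap):

```python
def overhead_percent(active_total: float, n_tasks: int, swap_cost: float) -> float:
    """Percentage of total time lost to context switching when
    active_total seconds of work are split evenly among n_tasks,
    with one swap_cost-second save/swap after each task's slice."""
    time_slice = active_total / n_tasks
    return 100 * swap_cost / (time_slice + swap_cost)

print(overhead_percent(150, 2, 25))   # 25.0  -- matches the 2-task example
print(overhead_percent(150, 10, 25))  # 62.5  -- matches the 10-task example
```

Plotting this for increasing n_tasks shows the overhead climbing toward 100% as the time slices shrink, which is exactly the "unable to do useful work" regime described above.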

     Although there appear to be some similarities between computers and the way humans process information (after all, we did design them), we are not the same. We probably process information much differently than a computer does. But the effects can be the same.

     Note that humans are able to walk and chew gum. We can listen to music while writing a letter. This is because different activities use different parts of the brain. In this manner, we have the equivalent of multiple processors -- however, these are not separate general processors -- they are very task-specific processors.

     We can only do one thing from a given category at a time. We can have laundry washing in the background, or a loaf of bread in the oven, but those are not tasks in which we are currently active (once we reach the point of taking the laundry out, we are active again). When we change tasks, we need to keep track of just what we were doing at the point in time we changed tasks in order to resume. The more tasks we switch between, the less time we have for each task because of the overhead.

     How do we save the context when changing tasks? The process is a statistical curve (maybe not quite a standard bell curve but still ...). When we are young and we get distracted, we may never get back to the original task (which might be the point of the distraction). As we get older, we learn to store context in medium-term memory (maybe jot a short note in addition) and get back to the original. We get "better" at doing more and more tasks -- but we are still decreasing efficiency with each additional task. At some point, we lose context. We cannot remember well enough to resume a task or a set of tasks. We can start recording the context more fully on paper but then we have to file and retrieve that piece of paper.

     From my 66-year-old point-of-view, that is a problem that gets worse with age. Short and medium-term memory gets more clogged with past events/contexts/swaps and we are less efficient at storing and retrieving those contexts. "Why did I go into the kitchen? Where did I put my keys when my son called asking what was for dinner?" At first it is irritating and then, with active understanding, it becomes humorous. I have learned that not everything MUST be remembered -- and that has reduced stress. But it will probably get worse.

     But I am in good company: the ability to task swap is a matter of degree, but the challenges exist for all.

Saturday, March 19, 2016

Malleable Memory: What you remember isn't necessarily what occurred


     Memories are that special something that make us who we are. Twins may have the same genetics but, even if raised in the same environment, they cannot have the same memories. You would think that, if this is the case, memory must be the most reliable of attributes of the human condition.

     But this is NOT the case. Memories can be formed in ways that make them inaccurate at the very beginning -- and they change over the years to fit in better with other attitudes and stories concerning the subject matter.

     Joseph Campbell related a local tale of a village that had a mischievous god who visited. There were two rice fields on different sides of a small dirt road. This god put on a big woven hat -- one side was bright red and the other side was bright blue. He put it on his head such that the people working in the field on the right saw the red and the people working in the field on the left saw the blue. He reached the end of the road and turned around -- but, while turning around, he also reversed his hat such that now (going the opposite direction) the people on the right saw the blue side and the people on the left saw the red. Thus, for both walks down the road, people in each field saw the same color.

     When the villagers finished their work for the day, they met in the village and talked about this strange man who walked through the fields wearing a bright red hat. "NO," said others, "he was wearing a bright blue hat." So, they argued and fought and the god laughed. This is one of the first ways that memory is shaky -- no one can observe everything there is to be observed, and different people will observe different things.

    A second area that changes the formation of memories is that of expectations. These expectations are based on personal histories, biases, and even current events. During a classroom experiment, two people (without forewarning the class) entered a classroom -- a man and a woman of different ethnic backgrounds. They loudly started to argue, then fight, and then left the room. After they left, within five minutes of their arrival and before the students could discuss it among themselves, the instructor had the students write down an account of what happened.

     When the instructor read through the accounts, she found that no two accounts read the same. Some of the things that the actors did were reversed -- things that the woman did were recounted as things that the man did, and so forth. The interpretation of who did what first, and which one was justified in their reaction, also changed. In general, if a history book were to be written from these first-person accounts, there would have to be one per person.

     The third area of altering early memories is that of peer influence. Once a situation is discussed, many people will start changing their memories to match what the most popular people remember -- or will allow for change based on arguments presented by others of more vocal temperament.

     Within a few days, the people no longer remember any different account. These are some of the many aspects of how initial memory can be altered. In the next blog, we will talk about longer-term memory and how it changes with time and can be altered.

Friday, July 15, 2011

Old Memory


Yes, I walked a mile through the snow to school. But, that really wasn't such a big deal (still isn't). I lived in a small town where walking was quite safe and there were sidewalks in front of almost every house.

But, in terms of computers, I started programming when the main input medium was punched cards (I saved a lot of them as shopping lists for a number of years). When I started college, I worked on a "personal" computer that used punched tape along with toggle switches on the main processing unit. The main storage devices for the larger computers (IBM 360 at first, migrating to IBM 370s before I left) were huge disk drives.

However, memory (as mentioned before) can be categorized into temporary working memory and long-term storage memory. Working memory at that time was core memory -- little donut-shaped magnets threaded together with copper wires. The term "core" survives from those days (as in a "core dump"). Most working memory today is located within DIMMs (Dual In-line Memory Modules -- see Wikipedia) or, for older systems, SIMMs.

The memory modules have great advantages over old core -- speed, size, and capacity. They also generate much less heat, which is both an energy savings and a design improvement.

Storage memory is another category which has moved from technology to technology. The first was paper (well, the VERY first was probably clay tablets or chiseled stone). For people, writing or drawing stored the data/information and reading brought it back. It's kind of funny, but efforts have been quite intense over the past twenty years to allow computers to do the same thing that humans have done -- to be able to directly make use of printed text and images.

For early computer systems, it was not possible for the computers to directly use text or drawings. They needed a way to detect a contrast between spaces. This usually meant holes. The holes allowed light, or a mechanical probe, to move through the paper. The areas without holes blocked the light or probe. In this way, the computer could read the "bits" (present/non-present, on/off, 1/0) and save it.

Next came magnetic methods. These were primarily on discs and tapes. The technology of disk drive design has developed enormously over the past 30 years such that a portable disk can hold the same data that a room of luggable, replaceable disks did way back when.

Currently, we are moving closer and closer to making working memory cost-effective to use as storage memory -- which will lead to the next post, on "New and future Memory".

Saturday, April 17, 2010

Where did they go?


Well, I decided to be inspired by one of the blogs I follow -- "The Retirement Bubble" -- and accept the fact that I just am not going to be a daily blogger. So, I headed to my blog and "ZAP", my last post was no longer there. In fact, my stat counter for visits (which I watch not go up very fast) went backwards from 184 to 121.

I'm sure that things of this nature happen to you, also. What can the reason be?

Well, first, of course, there is the jello-like consistency of memory. I could have just imagined that I posted a blog entry last month. People look at memory as the chronicle of the past, but it just doesn't really work that way. If you think about doing something enough times, with enough detail, it will blur the boundary between "memory" and "dream". Given enough elapsed time, that boundary may easily disappear. I know people who have very fixed "world views": you can tell them "yes" to a question and, because they just "knew" you were going to say "no", they will HEAR "no" and remember "no". That is a bit more severe of a split between "memory" and "dream". The bottom line is -- one cannot really rely on memory.

But, do I think that is what happened? No. Of course it could be vanity -- "other people may not remember correctly but I certainly do". No, the main reason I don't think that was the case is because of the stat number. Everyone has certain areas where their memory is well exercised and more reliable. My stepdaughter can remember what someone wore for a given date within the past few months and practically forever about what SHE was wearing. Other things aren't so important to her and she just doesn't remember. For me, it is numbers. I can visualize that "184" in the stats area and I'm pretty sure it isn't a false memory.

There are some aspects to irregular posting that are certainly suspect. For example, some of the formatting aspects of this blog seem to be strange to me -- but, for that, I will just blame my memory and not doing this blog often enough.

Assuming that I'm not crazy and my memory is not totally faulty, where did the blog go? Two possible avenues rise to the surface. One, a system crashed and the disk was restored from an older backup (which, for Google, is a bit scary to think about -- that backups might be unreliable). Two, I didn't do something correctly to commit the blog to permanent status.

Actually, there is a third possibility but paranoia just isn't my thing. "Someone" could have removed it. Since the blog was concerned with firmware and quality control (and Toyota) I guess that there is a little weight to that but I don't really believe it.

At any rate, my last blog vanished and I'll have to think about it a bit to start over the chain. The last blog was on embedded software (firmware) and quality control. The next was going to be about embedded software on cars, unit testing and system testing -- and the difficulties of fully system testing real-time software for interconnecting modules (groups of software). And the next was going to be quality control and testing in general.

However, since the first of the series has vanished I guess I'll think it all out again.

Google, if you're listening -- maybe YOU can find out what happened to my last blog.
