Monday, January 15, 2018

Controlled falling: how to teach a robot to walk


     Every part of growing up is a miracle in its own way. However, if you happen to be an engineer or a computer scientist, you may find yourself looking at your child in a somewhat different way than most parents do. Every act becomes a matter of "how did they do that?" or "I didn't realize they couldn't do that before".
     Learning to walk is a gradual process. The first part is figuring out just how to control those wonderful muscles on purpose. Fortunate babies have a working nervous system and all of the appropriate muscles, but that doesn't mean they pop out into the world ready to run a 100-yard dash. Think of a control room with hundreds, or thousands, of unlabelled switches -- each of which causes a muscle to respond in some way. How do we use an electrical switch box that has lots of unlabelled switches? Try them out and see what they do. (And then, perhaps, label them after we notice their effects.) For a robot, this is a bit simpler as there is a specific control register (or bit within a register) that causes a specific servo-motor to move.
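     To make that switch-flipping idea concrete, here is a minimal sketch in Python. The ServoController class, the channel numbers, and the pulse widths are my own assumptions (a common hobby-servo convention is a pulse of roughly 1.5 ms for the neutral position); the point is simply "nudge one channel, watch what happens, write down a label".

```python
import time

class ServoController:
    """Stand-in for a real servo driver; replace these prints with hardware calls."""
    def set_pulse(self, channel: int, microseconds: int) -> None:
        print(f"channel {channel}: pulse width set to {microseconds} us")

def map_channels(controller: ServoController, num_channels: int) -> dict:
    """Nudge each channel in turn and record which joint the observer saw move."""
    labels = {}
    for channel in range(num_channels):
        controller.set_pulse(channel, 1600)   # small nudge away from neutral
        time.sleep(0.5)
        controller.set_pulse(channel, 1500)   # back to neutral (~1.5 ms pulse)
        labels[channel] = input(f"Which joint moved for channel {channel}? ")
    return labels

if __name__ == "__main__":
    print(map_channels(ServoController(), num_channels=4))
```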
     Now that the child knows which muscle connects to each impulse (and I am not going to pretend that I know just how this really takes effect), she (or he) has to practice. This may entail kicking dad in the face a few times and laughing, or hitting brother in the nose. Strength is developed as the muscles are exercised. And a special sense (not always fully present in autistic children and others) called "proprioception" starts to develop. Proprioception is also sometimes known as "body sense" or "kinesthetic awareness". No matter what you want to call it -- it allows us to know just where our body parts are. Is my finger extended? Is my leg bent? This is important if we want to apply the right muscle at the right time.
     For a robot, this has to be done in different ways (although, once again, I do not claim to know just how body sense works within a human). One dominant method is to keep track of relative position. This works like the cursor on a screen -- when the system is powered on, a specific point is considered the "home" position and the cursor is moved relative to that position. The same can be done with any servo-mechanism between the limits of its movement. However, it must start at a known location, and nothing external can interfere with the movement (which would create a need for recalibration). Other methods are possible but require more active sensors (and, thus, are more expensive).
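     Here is a rough sketch of what that relative-position bookkeeping might look like. The encoder resolution, the joint limits, and the class itself are assumptions for illustration -- the essential ideas are the "home" operation and the fact that positions stop being trustworthy (and recalibration is needed) once something outside the expected range happens.

```python
class RelativeJoint:
    """Tracks a joint's position relative to a 'home' point, like a cursor on a screen."""
    TICKS_PER_DEGREE = 10            # assumed encoder resolution
    MIN_DEG, MAX_DEG = -90.0, 90.0   # assumed mechanical limits of the joint

    def __init__(self):
        self.ticks = 0
        self.homed = False

    def home(self):
        """Declare the current physical position to be 0 degrees ("home")."""
        self.ticks = 0
        self.homed = True

    def apply_encoder_delta(self, delta_ticks: int):
        """Accumulate the movement reported since the last reading."""
        if not self.homed:
            raise RuntimeError("joint must be homed before positions mean anything")
        self.ticks += delta_ticks

    @property
    def angle_degrees(self) -> float:
        angle = self.ticks / self.TICKS_PER_DEGREE
        if not (self.MIN_DEG <= angle <= self.MAX_DEG):
            # Something outside the expected range happened -- time to re-home.
            raise RuntimeError("position no longer trusted; recalibration needed")
        return angle

joint = RelativeJoint()
joint.home()                     # the power-on position becomes the reference point
joint.apply_encoder_delta(150)   # encoder reports 150 ticks of movement
print(joint.angle_degrees)       # 15.0 degrees relative to home
```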
     Two more requirements exist for easy movement. These are the ability to know how hard a muscle is pushing against something (the floor, for example) and how fast it is moving. The human nervous system uses tactile feedback to determine how hard the muscle is straining and the body sense to know how fast it is going. With a robot, a feedback loop using torque measurement may allow the robotic arm to hold an egg -- or to crush it. Speed is determined by the rate of change of movement -- how fast position changes versus an internal clock.
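     A toy version of that feedback loop might look like the sketch below. The torque target, the gain, and the sensor and actuator functions are placeholders I made up; a real gripper would read an actual torque sensor and command an actual motor, but the shape of the loop -- measure, compare to the target, correct a little -- is the same, and speed falls out of position changes measured against the clock.

```python
TARGET_TORQUE = 0.15   # assumed "hold an egg without crushing it" torque, in N*m
GAIN = 0.5             # how aggressively to correct the error each step

def read_torque() -> float:
    """Placeholder for reading the gripper's torque sensor."""
    return 0.10

def command_effort(effort: float) -> None:
    """Placeholder for sending a new effort command to the motor."""
    print(f"commanding effort {effort:.3f}")

def grip_step(current_effort: float) -> float:
    """One pass of a proportional feedback loop: squeeze harder or ease off."""
    error = TARGET_TORQUE - read_torque()
    new_effort = current_effort + GAIN * error
    command_effort(new_effort)
    return new_effort

def estimate_speed(prev_pos: float, curr_pos: float,
                   prev_time: float, curr_time: float) -> float:
    """Speed as the rate of change of position measured against the internal clock."""
    return (curr_pos - prev_pos) / (curr_time - prev_time)
```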
     With these four aspects -- the ability to move, knowing where the parts are, knowledge of the amount of force, and knowledge of speed -- coordinated movement is possible. Early programming of robots tried to imitate the specific movements of human muscles within their ranges of motion. It is possible to do it this way provided that there is complete control of the environment: nothing in the wrong place, no unexpected alterations in the footing or the locations of other relevant objects. Consider a factory line with fully repetitious movements and behaviors (until a part sticks or parts run out or a dog runs into the factory ... or) and one can relatively easily see a robot taking over the factory job. In fact, many of the jobs taken over so far have been of this nature. 100% replacement is not possible because of the many exceptions that can take place and which require more flexibility to handle -- but a considerable reduction in human staff is possible.
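     That kind of pre-scripted factory behavior is the easiest thing in the world to program -- as long as nothing surprising happens. The poses and the move_to() function below are invented for illustration; notice that the sequence has no idea whether a part is actually there.

```python
PICK_POSE  = (0.30, 0.10, 0.05)    # assumed x, y, z in metres, above the parts bin
PLACE_POSE = (0.60, -0.20, 0.05)   # assumed position above the assembly fixture

def move_to(pose):
    """Placeholder for the real motion command."""
    print(f"moving to {pose}")

def one_cycle():
    """Do A, then B, then C -- with no awareness of exceptions."""
    move_to(PICK_POSE)     # A: go to where the part should be
    move_to(PLACE_POSE)    # B: carry it to the fixture
    move_to(PICK_POSE)     # C: return for the next part

for _ in range(3):         # the factory line simply repeats the same cycle
    one_cycle()
```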
     But we were talking about walking, weren't we? Could we use the same methodical programming to teach a robot to walk? Barely possible -- and, once again, only within a highly controlled environment.
     Imagine that child learning to walk. They stretch. They pull. They start becoming caterpillars on the carpet while they both strengthen and practice their muscles. Finally, they pull themselves up. And fall down. And get up. And fall down. Then they are able to stay standing up -- but hanging on. Then they let go. And fall down. And so on.
     This is a type of programming -- but not "linear" programming. This is not "do A, followed by B, and then C". It isn't even exception-handling programming: "do A, followed by B, then D if condition C, else do E". This is neural programming. Sequences are attempted and then, based on results, discarded, modified, or reinforced. A goal has been set, and if enough sequences are tried then, at some point, success will be reached.
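     In code, that "try, keep what works, discard what doesn't" loop can be sketched very simply. Everything here is an assumption for illustration -- in particular, simulate() stands in for actually running a movement sequence on the robot and scoring the result against the kind of goal described in the next paragraph.

```python
import random

def simulate(sequence):
    """Placeholder: run the movement sequence and return how well it met the goal."""
    return -sum((s - 0.5) ** 2 for s in sequence)   # toy scoring, higher is better

def learn_by_trial(sequence_length=8, attempts=1000):
    best = [random.random() for _ in range(sequence_length)]
    best_score = simulate(best)
    for _ in range(attempts):
        # Modify the current best sequence a little and try again.
        candidate = [s + random.gauss(0, 0.1) for s in best]
        score = simulate(candidate)
        if score > best_score:      # keep the "winning" attempt...
            best, best_score = candidate, score
        # ...and quietly discard the "losing" one.
    return best, best_score

print(learn_by_trial())
```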
     Note that a new item has now been added -- a goal. In order to have a goal there must be a way to determine whether you have reached that goal. For a child, that is emulating the other people who are walking around them. For a robot, it is necessary to have goals that can be specifically quantified -- expressed as numbers -- against precise targets. For walking, that might be attaining a certain height, directional velocity, and stability. Note that balance, for a human, comes from feedback from the inner ear. Tools, such as gyroscopes, are available both to help maintain stability and to recognize its loss. Laser positioning devices can be used to indicate height. Global Positioning System (GPS) information can be used for direction over large-scale movement, and a combination of position and speed tracking can be used for shorter-distance velocity calculations. I am sure that other tools also exist.
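     As a hedged example, such a quantified walking goal might be boiled down to a single score like the one below. The inputs match the tools mentioned above (height from a laser device, velocity from position tracking, tilt from a gyroscope), but the target values and weights are numbers I invented for illustration.

```python
def walking_score(height_m: float, forward_velocity_ms: float, tilt_degrees: float) -> float:
    """Higher is better: stay tall, move forward, don't tip over."""
    height_term   = -abs(height_m - 0.9)        # assumed target torso height of 0.9 m
    velocity_term = forward_velocity_ms          # faster forward progress is better
    balance_term  = -abs(tilt_degrees) / 10.0    # penalize leaning away from upright
    return 2.0 * height_term + 1.0 * velocity_term + 1.5 * balance_term

# Example: upright, moving forward slowly, nearly level.
print(walking_score(height_m=0.88, forward_velocity_ms=0.3, tilt_degrees=2.0))
```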
     A child sees others walk -- and those others encourage (and protect and guide) them -- and they go through a seemingly never-ending process of trial and error. They train parts of their brain and nervous system such that the thought "walk" triggers a complex series of changes, movements, and activities. I shudder to think of trying to program that linearly.
     A robot can learn in the same manner, but it has to have ALL of the correct tools -- servo-motors, a proper range of motion, torque feedback, automatic recognition of movement or stored relative positioning (with its likelihood of losing calibration), and so forth. As long as it has a goal against which it can match its efforts, it can keep trying combinations until it succeeds. However, there is a "secondary" aspect of this type of learning -- keeping the "winning" processes and discarding the "losing" processes. Humans do this (in some way that I cannot explain) but robots have to do it also. In many ways this is even more difficult because it is unlikely that the next attempt will be EXACTLY like the one in which they previously "won".
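     One common way to cope with the fact that no two attempts are exactly alike is to score a candidate over several trials and keep it only if it wins on average. Again, noisy_trial() is a stand-in for running the robot in the messy real world, and the numbers are illustrative.

```python
import random

def noisy_trial(sequence):
    """Placeholder: the same sequence gives a slightly different outcome every time."""
    return -sum((s - 0.5) ** 2 for s in sequence) + random.gauss(0, 0.05)

def robust_score(sequence, trials=5):
    """Average performance over several trials, so one lucky run doesn't decide."""
    return sum(noisy_trial(sequence) for _ in range(trials)) / trials

best = [random.random() for _ in range(8)]
candidate = [s + random.gauss(0, 0.1) for s in best]
if robust_score(candidate) > robust_score(best):
    best = candidate    # remember the winner only if it is better on average
```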
     As a note, other types of activities can be approached in the same manner -- trial and error measured against a goal. But the less physical the activity, the more difficult it is to define the goal.
     #robotics, #AI, #NeuralProgramming
