Behavior Change — A Hallmark of Intelligence
By Bryan Bergeron
Intelligence — broadly defined — is the ability to adapt. In nature, the simplest life forms adapt through swapping genetic material during reproduction and through genetic mutation. For example, if one bacterium out of 5,000,000 carries a mutation that enables it to detect and avoid a poisonous chemical, then the descendants of that bacterium may possess the same ability. Mutations allow for changes in behavior at the population level.
Humans and other higher life forms also adapt through swapping of genetic material and mutation, but a significant change in instinctive behavior through these mechanisms requires tens to thousands of years. In the short term, mice and men adapt through learning. As a result, no two humans – or mice – will respond in exactly the same way to changes in the environment. In the face of danger, one might stay and fight, one might run away, and one might freeze with fear. The best course of action depends on the circumstances.
A simple carpet crawler that is programmed to respond consistently to a given stimulus is hardly considered intelligent. If your rover uses the typical subsumption architecture, in which overall behavior is built from layers of fixed stimulus-response code, it will behave predictably. Fixed responses can be a good thing for a robotic welding machine that must replicate the same weld hundreds of times a day.
However, if the goal is to enable a carpet crawler to escape a maze in the shortest time possible, the ability to adapt is essential. Machines with intelligence can have practical uses, as well. Consider a toaster that learns to associate, say, your fingerprint or even your face with your toast preference. The point of this editorial is to encourage you to start experimenting with various forms of learning. If you’re new to machine intelligence, I suggest you start simple by adding some randomness to your robot’s behavior.
For example, when the bumper sensor on your carpet crawler is activated, does your bot always reverse to the left? If so, why not make its behavior a little more interesting by having it back up to the right, to the left, or straight back, based on a randomly generated number? Next, try experimenting with different weights – say, 60% of the time, have your crawler move back to the right; 20% of the time, have it move straight back; and 20% of the time, have it move back to the left. Once you have a feel for basic behavior combinations, start investigating the myriad approaches to machine intelligence. Again, before you start coding neural networks on field-programmable gate arrays (FPGAs), go for simple. Instead of manually assigning weights to random behaviors, enable your bot to learn through experience.
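The weighted random selection described above takes only a few lines. Here is a minimal Python sketch; the behavior names and the choose_reversal function are hypothetical stand-ins for whatever motor routines your own bot uses, and the 60/20/20 split matches the example weights in the text.

```python
import random

# Hypothetical reversal behaviors; swap in your own motor routines.
BEHAVIORS = ["reverse_right", "reverse_straight", "reverse_left"]
# Hand-tuned weights from the text: 60% right, 20% straight, 20% left.
WEIGHTS = [0.6, 0.2, 0.2]

def choose_reversal():
    """Pick one reversal behavior according to the fixed weights."""
    # random.choices() does the weighted draw; it returns a list, so take [0].
    return random.choices(BEHAVIORS, weights=WEIGHTS, k=1)[0]
```

Each time the front bumper fires, your stimulus-response handler would call choose_reversal() and dispatch to the matching motor routine instead of always backing left.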
For example, let’s say your carpet crawler, upon colliding with a barrier, reverses to the left, to the right, or straight back at random. Statistically, your bot should exhibit each of these three behaviors about one-third of the time. Now, let’s create a rule that says if, during the execution of one of these behaviors, a rear bumper sensor is activated, then that behavior should become less likely. In other words, your program should automatically reduce the weight placed on that reversing behavior because it simply makes matters worse.
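That rule amounts to cutting a behavior's weight whenever it triggers the rear bumper. The Python sketch below shows one simple way to do it; the class name, the penalty multiplier, and the small floor weight are all illustrative choices, not a prescribed algorithm. The floor keeps every behavior at a small nonzero weight so the bot can still retry a direction that was only bad in one spot.

```python
import random

class ReversalLearner:
    """Weighted reversal chooser that penalizes behaviors that backfire."""

    def __init__(self, penalty=0.5, floor=0.05):
        # Start with equal weights, i.e., each behavior one-third of the time.
        self.weights = {"left": 1.0, "right": 1.0, "straight": 1.0}
        self.penalty = penalty  # multiplier applied after a bad outcome
        self.floor = floor      # minimum weight, so no behavior vanishes

    def choose(self):
        """Draw a reversal direction in proportion to the current weights."""
        behaviors = list(self.weights)
        return random.choices(
            behaviors, weights=[self.weights[b] for b in behaviors], k=1
        )[0]

    def report_rear_bump(self, behavior):
        """The chosen reversal hit something behind us: make it less likely."""
        self.weights[behavior] = max(
            self.weights[behavior] * self.penalty, self.floor
        )
```

In the main loop, you would call choose() on a front-bumper hit, run the chosen motor routine, and call report_rear_bump() with that same behavior if the rear bumper fires during the maneuver.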
Whether it’s determining how to reverse course or when to accelerate, the best behavior depends on the circumstances and the environment. I suggest that the goal isn’t to simply make behavior change arbitrarily, but to enable your bot to adapt to its environment; that is, to learn. As you’ll discover, fully exploring adaptive behavior may require an investment in a few more sensors and more processing power, but the journey is well worth it. SV
Posted by Michael Kaudze on 01/13 at 03:31 PM