
Biomimetics and Perception

The same could be said for modern robotics. We haven’t quite figured out how to construct a fully deformable platform akin to the T-1000 liquid metal robot in Terminator 2. Certainly, there are materials science and computer science issues involved. However, I think that the greatest challenges to creating a robot that could pass for a human are the perceptual issues, which are less obvious than, say, how to mimic pseudopod generation in the amoeba.

Why build something that could pass for a human, or a robot with one or two human traits? Consider enabling a search and rescue robot to identify and localize the sounds of victims trapped under the rubble of a collapsed building. Seems straightforward enough, right? Simply mimic the physiology and, to some extent, the anatomy of the human ear. There’s clearly practical value in such a robot, if it could be constructed.

Let’s start with the basic sound localization specifications of the human ear. It’s well known that the human ear is sensitive to the relative amplitude and phase of acoustic vibrations. Furthermore, the directional characteristics of our external ears modify the vibrations reaching each ear — especially audio frequencies less than about 6 kHz.

Another factor that contributes to our ability to localize sounds is the equivalent of sensor fusion from multiple sense organs. Auditory cues are combined with information from the position and movement sense organs in the ears (the vestibular system), from the eyes, and from the motion and position sense organs in the muscles, tendons, and joints. To get a sense for this sensor fusion in action, consider the automatic reflex action of rotating the head from side to side to better localize the source of a sound. The resulting variation in the relative amplitude and phase relationships of signals reaching the ears provides the auditory system with additional data points that are used to more accurately localize the signal source.

It’s easy enough to mimic these capabilities. I’ve done so with a microcontroller, a few directional microphones, and a few additional sensors. While the system is useful in localizing sounds, the results don’t match those of a human. Why? It turns out that several properties of the human auditory system defy explanation on a strictly physiological or anatomical basis, but are instead best understood in terms of human perception of sound or psychoacoustics.
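The article doesn’t give the details of that microcontroller build, but the core of any two-microphone localizer is the same amplitude-and-phase trick the ears use. Here is a minimal sketch, assuming a simple two-microphone array and a far-field source, that estimates bearing from the inter-microphone time difference via cross-correlation (the function name, spacing, and sample rate are illustrative, not from the original):

```python
import numpy as np

SPEED_OF_SOUND = 343.0  # m/s in air at roughly 20 °C

def estimate_bearing(left, right, fs, mic_spacing):
    """Estimate a source bearing from two microphone signals.

    left, right : 1-D sample arrays from the two microphones
    fs          : sample rate in Hz
    mic_spacing : distance between the microphones in meters
    Returns the bearing in degrees (0 = straight ahead,
    positive = toward the side whose microphone hears the sound first
    when it appears as the first argument).
    """
    # Cross-correlate to find the lag (in samples) that best aligns
    # the two signals; that lag is the time-difference-of-arrival.
    corr = np.correlate(left, right, mode="full")
    lag = np.argmax(corr) - (len(right) - 1)
    tdoa = lag / fs

    # Clamp to the physically possible range, then convert the time
    # difference to an angle with the far-field approximation:
    # tdoa = (mic_spacing / c) * sin(theta)
    max_tdoa = mic_spacing / SPEED_OF_SOUND
    tdoa = max(-max_tdoa, min(max_tdoa, tdoa))
    return np.degrees(np.arcsin(tdoa / max_tdoa))

# Synthetic check: delay one channel to simulate an off-axis source.
fs = 48_000
t = np.arange(0, 0.05, 1 / fs)
tone = np.sin(2 * np.pi * 1000 * t)
delay = 5  # samples; the right mic hears the source later
left = tone
right = np.concatenate([np.zeros(delay), tone[:-delay]])
bearing = estimate_bearing(left, right, fs, mic_spacing=0.15)
```

Since the right channel lags, the estimate comes out negative (toward the left microphone). A real build faces the same limits the text describes: a single snapshot like this gives one bearing, and only "head rotation" style re-measurement resolves front–back ambiguity.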

The psychoacoustic property most applicable to localization is perceived intensity. The perceived intensity of a sound is a function of the audio signal’s duration. Sounds of equal amplitude that last longer than about 250 ms are perceived as having equal intensity; shorter duration sounds of the same amplitude are perceived as less intense. Quantitatively, a decade increase in duration, say, from 50 ms to 500 ms, is equivalent to roughly a 10 dB increase in intensity, with the effect saturating once the duration exceeds the 250 ms threshold. There are other psychoacoustic properties that don’t directly affect our ability to localize sounds. For example, through conditioning, some sounds are pleasant and others are annoying.
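This temporal-integration effect is simple to model once you know it exists. The following is a crude piecewise sketch, not a calibrated psychoacoustic model: below an assumed ~250 ms threshold, perceived level falls by about 10 dB per decade of decreasing duration, and above it the physical level is heard as-is:

```python
import math

INTEGRATION_TIME_MS = 250.0  # approximate temporal-integration threshold

def perceived_level_db(spl_db, duration_ms):
    """Rough model of duration's effect on perceived intensity.

    Sounds longer than ~250 ms are heard at their physical level;
    shorter sounds are heard as quieter, by about 10 dB for each
    decade the duration falls below the threshold.
    """
    if duration_ms >= INTEGRATION_TIME_MS:
        return spl_db
    return spl_db + 10.0 * math.log10(duration_ms / INTEGRATION_TIME_MS)

# Under this model, a 25 ms click at 70 dB SPL is perceived roughly
# like a sustained 60 dB tone, while a 500 ms tone is heard at its
# physical level.
```

A localizer that compares loudness between microphones could apply a correction like this so that brief transients aren’t misjudged relative to sustained sounds.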

So, what’s the practical take-away from these minutiae about human hearing? The point is that you can’t limit your mimicry to the system you’re studying. If your goal is to duplicate human capabilities — whether in vision, hearing, touch, or smell — don’t forget to include the perceptual components of the system you’re attempting to mimic. It’s easy enough to model the effects of sound duration on perceived intensity — once you know that they exist. SV

 


Posted by Michael Kaudze on 10/24 at 09:41 AM

