Tuesday, June 26, 2007

How do we recognize what we are seeing?


We don't just see people and things, we recognize them for who and what they are. Most of us take this ability for granted, but it is a very complicated process - and one not yet fully understood by scientists.

One theory - called the feature-detection model of vision - suggests that individual cells along the visual pathway are pre-programmed to respond to certain shapes. Cells programmed to recognize different types of curved lines, for instance, might work together to recognize a face.
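The idea is easier to see in a toy computation. The sketch below (illustrative only, not a model of actual neurons) treats each "cell" as a small filter that responds strongly when its preferred shape appears somewhere in an image; the image and filter values are made up for the example.

```python
# Minimal sketch of feature detection: each "cell" is a small filter
# that is slid over the image and reports its strongest response.
# All values here are illustrative, not biological.
import numpy as np

def cell_response(image, kernel):
    """Slide the kernel over the image and return the strongest match."""
    kh, kw = kernel.shape
    h, w = image.shape
    best = 0.0
    for y in range(h - kh + 1):
        for x in range(w - kw + 1):
            patch = image[y:y + kh, x:x + kw]
            best = max(best, float(np.sum(patch * kernel)))
    return best

# Toy 6x6 image: a bright vertical bar on a dark background.
image = np.zeros((6, 6))
image[:, 2] = 1.0

vertical_edge   = np.array([[-1, 1], [-1, 1]], dtype=float)    # "cell" tuned to vertical contrast
horizontal_edge = np.array([[-1, -1], [1, 1]], dtype=float)    # "cell" tuned to horizontal contrast

print("vertical-edge cell:  ", cell_response(image, vertical_edge))    # strong response
print("horizontal-edge cell:", cell_response(image, horizontal_edge))  # weak response
```

In the feature-detection picture, many such shape-tuned responses would then be combined by later stages to recognize something as complex as a face.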

Feature detection led to speculation (most of it humorous) about the existence of a "grandmother cell," a single cell in your brain imprinted with the image of your grandmother. But the grandmother-cell model implied the brain would need to assign a different cell to everything seen in a lifetime, and this data-storage task would be too great even for the adept human brain. Critics of the feature-detection model have also pointed out that the process of assembling all those shapes into an image would be clumsy and take too long.

In recent years, the feature-detection model has been modified by the spatial-frequency theory, which sees images as compositions created by the brain out of variations of light and dark. In this model, the brain takes the differing spatial frequencies of light and dark reflected by an object and translates them into its own "computer code." The low frequencies give the brain a kind of fuzzy outline of the image, while the high frequencies fill in the details.
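That split into a fuzzy outline plus fine detail can be mimicked with a simple blur. The sketch below (again illustrative, with a made-up one-dimensional brightness profile) uses a moving-average blur as a stand-in low-pass filter: the blurred copy keeps the low frequencies, and whatever the blur removes is the high-frequency detail.

```python
# Minimal sketch of the spatial-frequency idea: a blurred copy of an
# image keeps only the low frequencies (the "fuzzy outline"), and
# subtracting it leaves the high frequencies (the fine details).
# The 1-D "image" values below are made up for illustration.
import numpy as np

def low_pass(signal, width=5):
    """Simple moving-average blur: keeps slow variations, smooths out detail."""
    kernel = np.ones(width) / width
    return np.convolve(signal, kernel, mode="same")

# Toy 1-D brightness profile: a broad bright region with fine texture on top.
x = np.linspace(0, 1, 64)
broad_shape = np.exp(-((x - 0.5) ** 2) / 0.02)   # slow variation (outline)
fine_detail = 0.1 * np.sin(2 * np.pi * 20 * x)   # fast variation (texture)
image = broad_shape + fine_detail

low  = low_pass(image)   # fuzzy outline: low spatial frequencies
high = image - low       # details: what the blur removed

# Adding the two bands back together recovers the original profile.
print(np.allclose(low + high, image))  # True
```

Nothing is lost in the split: outline plus detail adds back up to the original, which is what lets the two frequency bands together stand in for the whole image.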
