A NEW VIEW OF VISION

Two scientists present a startling theory about how the brain processes visual signals.

The brain may operate on the same principles as an FM radio. That is the simple way of describing the conclusions of two federal scientists after eight years of research on the brain's visual system. What they believe they have discovered is not that the brain operates by radio waves, but rather something that neurophysiologists find almost as surprising: The visual system seems to transmit information as a compound signal.

Frequency-modulated (FM) radio and many telecommunications networks send and receive information in a very compact manner, by adding together more than one signal and sending them as a compound wave. When the compound wave reaches its target, it is split back into separate signals. The ability to send compound signals, a technique called multiplexing (SN:2/23/85, p.119), is what allows FM radio stations to broadcast music in stereo.
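
To make the multiplexing idea concrete, here is a minimal sketch in Python. The frequencies, the numpy library and the FFT-based receiver are all illustrative choices, not anything drawn from the researchers' work: two signals are added into one compound wave, then split apart again by frequency at the receiving end.

    # Minimal sketch of multiplexing: two signals share one channel and are
    # recovered at the receiver.  Purely illustrative; the frequencies and the
    # FFT-based separation are assumptions, not the authors' method.
    import numpy as np

    rate = 1000                          # samples per second
    t = np.arange(0, 1, 1 / rate)        # one second of time
    voice = np.sin(2 * np.pi * 5 * t)    # "signal A" at 5 Hz
    music = np.sin(2 * np.pi * 50 * t)   # "signal B" at 50 Hz

    compound = voice + music             # multiplexed: one wave carries both

    # Receiver: split the compound wave back apart by frequency band.
    spectrum = np.fft.rfft(compound)
    freqs = np.fft.rfftfreq(len(t), 1 / rate)

    low = spectrum.copy();  low[freqs > 20] = 0    # keep only the 5 Hz part
    high = spectrum.copy(); high[freqs <= 20] = 0  # keep only the 50 Hz part

    recovered_voice = np.fft.irfft(low)
    recovered_music = np.fft.irfft(high)

    print(np.allclose(recovered_voice, voice, atol=1e-6))  # True
    print(np.allclose(recovered_music, music, atol=1e-6))  # True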

Now, after thousands of tests of how monkeys react to visual patterns, Barry Richmond of the National Institute of Mental Health (NIMH) and Lance Optican of the National Eye Institute have come up with the multiplex filter hypothesis -- a complex, mathematical theory that challenges scientific orthodoxy by proposing that visual nerves transmit information via multiplexed, encoded signals. If their hypothesis proves correct, it could modify Nobel prize-winning work of 20 years' standing and become the cornerstone of a whole new way of thinking about the brain. In addition, it might change the way computer and robotics researchers approach artificial-intelligence problems and model brain processes.

When a picture is flashed in front of the eye, signals are sent from the eye to the brain in short, machine-gun bursts of nerve cell firings. This collection of signals, called the "spike train," passes from nerve cell to nerve cell in the brain. Standard neurophysiological theory holds that the number of spikes in the spike train determines the message the neuron sends: A very intense, staccato burst of nerve firing might be the strong signal generated by a bright visual pattern, for instance.
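
A toy example of that standard view, with spike times invented purely for illustration: a pure rate code reduces each spike train to a single count, so two trains with very different timing read as the same message.

    # Sketch of the standard "rate code" view described above: only the number
    # of spikes matters, so two trains with the same count but different timing
    # are read as the same message.  Spike times (in milliseconds) are invented
    # here purely for illustration.
    train_a = [2, 4, 6, 8, 10, 40, 70, 100]     # spikes bunched early
    train_b = [2, 15, 30, 45, 60, 75, 90, 100]  # spikes spread evenly

    def rate_code(spike_times, window_ms=100):
        """A pure rate code keeps only the spike count per unit time."""
        return len(spike_times) / window_ms      # spikes per millisecond

    print(rate_code(train_a) == rate_code(train_b))  # True: the patterns
                                                     # differ, but the rate
                                                     # code cannot tell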

Richmond was measuring this neural response in monkeys briefly shown a rectangular bar of light when he noticed something interesting. When the bar position changed, the number of spikes sent along one of the visual neurons didn't change, but the pattern of firing -- the spacing of spikes in the spike train -- did change in a very reproducible way. Richmond could take a measurement with the bar at one position and get one pattern, record other patterns at other positions, then return to the first position and get the first spike-train pattern again. "I knew that anything that regular had to be meaningful," Richmond says.

Others in the lab told him to forget it, that only the intensity of the firing and not the pattern is important, and that people had wasted years looking at those patterns. Besides, they said, there were ways to get rid of the pattern differences in the data so that only firing-intensity data would remain. "Fortunately, I couldn't forget it," says Richmond.

Richmond asked biomedical engineer Optican if there might be some systematic approach the two of them could take in analyzing the spike-train patterns. Richmond hoped eventually to learn whether there was a systematic way of finding what neurons were doing on a basic level, and whether that information could be synthesized into a model of what perception is. Optican, who was involved in systematic, mathematical studies of eye movement, "couldn't believe" studies of visual perception were not done the same way. "I said, 'You mean they don't take a systematic approach to vision?'" he recalls.

Optican discovered that the actions of neurons are very well understood in the retina of the eye, but the meanings of individual nerve signals from the eye become less and less well defined as scientists follow those signals deeper and deeper into the brain. At the far end of the line, only vague theories exist about how neurons form visual images and memories, and no one really knows what goes on at that level.

The most widely accepted theory about how visual images are processed in the brain is called the receptive field hypothesis. The hypothesis holds that the retina is divided into many fields of vision, and in each of these fields there are "feature detectors" that trigger neurons when specific features are glimpsed. For instance, when a square shape falls on one area in the retina, an array of feature detectors sensitive to a square shape might fire one neuron; when another shape is flashed on the same region of the retina, another array might fire a different neuron. In principle, an image could be built up out of information about the presence of many such basic shapes in receptive fields all across the retina. In 1981, David Hubel and Torsten Wiesel shared the Nobel Prize in Physiology or Medicine for work done in the 1960s that established this hypothesis.
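
A toy sketch of the receptive-field idea, with the patterns and threshold invented for illustration: a detector covering one small patch of the retina "fires" only when its preferred shape falls on that patch.

    # Toy version of the receptive-field idea described above: a "feature
    # detector" covering one small patch of the retina fires when its preferred
    # shape falls on that patch.  The patch, templates, and threshold are all
    # invented for illustration.
    import numpy as np

    square = np.array([[1, 1, 1],
                       [1, 0, 1],
                       [1, 1, 1]])          # detector tuned to a square outline

    bar = np.array([[0, 1, 0],
                    [0, 1, 0],
                    [0, 1, 0]])             # detector tuned to a vertical bar

    def detector_fires(template, patch, threshold=0.9):
        """Fire if the patch matches the detector's preferred feature."""
        match = np.sum(template * patch) / np.sum(template * template)
        return match >= threshold

    stimulus = square                        # a square falls on this patch
    print(detector_fires(square, stimulus))  # True  -> the "square" neuron fires
    print(detector_fires(bar, stimulus))     # False -> the "bar" neuron is quiet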

In practice, though, Richmond and Optican find the receptive field theory has "all sorts of problems." Richmond says, "In reality, most of the neurons are firing most of the time," so instead of a few neurons telling the brain which features are present, the brain would have to compare all the neurons firing and see which ones are most active. Since there are millions of neurons to compare to each other, Richmond and Optican say the task would be exceedingly difficult, especially if the visual scene is changing quickly. They liken the task to "solving a million simultaneous equations in a million variables." Other scientists have also found parts of the receptive field hypothesis inadequate over the last decade.

So Richmond and Optican decided to start from scratch in testing the patterns seen in the spike train. They decided to approach the problem as engineers, designing a completely new, unbiased testing system. First, they abandoned the square and bar patterns that others thought might be important features in the receptive fields, adopting instead a set of abstract grid patterns called Walsh functions (see illustration).

These patterns are useful in two ways. They have an equal number of black and white squares -- a necessary factor for testing the effects of stimulus brightness. More important, just as any color can be made out of the primary colors, any pattern at all can be made out of the addition of the basic Walsh functions. The most useful quality of Walsh functions is that they can test, as Optican puts it, "all known modes of a system." In other words, like an eye chart that combines tests for color blindness, astigmatism and acuity, Walsh functions can test all the factors that might be significant in perception.
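
The building-block property can be shown in a few lines. The sketch below uses a tiny 2-by-2 case, far smaller than any real stimulus grid and chosen only for illustration: an arbitrary pattern is rebuilt exactly as a weighted sum of the Walsh basis patterns.

    # Sketch of the key property of Walsh functions cited above: any pattern
    # can be written as a weighted sum of them.  A tiny 2x2 case is used here
    # for illustration (real experiments used much finer grids).
    import numpy as np

    h = np.array([[1,  1],
                  [1, -1]])                 # 2-point Walsh/Hadamard building block

    # The four 2x2 Walsh basis patterns are outer products of the rows of h.
    basis = [np.outer(h[i], h[j]) for i in range(2) for j in range(2)]

    target = np.array([[0.8, 0.2],
                       [0.1, 0.9]])          # an arbitrary 2x2 image

    # Because the basis patterns are orthogonal, each weight is a projection.
    weights = [np.sum(target * b) / np.sum(b * b) for b in basis]

    reconstruction = sum(w * b for w, b in zip(weights, basis))
    print(np.allclose(reconstruction, target))   # True: the sum rebuilds the image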

Over four years and thousands of tests, Optican and Richmond recorded the responses of single neurons in the visual cortex of the brain to each Walsh function. Using these data, they proved mathematically two important things: first, that the spike-train patterns weren't just random variations, and second, that the patterns carry information.

Publication of their results in the January 1987 JOURNAL OF NEUROPHYSIOLOGY stirred the interest of neuroscientists. Vernon Mountcastle, a neurophysiologist at Johns Hopkins University in Baltimore, told SCIENCE NEWS there have been many past attempts to replace the receptive field model with another theory, "but this is the first one that is partly successful." Critics countered that Optican and Richmond failed to show that the information contained in the spike-train patterns is passed on to other nerve cells or used in the brain. The patterns might just carry information about the nerve itself, they said, like the squeak of a door that tells you the hinge needs oil but reveals nothing about who is entering the room.

To answer this, Optican and Richmond proposed the multiplex filter model, explaining what might be creating these patterns in the spike train. Their model consists of an array of neural "filters" that gather information from light-sensitive detectors at the retina, change information about the pattern of light into basic spike-train patterns and multiplex them along a neuron. While testing this model, the scientists did two experiments that yielded what Optican calls "truly startling results."

The first experiment tested a basic prediction of the multiplex filter model: If the spike train carries information about the Walsh function by adding together individual signals from the filters in one multiplexed signal, then the multiplexed signals for two Walsh functions, combined, should be the same as the signal produced in response to a pattern that is a combination of those two Walsh functions. This is somewhat like saying that if 1 + 1 = 2, and 3 + 4 = 7, then 2 + 7 should still equal 1 + 1 + 3 + 4. Optican and Richmond performed this experiment and found that the neural response to two added Walsh functions was very close to what they predicted, and so was the response to the pattern that resulted when one Walsh function was subtracted from another. Says Optican, "There's no way this could be an artifact" of the testing method.
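
In code, the prediction amounts to a simple additivity check. The response vectors below are invented numbers standing in for decomposed spike-train signals; only the arithmetic of the test is real.

    # Toy version of the additivity test described above.  The "responses" are
    # invented vectors standing in for decomposed spike-train signals; the
    # point is only the arithmetic of the prediction, not real data.
    import numpy as np

    response_to_A        = np.array([2.0, 0.5, 1.0])   # measured for pattern A
    response_to_B        = np.array([1.0, 1.5, 0.5])   # measured for pattern B
    response_to_A_plus_B = np.array([3.1, 1.9, 1.6])   # measured for A+B shown together

    predicted = response_to_A + response_to_B           # what multiplexing predicts

    error = np.linalg.norm(response_to_A_plus_B - predicted)
    print(predicted)        # [3.  2.  1.5]
    print(round(error, 2))  # 0.17 -- close to the prediction, as in the experiment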

The second striking result came after they used a computer to break down all the signals they had recorded using the Walsh functions into the three simplest possible elementary signals. They then graphed the multiplexed signal data in a three-dimensional grid, with each dimension corresponding to one of the computer-generated elementary signals. When they viewed the data in this way, a pattern leapt out at them. Each Walsh function could be represented as a plane of data points in this three-dimensional space; the position of the individual points in a plane was determined by the brightness of the Walsh pattern and the length of time it was shown during the test. "We looked at this and we were shocked," Optican says. "The planes represented an intrinsic neural code."
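
A rough sketch of that kind of analysis, using synthetic data and principal component analysis as a stand-in (the article does not say which decomposition the researchers actually used): each recorded signal is reduced to three elementary components, and each trial becomes one point in the resulting three-dimensional space.

    # Sketch of the kind of analysis described above: reduce each recorded
    # signal to three elementary components and look at the points in 3-D.
    # The data are synthetic, and principal component analysis is used only as
    # a stand-in; the article does not name the researchers' decomposition.
    import numpy as np

    rng = np.random.default_rng(0)

    # Fake "recorded signals": 200 trials, each summarized by 10 numbers.
    signals = rng.normal(size=(200, 10))

    # Principal components: the three directions capturing the most variation.
    centered = signals - signals.mean(axis=0)
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    elementary = vt[:3]                       # three "elementary signals"

    # Each trial becomes one point in the 3-D space spanned by those components.
    points = centered @ elementary.T          # shape (200, 3)
    print(points.shape)                       # (200, 3)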

What they can now show is that the spike-train patterns act exactly as they should to carry encoded information about the Walsh functions to the brain. The graphed planes are mostly separate from each other, so if there were some mechanism in the brain to separate the multiplexed signal, it would be possible to tell which Walsh function is being seen. Furthermore, the planes are arrayed like cards in a Rolodex, and the points where the planes do meet are exactly where they would be expected: at the points representing tests in which the brightness was low and the pattern was shown very briefly. "This just shows you that if the image is very dim or flashed by very quickly you have a hard time telling what pattern it is," Richmond says.

Richmond and Optican have submitted their findings for publication. They presented them last November at the annual meeting of the Society for Neuroscience and last May at the Association for Research in Vision and Ophthalmology. As a result, they say, other scientists are now more interested in and receptive to their ideas. "A lot of the resistance has definitely gone," says Optican. He suggests the mathematical nature of their early work proved daunting to some neurophysiologists, and the unpublished results are a lot more striking than the earlier, published articles that concentrated on methods. "It just takes time for people to get used to something really new," says Richmond.

Mortimer Mishkin, chief of the NIMH Laboratory of Neuropsychology where Richmond works, calls the research "potentially revolutionary." The multiplex filter hypothesis doesn't rule out neurons transmitting information simply by being more or less active, Mishkin says, but if the hypothesis is valid, scientists could be ignoring most of the information transmitted because they are ignoring the patterns in the spike train.

Richmond and Optican contend there are many advantages to using a multiplex filter system in perception because the system is nonlinear -- that is, there is not a direct, proportional relationship between the stimulus and the response, and a brighter stimulus might just change the pattern of neural activity rather than cause an increase in activity. "There is an axiom in engineering which says that no matter what linear system you build to do a job, there's a nonlinear system which works better," Optican says. For instance, he points out, the best sound obtainable in a stereo system comes from a digital compact disk, which is highly nonlinear because the music is abstracted into a binary code. "Nature seems to know what engineers know," Optican says.

A nonlinear, encoded multiplex filter system would allow much more complex signal processing than the older receptive field model, Optican and Richmond say. One problem the hypothesis might help solve is that of how the brain assigns colors to objects. "Other scientists have found that the brain is parceled up into color areas and form areas, and incoming information is directed into the right areas," Optican says. "But, they have no explanation for how that information gets back together again."

For instance, Richmond explains, if you see a red apple, the information for apple and the information for red are separated and directed to the form and color areas. But how do you know that it is the apple that is red and not the background? The problem is compounded for complex scenes with, say, identical chairs, one red and one green. "You have got to be able to bring that information back together at some point," he says, "but how does the brain know which colors go with which objects?"

With a multiplexed signal, the signal for apple and the signal for red could both carry a common signal, an encoded tag, so that the information could be recombined after being processed in different brain areas.
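
A toy illustration of the tagging idea, with the tags and records invented here: if the form signal and the color signal each carry the same label, matching the labels reunites them after separate processing.

    # Toy illustration of the "encoded tag" idea above: the form signal and the
    # color signal each carry a shared label, so they can be matched up again
    # after being processed in different areas.  The tags and records are invented.
    form_area  = [{"tag": 17, "shape": "apple"},
                  {"tag": 42, "shape": "chair"}]
    color_area = [{"tag": 42, "color": "green"},
                  {"tag": 17, "color": "red"}]

    def recombine(forms, colors):
        """Match form and color records that carry the same tag."""
        by_tag = {c["tag"]: c["color"] for c in colors}
        return {f["shape"]: by_tag[f["tag"]] for f in forms}

    print(recombine(form_area, color_area))   # {'apple': 'red', 'chair': 'green'}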

All of this could be significant for researchers in artificial intelligence and robotics, who often look to biology for clues to how the brain learns complex behaviors. There is currently a great deal of interest in "neural networks," computer networks that scientists believe may be simple models for how groups of neurons in the brain process information. The neural networks have shown some success in recognizing patterns and "learning" simple behaviors (SN: 4/7/87, p.14; 3/5/88, p.149).

Neural networks, however, rely on the supposition that neurons pass simple strong or weak signals to each other. Incorporating multiplexed signals into a neural network would allow far more elaborate computation and manipulation of information, say some computer scientists.
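
The contrast can be sketched directly. Both units below are invented toy examples: a conventional network unit passes on a single strength value, while a hypothetical multiplexed unit would pass on a short pattern of values at once.

    # Sketch of the contrast drawn above.  A conventional network unit passes
    # a single strength value; a hypothetical "multiplexed" unit would pass a
    # short temporal pattern instead.  Both units are invented toy examples.
    import numpy as np

    def conventional_unit(inputs, weights):
        """Standard model: the output is one number, a firing strength."""
        return float(np.tanh(np.dot(inputs, weights)))

    def multiplexed_unit(inputs, weight_bank):
        """Toy alternative: the output is a small pattern, one value per channel."""
        return np.tanh(weight_bank @ inputs)     # e.g. 3 channels carried at once

    x = np.array([0.2, 0.9, 0.4])
    w = np.array([0.5, -0.3, 0.8])
    bank = np.array([[0.5, -0.3, 0.8],
                     [0.1,  0.7, -0.2],
                     [-0.6, 0.4,  0.3]])

    print(conventional_unit(x, w))   # one number leaves the unit
    print(multiplexed_unit(x, bank)) # a 3-element pattern leaves the unit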

Scientists trying to make machines that "see" might also borrow the multiplex idea. So far, though, computer engineers have been slow to use the kind of encoding that is the basis of the multiplex filter model, says Christof Koch, a computer vision researcher at the California Institute of Technology in Pasadena. "The temporal dimension is totally neglected in [neural] networks," he says.

Optican and Richmond are now building their own multiplex filter "eye" as a way to test their ideas. They say it should be able to see and learn to recognize patterns. The machine should also be able to easily recognize three-dimensional objects from different angles -- something computers have a hard time doing now -- and be fooled by optical illusions. "If it doesn't do everything the human eye does, then we're not doing it right," Richmond says.

The two scientists now want to decipher the neural code itself by finding the actual primary signals the brain uses to make a multiplex signal. The planes they have already found in the three-dimensional graphs of their data result from breaking the multiplexed signals down to their simplest possible components, not the actual components used by the brain. Optican compares it to looking at a dictionary in a foreign language: "You can see there are words there, but you don't know what they mean." What they are looking for now, he says, is a Rosetta stone to show how the neural messages relate to visual perceptions.
COPYRIGHT 1988 Science Service, Inc.

Article Details
Author: Christopher Vaughan
Publication: Science News
Date: July 23, 1988