
Neural-net neighbors learn from each other.

In their quest to understand how the brain works, Canadian computer scientists have developed a neural network that teaches itself to judge depth and recognize objects.

Neural networks are computer models that mimic information processing done by groups of brain cells. Since the mid-1980s, scientists have used a technique called back propagation to train neural networks to recognize visual patterns or everyday speech. This approach requires that the neural network have an external "teacher" that knows the right answer.

Suzanna Becker and Geoffrey E. Hinton of the Canadian Institute for Advanced Research at the University of Toronto have now created a network whose elements depend on each other for the right answer. In the Jan. 9 NATURE, they describe their mathematical procedure for self-taught neural networks.

The algorithm they use represents one of several approaches in the emerging field of "unsupervised learning" that could lead to smarter neural networks. "It can make training [these networks] easier and less expensive if you can do at least part of the training in an unsupervised way," says Ralph Linsker, a computational neuroscientist with the IBM Thomas J. Watson Research Center in Yorktown Heights, N.Y.

With back propagation, a neural network typically learns to recognize images or words by comparing its answer with an answer programmed into the computer. Then the network changes the way in which it processes its data until it finally gets the same result as its teacher.
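The teacher-driven loop can be sketched in a few lines. This is a minimal one-layer illustration, not the networks described in the article: an external "teacher" supplies the correct answer for every input, and the network nudges its weights until its answers match. (Full back propagation extends this same error-correction rule through multiple layers.)

```python
import numpy as np

rng = np.random.default_rng(1)

# 200 random inputs with 3 features each; the teacher's answers are
# programmed in ahead of time (here, a simple threshold rule).
x = rng.normal(size=(200, 3))
teacher = (x @ np.array([1.5, -2.0, 0.5]) > 0).astype(float)

w = np.zeros(3)
for _ in range(500):
    out = 1.0 / (1.0 + np.exp(-(x @ w)))        # network's current answer
    w -= 0.1 * x.T @ (out - teacher) / len(x)   # shift weights toward the teacher

out = 1.0 / (1.0 + np.exp(-(x @ w)))
accuracy = np.mean((out > 0.5) == (teacher == 1))
print(f"agreement with teacher: {accuracy:.2f}")
```

Every weight update depends on knowing the right answer in advance, which is exactly the requirement the Toronto work set out to relax.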

"But a lot of the learning people do doesn't work like that," Hinton says.

So the Toronto team based its algorithm on the assumption that when neighboring elements sense the same thing, they should come up with the same answer about what that thing is. The researchers set up their network so that elements near one another see adjacent, but not overlapping, parts of an image.

At first, neighboring elements get very different answers, but with each new attempt they change the way they process incoming information, until finally their answers match up. "Rather than have an external teacher, you can think of the network as a little community of modules in which the modules learn from each other," Hinton explains.
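The idea can be sketched with two toy modules that see noisy views of the same underlying signal and are trained only to agree with each other. This is an illustrative sketch, not Becker and Hinton's implementation: the objective below, 0.5 * log(Var(a+b) / Var(a-b)), is their information-theoretic measure of agreement for Gaussian signals, but the data, the linear modules, and the finite-difference gradient ascent are all simplifications made up for this example.

```python
import numpy as np

rng = np.random.default_rng(0)

# A shared "depth" signal; each module sees a different noisy,
# mixed-up 4-dimensional view of almost the same quantity.
n, dim = 500, 4
depth = rng.normal(size=n)
x1 = depth[:, None] * rng.normal(size=dim) + 0.1 * rng.normal(size=(n, dim))
x2 = depth[:, None] * rng.normal(size=dim) + 0.1 * rng.normal(size=(n, dim))

def agreement(w1, w2):
    """Large when the two modules' outputs agree without
    collapsing to a constant (Becker & Hinton's objective)."""
    a, b = x1 @ w1, x2 @ w2
    return 0.5 * np.log(np.var(a + b) / (np.var(a - b) + 1e-8))

# Each module adjusts its own weights to increase agreement
# (gradient ascent via finite differences, for simplicity).
w1, w2 = rng.normal(size=dim), rng.normal(size=dim)
eps, lr = 1e-4, 0.2
for _ in range(300):
    for w in (w1, w2):
        grad = np.zeros(dim)
        for i in range(dim):
            w[i] += eps; hi = agreement(w1, w2)
            w[i] -= 2 * eps; lo = agreement(w1, w2)
            w[i] += eps
            grad[i] = (hi - lo) / (2 * eps)
        w += lr * grad

a, b = x1 @ w1, x2 @ w2
corr = np.corrcoef(a, b)[0, 1]
print(f"correlation between module outputs: {corr:.3f}")
```

No teacher ever tells either module what the depth is; the only training signal is the other module's answer, yet both end up extracting the shared underlying quantity.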

Becker and Hinton demonstrated this technique with a computer program that simulates a neural network involved in vision. They programmed the network to "see" a stereo image and to judge the depth of dots on a curved surface. The network consisted of 10 modules, each representing a group of brain cells.

During a simulation, a module acts as if it has received information about dot location from a small patch of nerve cells in each eye. Neighboring patches should perceive the dots as being at almost the same depth; therefore, the corresponding modules should come up with the same answer about how far away the dots are. Becker and Hinton provided the network with 1,000 examples from which it learned to judge depth.

In addition, Hinton and graduate student Richard S. Zemel have used self-teaching to train the neural-network modules to predict an object's size, position and orientation after "seeing" just one end of the object. In these simulations, two modules see opposite ends of the object and then compare and modify their predictions until they can recognize the object no matter what its size or location in space.

Self-teaching takes a long time, sometimes longer than learning through back propagation. But the Toronto team hopes to use the new approach for training complex neural networks. By treating the many processing layers as a hierarchy, "the system can learn a layer at a time," Hinton says.
COPYRIGHT 1992 Science Service, Inc.

Article Details
Title Annotation: computer model of brain cells
Author: Pennisi, Elizabeth
Publication: Science News
Date: Jan 11, 1992

