
Artificial Intelligence and Natural Confusion

The AI prognostication biz has fallen on hard times.

Twenty years ago, MIT maverick Marvin Minsky and other scientists were predicting that research into artificial intelligence (AI) would produce computers with the intelligence of human beings within a few years. These brilliant silicon and metal contraptions would then educate themselves at a tremendous rate, soon possessing such colossal intelligence that they would control all business, the military and the economy.

Minsky and his fellow visionaries acknowledged this revolution could be a mixed blessing. On the up side, everyone would be free to work as little or as much as they wanted. The down side suggested these machines might turn malevolent and out-think and outmaneuver us at every turn. "They might decide to keep us as pets," Minsky would say.

Not that Minsky's future seems so appealing, but after reading what researchers believed back then and comparing that to modern life, you have to look around and ask: "What happened? Just where are these intelligent machines that will do everything for me?"

Few AI researchers today boast as loudly as they did in the past. Some are still willing to make fantastic claims about the wonders AI research will soon bring, and most still envision a bright future for AI, eventually. But if you ask what they hope to achieve over the next decade -- with pluck and luck and lots of money -- you will hear modest goals that don't make the evening news.

"A challenge for the next 10 years is to make a program that can read a chapter in a freshman-level college text and then answer the questions at the back of the chapter," says Raj Reddy of Carnegie Mellon University in Pittsburgh, president of the American Association for Artificial Intelligence. Reading a chapter in a grammar school text might be a more reasonable goal, says Hector Levesque of the University of Toronto.

Even the simpler goal requires great advances in understanding how we learn, how we represent knowledge in the brain and how we process everyday language in a way that makes sense. "We're still searching for the right 'atoms' to describe intelligence," admits William Clancey of the Institute for Research on Learning in Palo Alto, Calif.

At this point, AI programs break down easily in the unusual situations we all encounter every day. It is now very difficult to get a computer to judge immediately that if someone is spooning sand away from a sandpile, the pile will eventually be gone, or that a sink will overflow if too much water gushes from the tap.

This difficulty has produced a whole subfield of AI called commonsense physics, which aims to build complex programs enabling computers to pour coffee or navigate through a crowded supermarket -- things a 6-year-old can do with ease. This field is interesting not only because it might give us a clearer idea of how we think, says Kenneth D. Forbus of the University of Illinois at Urbana-Champaign, but "it's also useful for making flexible robots that can walk around ... and not burn your house down when they're making dinner for you."

With current technology, even getting a computer to admit it can't do a task is a challenge. We've spent the good part of a decade just trying to get computers to "back off and throw up their hands in an acceptable way," says Levesque.

In short, we were promised a child prodigy and got an idiot savant.

This is not to disparage the difficult problems being solved and the real gains made in AI research. But the promises the public hears rarely describe the problems being worked on. "We always hear about it when AI succeeds, but we don't hear about AI's failures," says AI critic Hubert Dreyfus of the University of California, Berkeley.

But don't despair. It may turn out the idiot savant is the child who shines most brightly and touches you most directly in the next decade. AI programs called "expert systems" apply expert decision-making rules and a base of knowledge to solve problems like a trained person. In their simplest form they are decision trees that say, "If X, try W. If Y, try Z." More complicated expert systems may use advanced techniques to search through thousands of possible answers to solve a problem in an efficient way.
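In their simplest form, the "If X, try W. If Y, try Z" rules above amount to a small lookup over conditions. A minimal sketch in Python, using made-up diagnostic rules purely for illustration (no real expert system is quoted here), might look like this:

```python
# Minimal sketch of the simplest kind of "expert system" described
# in the text: a decision tree of if-then rules. The rules below are
# hypothetical examples, not drawn from any real system.

def diagnose(symptoms):
    """Walk a tiny rule base: if X, try W; if Y, try Z."""
    if "no power" in symptoms:
        return "check the power cable"
    if "overheating" in symptoms:
        return "clean the fan"
    # Back off gracefully when no rule applies -- the very behavior
    # Levesque says is hard to get right in general.
    return "no rule applies; refer to a human expert"

print(diagnose({"overheating"}))  # -> clean the fan
```

Real expert systems of the period (XCON, for instance) chained thousands of such rules and searched among candidate answers rather than walking one fixed tree, but the underlying idea is the same condition-action pairing.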

Although the ultimate dream of pure AI researchers is to replace an expert human with a computer, no experts need fear for their jobs in the near future. Expert systems function best when designed to augment, rather than replace, people's skills. A better name for expert systems might be "clerk systems," says Howard Shrobe of Symbolics, Inc., in Cambridge, Mass. Such systems let an expert use his or her judgment without bogging down in tedious details and getting bored, Shrobe says.

One well-known, successful expert system is Digital Equipment Corp.'s XCON, which checks the presence and organization of complex components in large computer systems that humans design. The company reports that XCON saves it about $25 million a year. Other expert systems help humans quickly decide whether to authorize credit on American Express cards, or assist in the design of buildings or camera lenses.

A number of problems appear ripe for applying these systems, and researchers say this area is where the public is most likely to feel AI's early effects. Others, like Berkeley's Dreyfus, say expert systems are not really AI because they solve problems so mechanically and show no creativity. "Even a thermostat is a type of expert system," he says.

Some in the field show no qualms about dropping the AI label. "To say we're in the business of artificial intelligence is like Boeing saying it's in the business of building artificial birds," says Harry Reinstein, president of Aion Corp., a computer company in Palo Alto, Calif., in the May HIGH TECHNOLOGY BUSINESS. That, perhaps, best sums up what to expect from AI in the near term. Accept expert systems for what they can do, just as we accept the 747 because it can fly fast and carry a lot of people, instead of criticizing the plane because it doesn't sing or perch well on branches.
COPYRIGHT 1988 Science Service, Inc.

Article Details
Author: Vaughan, Christopher
Publication: Science News
Date: Nov 26, 1988
