
Artificial intelligence: the future of the computer

To me, artificial intelligence (AI) is the enterprise of trying to make machines smart. It is a strange field, for it is defined by objectives rather than by a set of techniques.

AI is now populated by two kinds of people: those who want to make machines more useful, and those who want to understand intelligence. This means that at one end of the spectrum, you will find people who want to make a lot of money. At the other end are people whom some may consider lunatic-fringe psychologists and computer scientists.

The thing that brings all these people together is an attempt to develop machines that have human-like intelligence. Because there is an almost religious zeal in this quest, AI will steal any kind of technology that will serve the objectives of understanding intelligence better and making machines smarter.

Figure 1 illustrates the kinds of things we do in AI, as well as the spectrum of interest in a typical AI laboratory at a major university or research center. Notice that robotics--manipulation and vision and sensing--is an important part of the field, but it is not by any means all of it.

We are involved in things that have to do with automatic analysis and design--things that some people call expert (advisory) systems. We are also involved, however, in natural language, trying to make machines learn, speak, and do almost anything else you can think of that entails mental faculties--that is, replicating what our brains do for us.

The history of AI goes back to the 1840s, when Babbage was fooling with the first computer. People were at that time already speculating about whether that machine could be intelligent in any meaningful way.

By 1965, all the basic ideas in today's startup companies were already in place. Certainly AI's beginning was occasioned by the fact that by 1960, programs were doing impressive things.

One peculiar characteristic of AI is the fact that intelligence seems to disappear as soon as you understand how it works. That is true even when you think about other people. Sometimes they do things that at first seem very smart, but turn out to seem not so smart once you understand how they reason step by step.

One of the few requisites for thinking that something is intelligent is that you do not really understand it very well. That is one of the factors that tend to create a cloud over our field. We AI scientists do not help; once AI solves a complex problem, we tend to indict the solution as trivial.

At this point, it would be helpful to clarify the state of the art in AI and explain what is happening in the field. Figure 2 shows a model of what happens in a scientific or technological field. This is the Stiction Theory of the relationship between what you know and applications of that knowledge.

This may be a good model for many technical fields, but I claim it is a bad model for AI. The trouble with the model in Figure 2 is that it does not adequately capture what I call the Staircase Phenomenon, one of the most important concepts for you to learn. Figure 3 shows a model for the Staircase Phenomenon.

This model pertains to what I see as the relationship between what we know in AI and the number of applications we see generated by what we know. Right now, the field as a whole is coming up over the first riser in the staircase.

For 10 to 15 years, nobody felt that anything in AI could possibly have commercial value. Then a few things were done that show AI can have commercial value, and suddenly everybody is getting very excited about cloning that kind of idea.

This in some sense represents the VisiCalc phenomenon. A program of this nature could have been created a long time ago, but nobody did it until someone showed that it could be done. Now that it has been accomplished, everybody is copying the VisiCalc idea.

So, in some parts of AI, we are coming over the first riser, and we do not know how high it is going to go. But AI is a broad field, and if you review the various things in which AI is involved, you see that the various parts of AI are in different places on the staircase.

Expert systems

Rule-based expert systems are the cause of most of the hype about AI these days. The reason for this is that these rule-based systems are unbelievably simple.

What is an expert system, and what is a rule-based expert system?

An expert system is simply one that behaves like a human expert. The system knows something very special, at times something that a lot of people are willing to pay for.

The rule-based expert system is based on a particularly simple kind of technology called condition-action rules. And what is a condition-action rule?

If you ever bagged groceries when you were a kid, you were behaving as though you were a rule-based expert system. In fact, I introduce the idea of rule-based expert systems to my class at MIT through a computer program called Bagger. This program acts like a person in a grocery store, doing the right thing with your groceries.

What do you do when bagging groceries? The most important thing is to put the heavy objects at the bottom of the bag. For example, put bottles beneath all the other heavy things in the bag. You put the light things at the top of the bag; for instance, put the bread there because it is crushable.

If you are an entrepreneur in spirit, and you see someone come along with a lot of potato chips and nothing to drink, you might even say, "What about buying some Pepsi, because this salty stuff is going to make you thirsty."

So as you see, everything about bagging can in some sense be cast in the form of simple rules: IF this, this, that, and the other thing are true, THEN do something in particular. If you were to use a kind of shorthand representation in English, you might use a collection of rules like this:

Rule B1--IF there are potato chips, and there is no soft drink, THEN suggest Pepsi.

Rule B2--IF there is a large item, and there is a bag, and the large item is a bottle, THEN insert the bottle in the bag.

The first rule is the entrepreneurial rule that says, get this guy to buy something to drink. The second rule belongs to a synthesis-oriented, rule-based expert system, because in some abstract sense it is designing something. Very few of today's expert systems are oriented toward design; almost all are oriented toward analysis.
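Rules B1 and B2 can be sketched as a tiny condition-action interpreter. The function names, the state representation, and the loop below are illustrative choices, not the workings of the actual Bagger program:

```python
# Minimal sketch of condition-action rules B1 and B2.
# State representation and names are illustrative, not Bagger's.

def rule_b1(state):
    """IF potato chips and no soft drink, THEN suggest Pepsi (once)."""
    if ("potato chips" in state["items"]
            and "soft drink" not in state["items"]
            and "Pepsi" not in state["suggestions"]):
        state["suggestions"].append("Pepsi")
        return True
    return False

def rule_b2(state):
    """IF a large bottle and a bag, THEN insert the bottle in the bag."""
    if "large bottle" in state["items"] and state["bags"]:
        state["items"].remove("large bottle")
        state["bags"][0].append("large bottle")
        return True
    return False

def run(rules, state):
    # Keep firing rules until no rule's condition holds.
    while any(rule(state) for rule in rules):
        pass
    return state

state = run([rule_b1, rule_b2],
            {"items": ["potato chips", "large bottle"],
             "bags": [[]], "suggestions": []})
```

Notice that rule B2 changes the very state it tests, moving the bottle into a bag; that is what makes it synthesis-oriented rather than purely analytical.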

Figure 4 shows a toy version of an analysis program. The objective is to determine the genus and species of the animal.

You can imagine a simple set of rules that would look at various properties of this animal. Here are some possible rules:

IF it is a mammal, and it chews cud, THEN it is an ungulate.

IF it gives milk, THEN it is a mammal.

IF it has hair, and it is a mammal . . . and so on.

This is a toy system, yet it differs from systems of enormous value only in the number of rules involved. The toy system may have 15 rules; a utilitarian system may have several thousand.
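The analysis style of rule application can be sketched as forward chaining: keep applying rules whose conditions are satisfied until no new conclusion appears. Representing facts as plain strings is an illustrative choice, not the actual toy system:

```python
# A forward-chaining sketch of the toy animal identifier. The rules are
# (conditions, conclusion) pairs built from the fragments above.

RULES = [
    ({"it gives milk"}, "it is a mammal"),
    ({"it is a mammal", "it chews cud"}, "it is an ungulate"),
]

def forward_chain(facts, rules):
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for conditions, conclusion in rules:
            if conditions <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return facts

facts = forward_chain({"it gives milk", "it chews cud"}, RULES)
# Both "it is a mammal" and "it is an ungulate" are deduced.
```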

One of the most successful commercial expert systems is XCON, sometimes called R1, a system that configures computers. Designed by programmers at Digital Equipment Corp., Maynard, MA, XCON tackles the problem of taking an order for a DEC VAX computer and deciding which cables to include, where to put the power supply, how to load memory into cabinets, and other tasks.

As you can see, XCON is a lot like Bagger. Instead of bags and groceries, you are dealing with cabinets and memory boards, but XCON embodies exactly the same idea and exactly the same technology as Bagger. The big difference is that XCON has between 2500 and 3000 rules, instead of just a few for bagging groceries.

Another example--one of the most famous because it is the oldest success--is the MYCIN program for diagnosing certain kinds of infectious bacterial diseases. Using this program, a doctor can look at a group of characteristics and decide what disease the patient has.

First, you look at the characteristics:

Age? 55

Positive cultures? Yes

What type of infection? Primary bacteremia

When did the symptoms appear? 7/5/80

Rod or coccus or neither? Rod

Gram stain? Negative

This is like the animal identification idea. In fact, it is the same thing, but these are diseases instead of animals, and instead of 15 rules, MYCIN has a few hundred. Example:

IF the infection type is primary bacteremia, and the suspected entry point is the gastrointestinal tract, and the site of the culture is one of the sterile sites, THEN there is evidence that the organism is bacteroides.

Naturally the question arises, are these really rule-based systems? Yes, they all fit the mold; they are all little collections of IF-THEN rules. MYCIN and XCON may deal with matters that look like Greek to you, but they do their job when enough rules are put together in combination.

At the same time, though, these so-called expert systems are idiot savants. They are savants because they can do some kinds of things beautifully. They are idiots because they do not do any of the following:

* Degrade gracefully

* Take different points of view

* Break rules

* Build models

* Learn from experience

In expert system technology, we are just coming over the first riser in the staircase. The AI research laboratories are worrying about making expert systems that can degrade gracefully, take different points of view, and so on.

One last point about expert systems: There is more than one kind. There is the IF-THEN kind, but there are also other kinds:

* Formal logic

* Constraint propagation

* Means-ends analysis

* Causal relationship models

The IF-THEN system is the only one of the five kinds that lends itself to rule-based systems.

Language systems

We have at least one commercial success in natural language: the Intellect system, marketed by Artificial Intelligence Corp. In this system, you can tap your management information database in plain English. You ask it questions, and in doing so, you can be brief, use pronouns, and do almost anything else you want.

That's strange, because AI has not really progressed very far in the area of natural language. Oddly enough, though, the technology is good enough to make a very robust system.

But how is this power possible, if natural language has not progressed very far? A good illustration came in an experiment conducted at MIT about 10 or 15 years ago. We were interested in learning how much English we had to understand in order to deal with databases, especially those oriented toward business.

A group of Sloan School summer students were brought in and told that the natural language problem had been solved in full generality. They were also told that as an exercise, they would use a computer-based system to solve a case-study problem faced by a manufacturer of batteries.

The students quickly became facile at working with this system. They typed in their inquiries; then, after a short delay, responses were printed out rapidly. Of course, they didn't realize that there was a graduate student on the other side of the wall, typing responses into a Teletype. The responses were buffered and typed out to the user at computer-like speed.

How many different words do you think these students used in their inquiries: 1000, 1500, 500? The total was, in fact, only 300, and the grammatical complexity was relatively slight.

The point is, we found that there are numerous important task domains that can be handled very well with a limited vocabulary and simple grammar. This means the natural language technology of today can be made quite adequate for some types of users. In fact, the technology can produce some exciting results.

Today's practical natural language technology is based on the idea that it is possible to make graphs that represent the structure of the questions one may want to ask. It is a kind of technology that blends semantic categories, and things like nouns and verbs, into a graph that a sentence pushes its way through as it is analyzed.

Besides Intellect, another example of this approach is the Lifer system developed for the US Navy. Lifer has been tailored to handle inquiries about ships, such as, "Who is the captain of the Spruance," and, "What is the class of the Minsk?" These sentences (Figure 5) fight their way through a semantic network and end up at terminal nodes. There, programs are waiting to make appropriate database inquiries.
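A degenerate version of this graph idea can be sketched as a single path whose arcs test either literal words or semantic categories; a sentence that satisfies every arc "fights its way through" to a terminal node where a database inquiry could be made. The word lists and the single pattern below are illustrative inventions; Lifer's actual grammar was far richer:

```python
# Sketch of a semantic-grammar path for ship inquiries. The categories
# ATTRIBUTES and SHIPS, and the one fixed path, are illustrative.

ATTRIBUTES = {"captain", "class"}
SHIPS = {"Spruance", "Minsk"}

def parse(question):
    # Path: Who/What -> is -> the -> <attribute> -> of -> the -> <ship>
    words = question.rstrip("?").split()
    if (len(words) == 7
            and words[0] in {"Who", "What"}
            and words[1] == "is"
            and words[2] == "the"
            and words[3] in ATTRIBUTES
            and words[4] == "of"
            and words[5] == "the"
            and words[6] in SHIPS):
        return {"attribute": words[3], "ship": words[6]}
    return None  # the sentence fell off the graph

parse("Who is the captain of the Spruance?")
```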

The second type of natural language technology is quite a bit different from the graph type. The second type involves what we sometimes call frames.

What is a frame? An explanation may be found in the short newspaper story shown in Figure 6.

Reading that story, what do you know about what has happened? Which kinds of questions would you ask about an earthquake?

The questions are so predictable, so stylized, that you can just about write all of them down. If you are going to give me a story about an earthquake, I will want to know what fault is involved, when it happened, how many people were killed, and the quake's magnitude on the Richter scale.

There are so many signals in a typical earthquake story that it is easy to imagine a system that could take the story, ferret out everything you need to know about the quake, and fill up those slots.

You could do it by looking at just the kind of number involved. If it is a number between 5 and 10, and it has a decimal point, it is probably the magnitude of the quake on the Richter scale. If it has a dollar sign in front of it, the number represents the damage. The sentence with the word fault in it--that is the name of the fault involved.

Why is this so useful? In theory, it is useful because if you can do this kind of analysis, then you can fill the summary pattern using the information that you ferreted out. Figure 7 shows a restatement of the original story; it is compiled simply by filling in a little skeleton of the report. The skeleton is tailored for reports on earthquakes.
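The slot-filling heuristics just described can be sketched with a few pattern matches. The story text, the slot names, and the patterns are all illustrative inventions, not those of any actual frame system:

```python
import re

# Sketch of frame slot-filling for earthquake stories, using the
# number heuristics described above. Story and slot names are invented.

def fill_quake_frame(story):
    frame = {"magnitude": None, "damage": None, "fault": None}
    for number in re.findall(r"\$?\d+(?:\.\d+)?(?:\s*million)?", story):
        if number.startswith("$"):
            frame["damage"] = number            # dollar sign: damage figure
        elif "." in number and "million" not in number:
            value = float(number)
            if 5 <= value <= 10:
                frame["magnitude"] = value      # 5-10 with a point: Richter
    match = re.search(r"(\w+(?:\s\w+)?)\s+fault", story)
    if match:
        frame["fault"] = match.group(1)         # words before "fault": its name
    return frame

story = ("An earthquake measuring 6.5 on the Richter scale struck along "
         "the San Andreas fault, causing $500 million in damage.")
fill_quake_frame(story)
```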

We have found, however, that this kind of natural language is not as robust as we would like. We certainly have not advanced as far as we have in, say, robotics. With reference to Figure 3: natural language is on the flat part just over the first riser of the staircase. It is going to take an accumulation of more knowledge before we have another big jump in our ability to handle natural language.

Right now, we can deal with questions and get answers from databases. Before we can properly read newspaper stories, though, or really understand dialogue and do information retrieval from abstracts, we will have to accumulate much more knowledge.

Learning systems

There is another type of AI system, one that learns. This is something that is still in the research labs and is very exciting to me.

Let's look at an example involving toy trains. With reference to Figure 8: Give me characteristics of the five top trains that distinguish them from the bottom five. You may say that there are short cars, long cars, polygonal loads, and triangular loads. You may talk about the numbers of wheels and the numbers of cars.

Did you come up with an answer? Well, one answer is that each of the top five trains has a short car with a closed top. Another characteristic is that each train in the top set contains a car with a triangular load followed by a car with a polygonal load.
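The kind of induction involved can be sketched as searching for features shared by every positive example and absent from every negative one. The trains below are illustrative feature sets, not the actual trains of Figure 8:

```python
# Sketch of learning a distinguishing description from examples,
# in the spirit of the trains problem. The data is invented.

def distinguishing(positives, negatives):
    # Keep features common to all positive examples and missing
    # from every negative example.
    shared = set.intersection(*positives)
    return {f for f in shared if all(f not in n for n in negatives)}

top = [{"short closed-top car", "triangular load"},
       {"short closed-top car", "long car"},
       {"short closed-top car", "polygonal load"}]
bottom = [{"long car", "triangular load"},
          {"polygonal load"}]

distinguishing(top, bottom)  # → {"short closed-top car"}
```

A real learning program must search a far larger description space, but the core idea of generalizing from positives while excluding negatives is the same.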

What is the business opportunity in toy trains? Hardly any, but the same program can tackle this problem:

There are lots of diseases that affect soybeans. About 20 diseases are serious, ranging from things that sound awful, like bacterial pustule, to things that don't sound so bad, like brown spot. Now, suppose that we give the trains program a set of 400 or 500 samples of plant-disease descriptions; the program can then characterize each disease with a set of rules that look just like expert-system rules. An expert system could then diagnose soybean diseases.

The interesting and important part of this experiment is that the set of rules produced by the trains program was a little more effective than that produced by plant pathologists. I think there are tremendous opportunities in doing things with a program that can learn to identify things, diseases, machine problems, or what have you.

AI in robotics

On the staircase of technological progress represented in Figure 3, robotics is near the bottom of the second riser. You can see the relatively advanced status of robotics in the fact that the binary vision systems developed in the 1960s have now been solidly incorporated into commercial products.

Therefore, we now know much about the limits of these systems in terms of reachable activities. Further, we've been doing enough research that I feel confident there is going to be another jump on the robotics curve within the next five years.

This in some sense places robotics at a leading edge of AI. Unfortunately, the progress has been obscured until recently. The obscurity was caused by the fact that the US manufacturers who could benefit most from robots were in a sorry economic state, which limited their ability to exploit the new technology.

What is an intelligent robot? Essentially, it is a system that flexibly connects perception or sensing to action.

Human beings are examples of intelligent robots, for the following reasons. First, we can see, and we can feel forces, so we can cope with uncertain positions and changing environments.

Second, we have graceful arms and grippers capable of grasping all sorts of objects. And third, we think about what we do. We note and avoid unexpected obstacles, select tools, design jigs, and place sensors. In addition, we plan how to fit things together, succeeding even when the geometries are awkward and the fits tight. And we recover from errors and accidents.

In contrast, most of today's industrial robots are clumsy and stupid. For the most part, they cannot see, feel, move gracefully, or grasp flexibly, and they cannot think at all. Most robots in use now move repetitively through boring sequences, gripping, welding, or spraying paint at predetermined times, almost completely uninformed about what is going on around them in the factory.

Of course, practical robots need not necessarily resemble people. After all, they are built of different, often superior materials, and they need not perform such a wide range of tasks. Many roboticists believe that there are numerous tasks that defy automation with anything short of sensing, reasoning, dexterous, closed-loop robots--machines having human-like abilities if not human-like appearance.

As a result, an increasing number of major corporations are making bold moves. For a while, the general pace was slow in the robot-using industries, and outside of Japan there was little rush to accept and exploit the technology produced by AI.

Now the picture is changing. Many small companies are growing rapidly by supplying industry with turnkey products in which machine vision is a component that multiplies productivity. And large companies are establishing research groups with intensive development efforts underway.

Where will this new wave of automation go? How far, and how fast? While there is little agreement in the industry, questions such as the following are being addressed:

* Why is it relatively easy to build humanless parts-fabrication factories, yet relatively hard to build humanless device-assembly factories?

* What are the industrial tasks that require human-like sensing, reasoning, and dexterity? Is it better to eliminate those tasks by redesigning factories and products from scratch?

* What can be done by exploiting special lighting arrangements for vision? How far can we go with the simple vision systems that count each pixel (visual point) as totally black or totally white, with no shades of gray?

* Is the robot itself important? Can we improve productivity with robots alone, or must we think instead about improving entire manufacturing systems?

Then there is the question of money. Are the venture capitalists ready for AI? If so, how long will their readiness last? Is their current interest just a passing fad?

Will the commercialization of AI be driven by need-pull or technology-push? Is AI becoming commercialized because there are problems that need new solutions, or because there is neglected technology lying around, waiting for eager entrepreneurs to make use of it? What sort of progress will there be?

In my opinion, the correct attitude about AI today is one of restrained exuberance.
COPYRIGHT 1985 Nelson Publishing
Author: Winston, Patrick H.
Publication: Tooling & Production
Date: Sep 1, 1985