Possible applications of neurocomputing in defense.
From Artificial Intelligence to Acquired Wisdom
Despite their phenomenal number-crunching abilities, even supercomputers capable of solving complex mathematical equations in seconds bear no comparison with the universal computing capacity of the human brain when it comes to performing numerous fundamental functions simultaneously. Conventional computers, for example, cannot rapidly recognize patterns, work from unprepared sensor input or anticipate the consequences of situations, even when they make full use of the "knowledge" stored in their databases. This situation, however, might change in the near future with the advent of neural networks (also called neurocomputers or neural systems). Although nobody has yet been able to explain conclusively how the brain functions, a number of scientists and engineers are currently looking at ways of designing electronic structures that mimic naturally intelligent networks, such as the human brain. The successful development and application of neural networks to military and civil uses are bound to change the fabric of society on every conceivable level just as profoundly as did nuclear and genetic technologies. Extrapolating from present experience with neural networks, scientists confidently predict that by the end of the first decade of the coming century, trained - as opposed to programmed - robots will be able to perform multiple tasks, and that 30 years later androids, i.e. humanoid structures, will assist man in labor-intensive tasks. They even anticipate that by the end of the next century neural networks may well be endowed with creative intelligence.
A different school of thought, on the other hand, downplays those predictions by claiming that the rather disappointing performance of artificial intelligence (see Armada International April/May 1989), initially much touted, gives every reason to believe that neural networks are heading towards a similar fate because unexpected and as yet unpredictable obstacles are bound to block progress at crucial points.
By their very nature, neural networks neatly sidestep the learning/programming problems of artificial intelligence because they cannot be programmed: a neural network must be trained, and it can also learn by example (i.e. it can create its own database). Such capabilities have been well known for decades, so it remains something of a mystery why scientists invested so much effort in the artificial intelligence approach and considered the neural network concept an also-ran. According to Dr. Castelaz of Hughes - one of the leading US authorities on neural networks - there were two main reasons: the lack of the necessary technology in the 1950s and 1960s for implementing large neural networks, and the lack of the digital computer simulations needed to overcome the problems of analog implementation, since digital systems themselves were in their infancy. Besides, more had to be learned about how the human brain operates before work could continue.
Initial attempts to develop neural networks were based on research in the late 1940s, when scientists in the USA were able to devise plausible models of the function and structure of the human nerve cell. Engineers endeavoured to give these models a structure in mechanical, chemical and/or electrical forms, and by the 1960s considerable progress had been made.
Electro-chemically operated neural systems proved able to recognize graphic patterns. In 1958 a machine called the Perceptron even showed a learning capability. But then analog systems began to be replaced by vastly superior digital processors. This created a change in attitude towards computing, resulting eventually in the oft-cited computer revolution. Since artificial intelligence depends on digital processing while neural networks work best in an analog environment, pursuit of the latter avenue of research appeared somewhat antiquated. Conventional artificial intelligence had thus won the day, dominating research in the United States and elsewhere. In Japan, the Fifth Generation Computer project was launched, while the UK's Alvey program and the EEC-initiated ESPRIT program got slowly underway in Europe. Work on neural networks was kept alive only as an esoteric laboratory curiosity at a number of universities.
Participation of Industry
In the early 1980s, when the limitations of digital-based artificial intelligence became apparent, neural network research began to gather momentum again. Today, numerous governmental agencies, universities and industries engaged in computer and software R & D are once more investigating the possible applications of neural systems. By the middle of the past decade efforts were at last supported by adequate funding from defense and science budgets, and industry began to finance extensive in-house projects. It appears that Japan was the first to recognize the enormous potential of neural networks, and some European sources claim that this country is already working on a second-generation neurocomputer.
Two major research projects on neural networks are currently supported in Europe by the EEC to the tune of 5 million ECUs (European Currency Unit) each. Both are facets of the ESPRIT program. The first project, named Annie, is an exploratory project to investigate potential neural network applications.
Those involved are a number of universities and companies like British Aerospace, Siemens, CETIM (France) and Alpha (Greece). The second European project was christened Pygmalion and centers its efforts on software tools and applications. Project leader is Thomson-CSF, partnered by SEL (Germany), Philips (Netherlands), CSELT (Italy) and universities in Great Britain, France, Greece, Portugal and Spain. This program is aimed at creating a network description language and a software environment leading to a European standard.
European efforts, unfortunately, appear minuscule by comparison with American activities in neural network research, which are primarily fed by military funds derived from SDI and similar advanced programs and projects. These are sponsored by DARPA (the Defense Advanced Research Projects Agency) with substantial funding, unofficially estimated at anything between $500 million and $2 billion over a period of five years.
That an advanced stage has been reached already is evidenced by the fact that TRW is offering neural network products on the open market. These systems are apparently intended for experimental use by universities and industry, with the aim of broadening the nation's experience with neural networks. General Dynamics also conducts intensive neural network research, presumably with the aim of perfecting missile guidance systems. Long-term projects for aerospace applications are underway at Hughes, and even Du Pont runs a project aimed at tailoring neural networks to the needs of the chemical industry. IBM and DEC, traditional software and hardware producers, support neural network programs for future military and commercial purposes. NASA is of course deeply involved too, since neural networks lend themselves as coprocessors to many space flight applications. In the space agency's NNETS (Neural Network Environment Transputer System) 40 transputers have been linked to attain an extremely high operating speed.
Neural Systems Technology
Neural systems can be defined as non-programmed, adaptive information processing networks which develop their own algorithms in direct response to their environment. The two main features of neural networks are hidden in this definition. The first is that neural networks cannot be programmed, which implies that they have to be trained. The second is that a trained neural network can adapt itself to external influences, i.e. it can learn by experience. Neurocomputing is thus a radical departure from hitherto employed methods, and neural networks have the potential to become an essential part of tomorrow's self-programming computer. The information processing architecture of a neural network is based on a massive parallel structure, containing numerous simple processing elements - called neurons - interconnected to obtain collective computational capabilities. These interconnections can be fashioned in several ways called network paradigms. About 20 different types of paradigms (a term which essentially means "model of any form") are currently used, but intensive research is being conducted worldwide to develop new and more efficient types.
These silicon neurons are patterned in structure and behaviour after the human brain cell (at least as biologists believe it is constructed). Each neuron can have any number of inputs but only one output; this, however, branches out to connect, as an input, to numerous other neurons, i.e. processors. In a typical network only a few processors are connected directly with the outside environment, from which they receive their inputs. Each neuron has essentially the same function and structure. During processing it sums up - or, figuratively speaking, "weighs" - the inputs received, and if the sum exceeds a given level it "fires" a signal to the connected processors. The learning process of the multiple neurons is based on the "weights" of the inputs they handle. The firing threshold in each element is progressively modified both by the learning process and by the data entering from outside.
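The weighted-sum-and-threshold behaviour described above can be sketched in a few lines. This is purely illustrative: the function name, weights and threshold values are invented for the example, not drawn from any actual neurocomputer design.

```python
def neuron_output(inputs, weights, threshold):
    """Sum the inputs multiplied by their "weights"; the neuron
    "fires" (outputs 1) only if the sum exceeds the threshold."""
    activation = sum(x * w for x, w in zip(inputs, weights))
    return 1 if activation > threshold else 0

# Two strong inputs push the weighted sum (1.1) past the threshold (1.0).
print(neuron_output([1.0, 1.0, 0.0], [0.6, 0.5, 0.3], threshold=1.0))  # fires: 1

# A single weak input (sum 0.3) leaves the neuron silent.
print(neuron_output([0.0, 0.0, 1.0], [0.6, 0.5, 0.3], threshold=1.0))  # 0
```

In a real network, the single output line would branch out as an input to many other such neurons.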
This process is implemented by one of several currently available learning laws. Scientists have developed models of how the human being absorbs knowledge, i.e. learns. One of the simplest holds that each time an input contributes to the firing of a neuron, the weight of that particular type of input is automatically increased; if a particular input does not significantly change the balance within the neuron, its "weight" is decreased. This means that spurious or weak inputs are discarded, while only those that frequently add to the weighted sum are reinforced. Broadly speaking, information in a neural network is processed by a complex interaction of neuron activity and the continuous adjustment of weights that this activity causes. All information is processed in a perfectly parallel, well-nigh amorphous manner which does not require the decomposition of the input into single data items, as a conventional computer does.
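The simple learning law just described - strengthen the weights of inputs that help fire the neuron, weaken the others - can be sketched as follows. The learning rate and all the values here are assumptions made for this illustration only.

```python
def update_weights(inputs, weights, fired, rate=0.1):
    """Apply a crude form of the learning law: active inputs gain
    weight when the neuron fired, and lose a little when it did not."""
    step = rate if fired else -rate
    return [w + step * x for x, w in zip(inputs, weights)]

# An input that repeatedly contributes to firing sees its weight grow,
# while an inactive input's weight is left untouched.
w = [0.5, 0.5]
for _ in range(3):
    w = update_weights([1.0, 0.0], w, fired=True)
print(w)  # first weight reinforced (to about 0.8), second unchanged
```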
Training the System
The self-organizing nature of neural networks is possibly their most interesting feature. As mentioned above, they are not programmed but have to be trained. The process is in fact simple: the operator feeds samples of typical inputs into the network, e.g. a distorted radar or sonar signal, and in parallel presents the system with samples of the expected output, i.e. clear and readable radar or sonar signals. After repeated runs the neural network develops, on its own, the algorithms needed to transform the marginal input into a usable output. Depending on the complexity of the task, a neural network requires several hundred to a thousand runs of this training cycle, which is naturally performed in an automatic repeat mode and can be completed in seconds. The network can of course be re-trained to handle other tasks. It can readily be seen that this self-programming feature radically alters the way computers are traditionally operated.
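The training cycle described above - present a sample input together with the expected output and repeat until the network has organized itself - can be illustrated with a single trainable neuron. Everything here (the "signal strength" samples, the learning rate, the number of cycles) is invented for the sketch; a real radar or sonar application would of course involve far larger networks and data.

```python
def train(samples, cycles=200, rate=0.1):
    """Run the training cycle repeatedly: compare the neuron's output
    with the expected output and nudge the weights toward agreement."""
    w, bias = [0.0, 0.0], 0.0
    for _ in range(cycles):
        for x, expected in samples:
            out = 1 if x[0] * w[0] + x[1] * w[1] + bias > 0 else 0
            error = expected - out
            w = [wi + rate * error * xi for wi, xi in zip(w, x)]
            bias += rate * error
    return w, bias

def recall(x, w, bias):
    """Query the trained neuron with a fresh input."""
    return 1 if x[0] * w[0] + x[1] * w[1] + bias > 0 else 0

# Invented samples: strong "returns" are the wanted signal (1), weak
# ones are noise (0). After training, the neuron correctly classifies
# an input it was never shown.
samples = [([0.9, 0.8], 1), ([0.1, 0.2], 0), ([0.8, 0.9], 1), ([0.2, 0.1], 0)]
w, bias = train(samples)
print(recall([0.85, 0.85], w, bias))  # classified as signal: 1
```

The point of the sketch is the absence of any task-specific program: the mapping lives entirely in the weights that the repeated training cycle produces.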
A neural network is not a computer by itself, but rather a co-processor added to any suitable digital computer system. With proper interfacing it will be able to cooperate with any current operating environment such as UNIX, DOS and the like. This is the crucial point where neural networks have a distinct advantage over artificial intelligence. An expert who enters his knowledge into an artificial intelligence database first has to master its complicated software language and associated operating techniques. This is not the case with neural networks, provided they work with a standard computer, which can be a cheap commercial PC/AT (the pictures accompanying this article were generated on such a machine). It was found that the best results were achieved by familiarizing experts with the features and limitations of neural networks and by letting them enter their specific knowledge with the usual language and keyboard. Neural networks are in fact ideally suited to working with very large databases or data input flows because they very quickly discover repetitions in any data flow. If a comprehensive knowledge database already exists, neural networks can conceivably be commanded to extract all the pertinent data and so generate an artificial intelligence expert system without any special programming effort.
Another most exciting application is the ability of neural networks to generate highly complex algorithms which are simply too complicated to be created by human beings in an economically acceptable timeframe. This applies to the defense field, particularly to algorithms required for target acquisition and tracking, sonar evaluation and pattern recognition. The drawback is that the solution offered by the neurocomputer cannot be as exact as the human-written version because, after all, it represents merely an approximation, a near-optimum solution. Whether this approximation is technically acceptable depends on the accuracy or performance required by the user device.
It was discovered during research and application experiments that the neurons can be arranged in different architectures to yield either excellent self-teaching or highly developed optimization properties. The latter type of architecture requires all the available parameters to be very specific, but offers considerable advantages when the neural network architecture is custom-designed. For example, an architecture can be created as part of a radar signal data processor which eliminates clutter by intelligent optimization instead of the customary threshold filtering. A typical application would be the search for features of interest in reconnaissance satellite data.
Merging Neural Nets and Artificial Intelligence
In spite of their formidable inherent properties, neural networks are not the whole answer. There are distinct limitations to what a neural network can do in knowledge processing. The basic problem with neural networks is that they do not output clear-cut information but a complex pattern of signals resulting from the firing of the output neurons. This is where conventional artificial intelligence and neural networks can merge their respective strengths. An artificial intelligence expert system is far too slow to be of any practical use if it has to work with a very large database. This can be compensated for by the addition of a co-processing neural network, which could scan even the largest database of any conceivable artificial intelligence expert system within microseconds, select via the optimization process the most likely solutions to the problem, and feed them back into the artificial intelligence system for further processing. This mimics almost to perfection the problem-solving process of the human brain.
Since the neural network concept also offers a number of interesting advantages at the hardware level over conventional computing, intensive research is aimed at producing optimal chip designs. As seen earlier, the parallel architecture of neural networks disperses information among a large number of neurons. This renders the system highly resistant to damage, since the electrical or mechanical failure of some neurons does not impair the operation of the processor. It may slow it down somewhat, but only to an insignificant extent, as the operating speed is vastly higher than that of a conventional computer, where partial loss of memory or failure of the central processing unit leads to a collapse of the system. Another advantage of neural networks is that they lend themselves well to VLSI (Very Large Scale Integration) chip technology. While even slightly defective VLSI chips built for conventional computers have to be discarded, it is relatively unimportant if some neurons in a neural network chip are inoperative. This tolerance lowers the industrial rejection rate, and consequently the price of the chip.
Most present operational neural systems employ conventional hardware, which is readily available at acceptable prices, but this type of hardware does not yet really permit the creation of true neural networks. Currently, familiar digital processors are conventionally programmed to simulate neural networks because this approach is far cheaper than designing and fabricating dedicated neural network VLSI chips. But it is only a matter of time... The demand for neural network processors has opened up a completely new research and market situation for the semiconductor industry, which is rising to meet the challenge. It can be assumed that the traditional American semiconductor manufacturers and technical universities are working on the design of embedded neural networks. A typical example is AT&T, which has already produced prototypes of a hybrid silicon-based analog/digital network. In Japan, Fujitsu, Nippon Telegraph & Telephone and others are working on neural chips. In the UK, University College London is working on neural network chip design procedures and hopes to produce a compiler able to generate neural chips for any given architecture. Siemens is engaged in neural network chip design, as are Thomson-CSF and the Intel group.
Nevertheless, most currently available chips are still primarily digital or hybrid analog/digital devices. The ideal solution for neural systems can only be offered by analog techniques, which, according to one authoritative source, can "reach 100 000 times the efficiency of digital computing" when employed in neural networks. But even then, such cost-effective neural networks may never reach the switching speed and efficiency of the now-proposed optical networks, which transmit data by laser, store them in holographic form and process them with optical switches. And this seems to be only the dawn of yet another revolution. A team at the University of Stuttgart, for example, is working on molecular electronics, in which the customary inorganic silicon is replaced by organic, in part superconducting, substances. If successful, this line of research may result in chips which are easy and cheap to produce and vastly faster than conventional silicon devices. From this point onwards, the step towards an organic/chemical system (a bionic structure which not only mimics but also resembles the human brain in both its operating mode and its architecture) would be conceivable. Concern is sure to be voiced sooner or later by fundamentalist philosophers, but just as tools are extensions and force multipliers of the human arm and hand, neurocomputers are bound eventually to become extensions of the human mind. This aspect should be borne in mind while looking at the attached table of potential applications of neurocomputing for military and commercial purposes.
As explained above, currently known neural network architectures can be divided into two basic groups: learners (i.e. adaptive networks) and optimizers. Their properties dictate their applications. Among the most obvious uses of self-organizing adaptive neural networks are real-time pattern-recognition tasks. The detection of target signatures buried in clutter and noise is possible with conventional computer systems, but because of the long search cycles it cannot be done in real time. The assistance of a neurocomputer will speed up the process enormously and thus provide the few extra seconds needed for defense against an incoming missile, for example. The same process can enhance the effectiveness of ESM and ECM far beyond presently conceived systems. Applied to the sensor-image evaluation suites of robotic vehicles, for which they are now clearly a prerequisite, neurocomputers can offer cross-country mobility far outperforming that of current models, since they can quickly and automatically be taught to select the best path across any terrain. As an experiment, JPL (the Jet Propulsion Laboratory) in the USA has designed an embedded adaptive neural network chip to control a robotic arm, which may find an application in automated space vehicles. Neural networks can also be employed for target identification and tracking, weapon allocation, missile guidance, intelligence-gathering and data-merging functions. They can greatly improve the operation of man-machine interfaces or handle resource allocation at the logistic level.
An excellent example of the capability of a learning neural network structure is an experimental simulation system designed by Hughes and currently under test. It has simulated the missile defense of a high-value target against incoming threats. The neural network initially "observed and registered" the actions of a human operator and learnt the necessary missile-launching procedures and correct launch time of an air defense battery. The command of the battery was turned over to the neurocomputer after it had observed 40 exercises in changing scenarios. It achieved a consistent intercept success rate of 85 to 90% against multiple targets in rapidly changing scenarios. The success rate against single targets averaged more than 95%. It should be noted, however, that the above results were not achieved with dedicated chips and that considerable improvements may be expected when a specifically designed VLSI chip, now under development at Hughes, becomes available.
Also at Hughes, under the direction of Dr. Castelaz, an experiment in optimization processing achieved really striking results in multi-sensor passive tracking initiation in an air defense scenario. The problem involved the rapid location of a small number of true targets among a very large number of false ones. The difficulty of solving this problem increases exponentially with the number of targets. The impact of this fact on conventional processing is significant. For example, in a typical scenario involving 15 true targets hidden among numerous ghosts, a digital VAX 11/780 computer can solve the problem within 10 seconds. But if the true targets number more than 40, the computing time required is estimated at years. In a simulated environment, and employing a neural network, Hughes scientists arrived at an average processing time of 15 microseconds for the discovery of 36 true targets among a total of thousands of ghosts. This represents six orders of magnitude faster processing than with conventional computer systems. Such experiments and their results are still preliminary but are already indicative of neural networks' capabilities.
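The combinatorial explosion behind those figures is easy to reproduce on paper. As a rough, back-of-the-envelope illustration (the ratio of ten false returns per true target is an assumption made here, not a figure from the Hughes experiment), the number of possible ways of picking the true tracks out of the candidate returns grows by tens of orders of magnitude between the two cases:

```python
import math

# Number of possible selections of n_true targets from the candidate
# field, assuming (purely for illustration) ten false returns per
# true target. Exhaustive search must contend with this growth.
for n_true in (15, 40):
    candidates = 10 * n_true
    print(n_true, math.comb(candidates, n_true))
```

A neural optimizer does not enumerate these combinations one by one; it settles in parallel toward a near-optimum selection, which is why the speed-up over sequential search can be so dramatic.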
The progress achieved so far has doubtless reached a high level, but numerous problems remain to be solved for advancing neural networks from the experimental to the operational stage in defense or civil systems. It was discovered for example that with their increasing complexity neural networks tend to "forget" what they have learnt, a phenomenon which has yet to be fully explained. At least another decade will pass before neurocomputing processors have reached the maturity required for functioning as clever assistants in standard computers and as essential subsystems in the robots of the future.
PHOTO: This amazing computer image, produced by Hughes, clearly shows the output of a neural network hunting for true targets among thousands of ghost signals, as might be produced by decoys during an intercontinental ballistic missile attack.
PHOTO: Submarine navigation and weapon control is a typical area in which neurocomputers and artificial intelligence could be put to work together.
PHOTO: Neural network processors like this Hughes prototype are relatively small and can be fitted easily to a standard extension board of a conventional computer.
PHOTO: Neural networks can be used as "terrain classifiers" for civil and military purposes. In the latter case, they could be used in missiles for TERCOM navigation.
Date: Feb 1, 1990