
Factory automation in America: issues, stumbling blocks, speculations.

American manufacturers are under tremendous pressure from successive waves of low-cost offshore producers. The very products we plan to use to automate our facilities are also under offshore attack.

Management largely has ignored the importance of manufacturing efficiencies over the last 20 years. We've let our plants get older, less reliable, less competitive. This recently has led many companies to spend large sums on piecemeal automation. Other companies have made an art out of developing myopic ROIs.

The influence of corporate data processing hasn't helped manufacturing either. DP's misguided perception is that all production requires is the right amount of material in the right place at the right time. The real payback on flexible factory electronics and controls comes from having the right data in the right place at the right time, which in turn gives management the right tools and resources at the right time. The goal is efficient, unbuffered, continuous production.

In 1950, 1960, and even 1975, process engineers couldn't be blamed because there had not been a wide variety of intelligent, networked controls for all possible machines. In 1985, that excuse is gone. I don't know of a single processing step that could not be reasonably equipped with electronic controls allowing for connection to a network to improve uptime, maintenance, and quality.

In most US corporations, critical decision-support data doesn't move efficiently from the bottom to the top (and vice versa). American manufacturers must utilize systems that allow direct connection between people making production decisions and people carrying them out.

Aggressive US firms are looking for ways, not simply to match their global competitors' productivity, but to beat it. Information handling is their strongest trump card in that game.

For years, limitations of computer technology, and its relatively high cost, restricted its use to the office. That is changing dramatically. Even more dramatic changes are ahead as the price of computer power drops and system reliability increases. Computers in every form, from microcircuitry embedded in a switch to the largest systems running simulations on real-time data, are critically important tools for manufacturers.

More than computers

Many office functions of even marginally competitive manufacturing companies are already highly computerized: order entry systems, CAD, CAM, scheduling, shipping, etc. One major problem is getting all of these different systems, usually built on different brands and models of computer hardware, to work as an integrated system. It isn't easy.

To a certain extent, computer manufacturers and software writers can say that, when it comes to CAD/CAM, they've done the easy part. They have tackled the problem from the angle of making drawings, not parts. The difficulty is neither CAD nor CAM, but the interface between them. The solution requires more than developments in computer technology.

In the first place, true CAD/CAM requires a change in the way manufacturers do business. It requires much closer coordination between product-design and process-design teams; it requires deliberate, comprehensive effort to design for manufacturability. In many cases, development of true CAD/CAM requires simplification of both the variations of part designs and manufacturing processes.

In a recent study by Harbor, we talked with users who had systems that allowed specifying performance characteristics and general dimension constraints for a part. Then the system not only designed the part, but also generated NC code to produce it. Such systems are fully automated from specification through part production.

These systems require common databases built around the concept of part families. Ability to move quickly through the design cycle to parts production is critical for companies wishing to compete in the future manufacturing arena.

Common databases are the key. But, technological developments may be less crucial than changes in the way companies approach the problem.

MAP--the backbone network

MAP began as the General Motors inhouse effort to establish a "manufacturing automation protocol" for communications in their plants. GM still is involved in developing a set of precise standards for communications, but the effort they initiated has become much broader. GM realized that they had both a need and an opportunity to bring other manufacturers into the battle for industrial-communications standards.

MAP, in the generic sense (as opposed to GM's own version), isn't so much a standard as it is an idea. The goal is ability to have information from any device at any point in the plant available to any other device that needs it, regardless of location, type, etc. MAP is intended to facilitate this process by producing standards for communications at the hardware and software levels.

What MAP has produced in the way of standards is embodied in IEEE 802.4, which offers a limited choice of standard implementations of both baseband and broadband CATV-based communications systems. This isn't a standard in the sense of something fixed, something guaranteeing interconnection and interoperability to every complying device. To read it that way, however, would miss the significance of the achievement.

What GM, Dupont, Boeing, and many other users realized was that computer and controls vendors would not arrive at a solution to the industrial-communications problem soon enough to be of any help. MAP advocates set out to force vendors to adopt a new perspective.

The IEEE 802.4 standards offer a narrow range of choices for hardware, and the first part of the overall software, required for a full factory-communications system. This narrowing of choices makes it possible for users and vendors alike to make decisions about which communications hardware approach to take.

Users thus can install the hardware they need today and be relatively certain that it will meet their needs years from now, regardless of how software in the upper layers of the communications scheme changes. For vendors, it makes further development possible, with a reasonable hope of recovering the R&D investment.

The decision to embody MAP in a set of standard choices, rather than as a single standard, also reflects the fact that different situations have different communications needs. Broadband MAP certainly will become the backbone communications utility for the plant as a whole. Baseband MAP, and real-time versions based on the ISA Proway effort, will be applied to areas where more flexibility or greater speed are required, and where broadband features don't justify the cost.

Vendors of industrial computers, software, networks, etc, that insist on closed-architecture solutions are heading for trouble. If IBM no longer thinks they can field closed architectures, then who can?

MAP does not mean, however, that manufacturers won't use proprietary networks. As long as the proprietary network permits linking to the plant-wide network, there may be no reason to install MAP at lower network layers.

Evolving vendors and users

Vendors of computer, communications, and control products have grown up in separate environments. And they carry rather rigid assumptions into other fields.

Process-control vendors, for instance, traditionally only talked to process-control companies about process-control applications. Discrete-parts and assembly controls vendors viewed their world as separate from that of process-control vendors.

This separation of vendors and users along traditional lines is breaking down. In another study by Harbor, we found many users in traditional continuous-process companies with applications usually associated with discrete-parts manufacturing. Likewise, in discrete-parts manufacturing we found many instances of continuous processing.

Each manufacturing-control faction has much to learn from the other. Process-control people clearly understand many of the problems manufacturers of discrete parts are just beginning to face. For example, process-control engineers in the chemical industries understand the notions of balanced material flow and production. They always have been forced to view their facilities as whole entities, which is something discrete-parts manufacturers must learn.

As the market for industrial-computer and communications products grows, many changes are happening. One of the most important is the trend toward more powerful universal "base" hardware. The net impact of this soon will realign the existing vendor community. Users can expect fewer brands and an eventual shakeout in the industry.

Presently, there are many vendors making a variety of controls and related products; however, as products converge, the number of companies that can profitably produce a particular version will decrease. More and more control products will be built on standard base hardware, supplied by a small number of vendors.

Other vendors will move aggressively to supply wide ranges of products through a combination of private labeling and their own manufacturing. These vendors will carry broad families of products geared to particular markets.

The whole deal

Applying computer power to the business of manufacturing quickly leads to a new vision of the whole process of manufacturing. The old view took it as a given that if you optimized each part of the process, you could put the parts back together and get an optimized whole process. The more computers are applied to our factories, the clearer it becomes that this isn't true.

The whole factory is what must be optimized, even when that means certain parts are suboptimized. This idea is common in systems theory, but runs counter to accepted notions of machine utilization and other measures of plant or department productivity.

In most larger companies the deck is stacked heavily against such a broad view. Investing in islands of automation without looking at the impact on the entire facility actually can reduce overall productivity because of the strain on other parts of the factory.

The most important point is that you can't achieve an advanced manufacturing facility piecemeal. The reality for most forward-thinking manufacturing executives is that they must build greenfield plants, then close the old ones that can't compete. The dilemma in America is that few firms have the financial resources to build facilities from the ground up, yet most have no choice because they have held on to their plants for so long. (An exception is GM's Dayton Frigidaire plant, which was gutted and retrofitted by Chevy truck engineers. The facility was, in essence, a greenfield empty shell.)

One important trend acting against the piecemeal approach to automation is the growing appreciation of manufacturing as a strategic weapon. As the cost and risk of investment in automation grow, automation becomes a primary concern of top management. When lower-level managers and engineers sense support from above, they are more likely to speak out and push hard for changes in their company's approach.

New approaches to factory scheduling

Just-in-time is not merely an inventory-reduction scheme. It is a way to simplify the entire manufacturing process, and has the following goals: zero set-up time, economic production lots of one, and straight-line material flow with no WIP buffers and no contingency for downtime or other dislocations. This means if a machine or any part of the process fails, it must be repaired immediately or the whole plant must shut down until it comes back on-line.

This puts tremendous demands on both the reliability and flexibility of production systems. It also simplifies scheduling by putting the entire facility on a production-for-demand (pull) schedule rather than a production-to-plan (push) schedule.
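
The pull discipline described above can be illustrated with a small present-day sketch (hypothetical code, not any vendor's product): a station is authorized to work only when downstream consumption has opened a slot in its output buffer, so nothing is produced to a plan.

```python
def kanban_pull(demand, stations, kanban_limit=1):
    """Minimal pull (kanban) loop: a station may work only when its
    output buffer holds fewer than kanban_limit parts, so production
    is authorized by downstream consumption, not by a master plan."""
    # buffers[i] feeds stations[i]; the last buffer is finished goods.
    buffers = [0] * (len(stations) + 1)
    buffers[0] = demand                 # raw material on hand
    shipped = 0
    while shipped < demand:
        # A customer withdrawal pulls one finished unit if available.
        if buffers[-1] > 0:
            buffers[-1] -= 1
            shipped += 1
        # Each station works only when its output slot is open.
        for i in range(len(stations)):
            if buffers[i] > 0 and buffers[i + 1] < kanban_limit:
                buffers[i] -= 1
                buffers[i + 1] += 1
    return shipped, buffers

shipped, leftover = kanban_pull(5, ["cast", "machine", "assemble"])
```

With a kanban limit of one, work-in-process between any two stations never exceeds a single part -- the economic lot of one described above. It also shows the fragility: stop any station and the whole line stops.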

MRP approaches push schedules through the production facility. The problem is simply that the computer power required to keep an MRP-type schedule updated in real time is immense.

Typically, an MRP system running for an entire plant either has current data to work on, or has a complete plan--it never has a complete plan based on current data. Changes in the real situation cannot be reflected in the plan generated by the plant-wide MRP system fast enough to be usable.

If you examine all players in the market for planning and scheduling systems, the first thing that becomes obvious is that they don't understand facility-wide information flow, and in many cases do not care about the production processes they are trying to capture and manage information from. Many MRP marketers grew up in MIS, while other, more plant-floor-oriented developers have looked at only pieces of the problem.

The single vendor group that best understands the problem dynamics across facilities, as well as across most manufacturing disciplines, is the material-handling suppliers, particularly those that have developed sophisticated products such as automated-guided-vehicle systems. Many of these suppliers were the first to realize the need for operations simulation in the design phase of installing a complex material-handling scheme. It became apparent that users wanted to purchase the ability to achieve critical material-movement goals, not conveyors so wide, so high, or so deep.

It turned out that before a vendor built the system, paper analysis couldn't accurately discern whether a particular configuration could move a certain level of material through a facility. Consequently, companies like Jervis Webb, Eaton Kenway, Litton UHS, etc, became experts at material-flow simulation. Since that time, many of the engineers who developed the concepts and capabilities have left to start their own software businesses, e.g., Autosimulation and Systecon. These people understood that they had technology relevant to JIT management.

MRP assumes that a company or site has infinite production capacity and asks what materials will be needed by when. Simulation offers a different tack. If a user can run it on a computer fast enough (which has become progressively easier with larger memories and faster data buses) and iterate (i.e., starting with certain production assumptions, what will happen?), then simulation ultimately becomes real-time, closed-loop control of the plant, based on knowing what will happen for all combinations before starting. Outside the simulation talent embedded within user companies, the material-handling community is the only vendor group that grasps this approach.
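
The contrast can be made concrete with a toy finite-capacity iteration (a hypothetical present-day sketch, not a commercial scheduler): where MRP assumes infinite capacity, the simulation shows exactly how much of the load slips each day, and the user can rerun it under different assumptions before committing the shop floor.

```python
def what_if(daily_demand, units_per_hour, hours_per_day):
    """Iterate a production scenario day by day under finite
    capacity, reporting (produced, backlog) per day -- the slippage
    an infinite-capacity MRP plan would never show."""
    capacity = units_per_hour * hours_per_day
    backlog, log = 0, []
    for demand in daily_demand:
        backlog += demand
        produced = min(backlog, capacity)
        backlog -= produced
        log.append((produced, backlog))
    return log

# Iterate: same three-day demand, two staffing assumptions.
one_shift = what_if([120, 90, 150], units_per_hour=10, hours_per_day=8)
two_shift = what_if([120, 90, 150], units_per_hour=10, hours_per_day=16)
```

One shift ends the third day 120 units behind; two shifts clear the load. A real simulator would model individual machines, routings, and downtime, but the iterate-and-compare loop is the same.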

This concept of an alternate way to schedule production may well destroy MRP as it is known today. This may happen fast enough to take the current MRP product suppliers by surprise. Consider all the plant-floor devices that will be networked in the future, creating the validated data necessary to run scheduling problems in near real time and play virtually infinite "what if" games.

There is a potentially good alliance foundation between database specialists, such as Cincom or Cullinet, and the material-handling experts that understand this capability so well. You can expect products such as a ruggedized personal computer supplied with a third-party 2MB ROM board to become a MAP-network-attachable device that is functionally designed to handle real-time scheduling problems and send results back to a department manager.

Running MRP in real time for every workcenter and plugging all of them into a high-speed network might provide the basis for the next generation of real-time planning and scheduling functions. There are major test sites for this type of product/network scheme installed today. One company has a three-stage system, where one microcomputer runs through initial requirements at one site, telephone-links to the next site and runs through the consequences, then links to a third site. The system is rerun for each day's production schedule.

Obviously, such a fully automated factory will require 100-percent data capture. If the goal is integration, the only way to ensure it is to make necessary data available to the overall computer system. If significant events occur that can't be discovered by the computer system, then applications the system is running will always be basing calculations on bad data. "Garbage in, garbage out" is an iron rule in computing.

One aspect of the issue is how to ensure that the data collected is, in fact, good. In general, automatic data capture (e.g., from a limit switch, or an automatic gaging or inspection system) is preferred. Semiautomatic systems (e.g., hand-held wands for reading bar codes) are the second-best approach. Data entered by hand at a keyboard, or on a touch screen or control panel, is prone to error.

Control trends

There are many different types of factory controllers. Programmable controllers and personal computers are the most flexible and fill the widest range of uses. Fundamentally, CNCs, digital loop controllers, personal computers, programmable controllers, and most other digital-control devices are nearly identical. They are all computers. They all use basic components that are almost interchangeable. Developments in computer technology, and in economics of the computer and controller businesses, are driving all control devices to a common hardware base.

The existence of such a universal machine, or family of machines, will greatly simplify the problems users face in attempting to get all the different devices in their plants to work together.

Further, the number of personal computers used in industrial environments is growing. I think they will continue to take on a variety of new attributes, making the base product considerably more suitable for the plant floor. These attributes include environmental packaging, more computer horsepower, and expanded I/O.

I've also noticed growing interest in area and cell control. Before examining this, some working definitions of control levels in a manufacturing plant are needed. The four major levels are--unit, cell, area, and facility wide.

Unit level represents the lowest material-flow element/event that has its own controller. The cell level represents two or more interconnected units that act as a coordinated whole. An example is an FMS.

The area level is a group of units and/or cells working together in the same physical area and/or on related tasks, but without the requirement for real-time coordination that would make them a cell. Both cells and areas typically involve a variety of equipment brands. The area-control function will likely evolve into one of the most critical applications of industrial computers.
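
The three lower levels can be sketched as a small class hierarchy (hypothetical present-day code; the class names simply mirror the definitions above): the cell commands its units directly and in real time, while the area supervises without owning any machine I/O.

```python
class UnitController:
    """Unit level: the lowest element/event with its own controller."""
    def __init__(self, name):
        self.name = name
        self.status = "idle"

    def run(self):
        self.status = "running"


class CellController:
    """Cell level: two or more units acting as a coordinated whole
    (an FMS, for example); it commands the units in real time."""
    def __init__(self, units):
        self.units = units

    def start_cycle(self):
        for unit in self.units:
            unit.run()


class AreaController:
    """Area level: supervises cells/units in one physical area, but
    only gathers status and issues plans -- no direct machine I/O."""
    def __init__(self, cells):
        self.cells = cells

    def status_report(self):
        return {u.name: u.status
                for cell in self.cells for u in cell.units}


fms = CellController([UnitController("cnc"), UnitController("robot")])
area = AreaController([fms])
fms.start_cycle()
```

The division of labor is the point: only the cell touches the machines, so mixed equipment brands are hidden from the area level behind a uniform status interface.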

Proliferation of control-technology brands stems largely from process conditions, production-facility preferences for equipment, and differing price considerations. All of these make the area computer/controller a key building block in the plant-floor information structure. The area level will be the focal point for many advances in the standardization of system and equipment interfaces.

What will be needed is a bridge between dedicated process- and machine-control systems and a higher level of production planning and control (i.e., facility wide). The area computer/controller will fill this need.

Both cell and area computer products will have to provide a variety of man/machine and man/process interfaces, allowing for different applications and including options for attachment to higher-level local-area networks. Machine applications will determine the I/O and real-time requirements. Real-time coordination for cell controllers, however, combined with the variety of devices they must control, makes the software for these products much more complex than the requirements at the area level.

There are problems with the term area controller. It was originally conceived during the Wright-Patterson ICAM project, but has since come to mean different things to different people. Interpretation, as it moves through vendor product strategies and user control schemes, varies considerably.

If a user in a classical process-control industry has an area controller, he is writing all I/O directly to the controller. A fast real-time I/O-rich computer is therefore required, but that architecture is quickly becoming obsolete.

If a user moves to a fully distributed system (i.e., unit controllers networked to an area controller, or unit controllers tied to a cell controller tied to an area controller), there basically would be no I/O on the area controller other than communications and peripherals. It would be an area process manager that provides supervisory capabilities over all cell and/or unit controllers.

In continuous process control, technology evolved from pneumatic and electrical controls to simulation of those controls in a process/unit controller. I think the next step will be a universal unit controller. In discrete processing, the evolutionary path began with electromechanical controls, rising to simulation via programmable controllers. Eventually this also will become a universal base product.

The concept of big real-time computers performing real-time control is obsolete. What I am describing is vertical distribution of function. With layered networking, many of the concepts of redundancy and nonstop computing are no longer needed. If a unit controller fails, the cell or area controller can step down to the unit level, taking over the task without losing production.
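
That step-down behavior can be sketched in a few lines (hypothetical code; the function names are invented for illustration): the supervising controller absorbs the failed unit's task instead of relying on dedicated redundant hardware.

```python
def dispatch(task, unit_controller, supervisor):
    """Vertical distribution of function: try the unit controller
    first; if it has failed, the supervising cell or area controller
    steps down and runs the task itself, so production continues."""
    try:
        return unit_controller(task)
    except RuntimeError:
        return supervisor(task)


def healthy_unit(task):
    return "unit ran " + task


def failed_unit(task):
    # Stands in for a dead or disconnected unit controller.
    raise RuntimeError("unit controller offline")


def cell_takeover(task):
    return "cell ran " + task
```

Routing the drill cycle through `dispatch` means the same cycle completes at the cell level when the unit is down, which is exactly the redundancy the layered network provides for free.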

An area controller can be a manager/coordinator of multiple cell controllers or, depending on the application, multiple unit controllers. There are functions within an area that are hierarchical, and there are horizontal functions. An example of a horizontal function is 50 die-casting machines in a department with no cell controller supervising them.

The difference between a cell controller and an area controller is best defined through the application. For example, compare a programmable controller running a conveyor, versus a micro-computer running a laser gage, a couple of CNCs, and a robot--the microcomputer ties all these controls together as if they were designed as an integrated cell.

But, if you have 50 die-casting machines tied together on an Allen-Bradley Data Highway and then into an IBM personal computer as the department-manager workstation, it doesn't look at all like a cell controller in terms of functions performed. Real-time load on the cell controller is 50 times higher than on the personal computer sitting on the data highway doing upload, download, and production monitoring.

In many ways, current personal computers can't be adequate cell controllers, even though some are capable of employing multiple CPUs, and therefore potentially can be configured for higher-performance applications. For today's projects you would most likely use a DEC VAX.

I expect to see fairly universal standard products in a range of sizes for unit control. For cell control, you will see some universal standard products, but these will be primarily for metalcutting. Beyond that there will be a whole range of industrial-grade computers that will be configured to accomplish a variety of tasks.

In general, the individuals I have interviewed predict that such products will evolve into an industrial-grade computer (including personal computer, workstation, etc) that will typically have anywhere from 5 to 10 smart serial I/O channels and a small integrated programmable controller to provide interfacing to devices that lack serial-communications compatibility. The specific configuration of this product will have to vary for each industry, manufacturing process, and local-plant environmental condition (including mounting), etc.

Artificial intelligence

Another major factor in the automation picture is artificial intelligence. AI is, in a sense, an attempt to give computers judgment ability. Traditional computers do exactly what they are told--no more, no less. When faced with a complicated decision, they must evaluate every possible combination. By applying rules of thumb, AI systems attempt to get around this constraint so computers can be applied to more complex decision-making tasks.
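
A rule-of-thumb dispatcher shows the idea at its simplest (a hypothetical present-day sketch; the rules themselves are invented examples): instead of evaluating every possible schedule combination, the system applies ordered heuristics and takes the first that fires.

```python
# Ordered heuristics as (condition, action) pairs. The first match
# wins, as in a simple rule-based expert system -- no exhaustive
# search over every possible dispatch decision.
RULES = [
    (lambda job: job["hot"],             "expedite"),
    (lambda job: job["slack_hours"] < 4, "run next"),
    (lambda job: job["setup_matches"],   "batch with current job"),
]


def dispatch_rule(job, rules=RULES, default="normal queue"):
    """Return the action of the first rule whose condition holds."""
    for condition, action in rules:
        if condition(job):
            return action
    return default
```

A real expert system adds rule chaining, certainty factors, and an explanation facility, but the economy is the same: a handful of judgments replaces an enumeration of every combination.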

AI approaches already are being used in many automation areas, and it seems clear that they will become increasingly important as the price of computer power drops.

One of the most important areas for such tools is in area- and cell-control systems. Each cell and area is unique. No single piece of hardware will be right for every application. The hardest part of putting such systems together is the software to coordinate the different units.

A major benefit of applying AI to manufacturing-control problems is ability to predict what will happen and take the necessary steps, rather than wait until it happens and attempt to recover. This general concept applies to things as diverse as scheduling and QC.

Begin at the bottom

Eventually, all value-added manufacturing processes must go electronic. Events will be captured as early as possible. The control systems will then be networked, keeping in mind that such a scheme cannot be fully distributed or fully hierarchical. It, in fact, will mirror the reporting structure of the organization. The key will be: don't let the process happen unless correct data is entered.

If there is a meaningful manual management-information hierarchy, the information network will provide appropriate data at each level; networking only will improve a successful manual system.

In the end, you must get down to those ugly gray boxes on the shop floor, capturing all data about all material flow and value-added processes from the manufacturing steps themselves.

When running a transfer line, or anywhere there is continuous material flow, you will need integrated and coordinated data capture. All companies should have an information system to coordinate material flow--for every material event, a corresponding information event should occur, and you shouldn't allow that information event to involve people and paper, unless it absolutely can't be automated.
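
The event-pairing rule can be made mechanical (a hypothetical sketch): generate the information event inside the material-handling step itself, so it can be neither skipped nor mis-keyed.

```python
import time

EVENT_LOG = []   # the plant-wide information system, in miniature


def material_event(kind, part_id, station):
    """Record the information event that must accompany every
    material event -- automatically, with no people or paper."""
    record = {"kind": kind, "part": part_id,
              "station": station, "time": time.time()}
    EVENT_LOG.append(record)
    return record


def process_step(part_id, station):
    """A value-added step: arrival and departure are logged as a
    side effect of the move itself, never as a clerical task."""
    material_event("arrive", part_id, station)
    # ... value-added work happens here ...
    material_event("depart", part_id, station)


process_step("part-001", "mill")
```

Because the log entries are a by-product of the process, the computer system never has to discover events after the fact, which is the condition set out above for integration.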

US manufacturers have buried this problem in one of the slickest accounting misrepresentations of all time. It's called indirect labor.

Why is the person moving material in and out of workstations on a forklift not considered direct labor? That person is counted as indirect labor. And, as long as that ratio is in line, those labor hours won't be examined.

Don't hope to accomplish productivity goals if you can't first reduce the indirect labor moving material and information around in the factory--indirect labor is, in effect, industrial America's current data-communications network.

In recent years, Toyota achieved as much as a 28-percent annual growth rate and a 10-percent annual productivity improvement. Five percent of the cost reductions were from its Kanban system; 80 percent were from redesign of plants and machines to reduce indirect labor.

Unfortunately, too many US production facilities that are coming on-line are the best 1985 implementation of 1953 production thinking in the entire world. We no longer can afford to automate manufacturing operations that are fundamentally ill-conceived.

Six points for strategic planners

Industrial planners must closely examine the showcase manufacturing projects. The planners of those projects forced themselves to completely rethink manufacturing. In many cases, they initiated product redesign to accommodate automation techniques before embarking on computerization of all information capture.

Strategic manufacturing planners should keep the following points in mind:

* Check manufacturing goals against corporate business goals. Business goals have to be considered with all automation projects. Your company must be able to build a network that allows for communication between the top and bottom of the management hierarchy.

* Rigorously check a supplier's willingness to work closely with your company toward strategic goals. An aerospace company recently went out for bids on an FMS, receiving responses from a group of domestic suppliers and a Japanese supplier. The Japanese supplier's price was about 10 percent higher, but the quoted work in progress was three days versus five weeks, floor space was roughly one-third less, head count to operate the system was roughly half, and the supplier was willing to guarantee product quality. Whoever heard of a US machine-tool builder willing to guarantee quality?

* Sign up users early in the planning stages. This country, as a manufacturing culture, talks very little with operators on the shop floor. Yet who knows more about the realities of the equipment and processes?

* Don't try to be perfect, especially the first time around. I have seen project after project where the designs and concepts are technically marginal, but the payback is tremendous. My experience is that if you understand the process well enough to automate it, you can frequently achieve 50 to 75 percent of the payback by simply managing the process better. Don't get too enamored with technology.

* Audit all projects after completion; check current operational data against original justification data. If original reasoning is no longer valid, then don't do the project again!

* Learn from mistakes. Misguided automation projects won't help reach your productivity goals. They will tie up capital and resources, leading you astray. Manual systems are, in most cases, easier to change than automated systems. Examine life-cycle forecasts, then determine whether or not it's wise to automate a production function.
COPYRIGHT 1985 Nelson Publishing

Article Details
Author:Allmendinger, Glen A.
Publication:Tooling & Production
Date:Oct 1, 1985