
Parallel processing: a design principle for system-wide total quality management.


Total quality management (TQM) based on parallel information processing design principles is observed in inter-firm systems which integrate and mobilize the core competencies of a wide array of teams and organizations to rapidly and effectively solve complex problems, such as those encountered during the design, production and delivery of complex goods and services.

As a design principle, parallel processing deals primarily with how to create processes and systems which achieve the shortest possible cycle times. In production this may be achieved by designing small-lot continuous-flow systems (Wolf and Taylor 1991). This requires system-wide use of TQM for operational process control and improvement. Shorter cycle times in product development require use of more powerful tools for re-engineering product/process systems. This is achieved through system-wide use of TQM methodologies such as quality function deployment, value analysis and engineering and other tools designed to harness simultaneous engineering capabilities throughout the supply chain.

As a general proposition, the extent to which a system can be considered parallel depends on its ability to redesign itself in response to changing performance requirements as defined by the system's internal and external customers. This is expressed in this paper as the depth and breadth of product/process improvement capability. The depth is the extent to which the system is capable of product/process improvement at the level of the individual, or the individual team -- such as the degree of employee involvement and empowerment for continuous improvement or the degree to which TQM methodologies have been mastered. The breadth is the extent to which these TQM capabilities have been extended to include all individuals, teams and organizations in the system -- including suppliers, customers, competitors and other organizations such as government and special interest groups (Schonberger 1990).

The Foundations of System-Wide TQM in Japan

The centrality of parallel processing as a core design principle for TQM is evident in the way in which parallel processing co-evolved with TQM in the particular context of postwar Japan (Emery and Trist 1965, Nishiguchi 1987, Porter 1990).

After World War II, with Japan's industry in ruins and many senior members of its military-industrial complex purged, a new generation of industrialists and managers turned to the expertise of the Allied Command, which had its own incentive to upgrade Japan's industrial capability simply to maintain administrative viability. Japanese business and educational leaders saw the need for better quality management across the entire industrial system to strengthen the foundations for export-led economic development. The superior industrial capability of the United States had been amply demonstrated and served as the model for Japanese reconstruction.

The foundations for system-wide TQM were laid as the concepts and techniques of statistical quality control were absorbed and diffused in the period immediately following World War II. In 1945, the first week-long course on quality control was offered by W. G. Magil of the Civil Information Division of the Allied Occupation Forces; its purpose was to improve the communications systems needed by the Occupation Forces (Kondo 1988, p. 35F.2). The Japanese Union of Scientists and Engineers (JUSE) was formed in 1946, providing a nucleus for further quality management education throughout Japanese industry (Kondo 1988). In 1950, JUSE invited W. E. Deming to visit Japan, where he taught in JUSE's 8-day course on quality control and held seminars for senior managers (Deming 1982). The system-wide adoption of TQM expanded beyond the control of variance in manufacturing processes when J. M. Juran visited Japan in 1954 and developed courses for middle and senior managers the following year. By 1961, companies winning the Deming Prize had applied quality management broadly throughout their organizations, including design, manufacturing, inspection, sales, purchasing and administration (Kondo 1988, p. 35F.3). JUSE continued as a research and training vehicle, and the Deming Prize helped to identify companies with progressively more advanced 'best practices' for other firms to emulate or try to surpass. By the mid-1960s, the Deming Prize concept of statistical quality control had expanded to include product design, manufacturing engineering, and supplier relations by applying statistical concepts and methods both inside and outside the company (Kondo 1988, p. 35F.8).

Japanese companies rapidly learned to use the tools borrowed from their U.S. mentors in new organizations and inter-firm systems of their own design. The tools of quality management borrowed from the U.S. did not in themselves require or automatically lead to system-wide TQM. The pattern of use of these tools was profoundly influenced by market and competitive conditions in specific industries. For example, in the automobile industry the Japanese were forced by circumstances to develop their own organization systems based on parallel design principles to overcome the limitations of a small domestic market, a large pool of domestic competitors and constrained capital availability. None of the Japanese car companies could effectively copy the sequential U.S. mass-production model. Instead, to improve quality and lower costs, companies like Toyota began to experiment with and adopt "lean production" (Womack, Jones, and Roos 1990).

While still only a fraction the size of its North American competitors, Toyota managed to match the productivity and product quality of the North American producers (Cusumano 1988). This was accomplished by the mid-1960s using roughly half the U.S. level of invested capital, production man hours, design engineering hours and cycle-time from concept to customer for each new vehicle (Womack, Jones, and Roos 1990). Cost, quality and shorter product life-cycle advantages combined to produce a potent three-pronged competitive weapon -- 'time-based' competition (Bower and Hout 1988, Teresko 1988). Shorter cycle-times came to be seen as an important driver of cost, quality and product differentiation.

If quality management originated in the U.S. and spread to Japan, why didn't U.S. companies develop time-based competitive strategies before the Japanese? The answer proposed here is that Japanese companies implemented TQM using a new organization design principle: parallel processing.

The Information Architecture of Parallel Systems

The information architecture of virtually all man-made technology originates from the principle of sequential information processing; the human brain, by contrast, appears to operate on the principle of parallel processing. Computers, for example, store programs and data in a central memory connected to one or more specialized processors which interpret instructions, one at a time, and process data accordingly. The difference between successive generations of computers, from the first primitive vacuum tube machines to the latest super-computers, lies primarily in the degree to which bottlenecks and sequencing delays can be reduced or avoided. Yet even after all time-related waste has been eliminated, the basic configuration (centralized memory, high-speed processor and sequential operations) falls far short of what is needed to tackle a wide range of complex problems.
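
To make the sequential model concrete, the short sketch below (an illustrative Python fragment, not drawn from the article; the instruction set and program are invented) walks a single processor through a central memory, fetching and executing one instruction at a time:

```python
# Minimal sketch of a von Neumann-style machine: one processor repeatedly
# fetches a single instruction at a time from a central memory.
# Illustrative only; the instruction set and program are invented.

memory = {                      # central memory holds both program and data
    "a": 3, "b": 4, "sum": 0,
    "program": [("LOAD", "a"), ("ADD", "b"), ("STORE", "sum"), ("HALT", None)],
}

accumulator = 0
pc = 0                          # program counter

while True:
    op, arg = memory["program"][pc]   # fetch: one instruction per cycle
    pc += 1
    if op == "LOAD":                  # execute: the single processor does
        accumulator = memory[arg]     # all the work, step by step
    elif op == "ADD":
        accumulator += memory[arg]
    elif op == "STORE":
        memory[arg] = accumulator
    elif op == "HALT":
        break

print(memory["sum"])  # -> 7; every step passed through the same processor
```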

The same sequential design principle, and its limitations, can be seen in human organizations. The separation of the roles of management and labor is a fundamental feature of the modern corporation: managers take responsibility for thinking, while labor's role is defined as following instructions. This separation creates information bottlenecks, which in turn increase costs and introduce a host of quality control problems. Larger and 'more efficient' organizations try to reduce these bottlenecks but can never resolve them, because they are an inherent feature of sequential organization design.

The impact of information bottlenecks can be seen by comparing the information processing capability of a von Neumann computer with that of the human brain. Even the fastest present-day computers take hours, even days, to accomplish pattern recognition tasks which a human being can accomplish almost instantly. The comparatively slow speed of conventional computers is not due to the speed of their individual components. Electronic computers have super-fast processors. The relative slowness of computers is inherent in all sequential information processing systems and is a consequence of separating memory from processing capability. In the 1980s, a new design strategy for computers emerged, derived in part from observations about the information processing of the human brain.

"One might suspect that the reason the computer is slow is that its electronic components are much slower than the biological components of the brain, but this is not the case. A transistor can switch in a few nanoseconds, about a million times faster than the millisecond switching time of a neuron. A more plausible argument is that the brain has more neurons than the computer has transistors, but even this fails to explain the disparity in speed. As near as we can tell, the human brain has about ten to the tenth power neurons, each capable of switching to more than a thousand times a second. So the brain should be capable of about ten to the thirteenth power switching events per second. A modern digital computer, by contrast, may have as many as ten to the ninth power transistors, each capable of switching as often as ten to the ninth power times per second. So the total switching speed should be as high as ten to the eighteenth power events per second, or 10,000 times greater than the brain. Yet we know the reality to be just the reverse. Where did not calculation go wrong?" (Hillis 1985, p. 3)

Computers are slower than brains because their hardware is used inefficiently (1).

"In a large von Neumann computer almost none of its billion or so transistors do any useful processing at any given instant. Almost all the transistors are in the memory section of the machine, and only a few of those memory locations are accessed at any given point in time. The two-part architecture keeps the silicon devoted to processing wonderfully busy, but this is only 2 or 3 percent of the silicon area. The other 97 percent sits idle. At a million dollars per square meter for processed, packaged silicon, this is an expensive resource to waste ... As we build larger computers, the problem becomes even worse ... This inefficiency remains no matter how fast we make the processor because the length of the computation becomes dominated by the time required to move data between the processor and memory. This is called the von Neumann bottleneck. The bigger we build the machines, the worse it gets" (Hillis 1985, p. 5).

The same problem occurs in human organizations, where it is possible that thousands of on-duty employees may not be doing any useful work-related thinking at any given moment. This is not merely because humans are fallible, lazy or easily distracted. Organizations are, in many cases, designed to foreclose on thinking as an option for most employees most of the time.

The alternative is to design systems in which thinking and doing -- memory and processing in the case of the computer -- occur simultaneously in all parts of the system. For computers this means that the von Neumann configuration, with a single, super-powerful processor linked to a massive memory, is replaced by a huge number of relatively small, simple processors, each with its own memory. The information architecture must still have some means of issuing instructions to the processors. In human organizations this coordination takes various forms, such as personal involvement and learning, and empowerment to make decisions as individuals, in teams and as an integral part of the system-wide processes in which their work is embedded.

It is simple to join memory and processing in a computer. The hard part is to develop a strategy for dividing the information processing task at hand. The way in which the problem is decomposed into smaller sub-problems varies according to the nature of the problem to be solved. Machines are now built with hundreds of thousands or even millions of tiny memory/processing cells which, taken together, have computational power many orders of magnitude greater than conventional super-computers. The difficulty lies in harnessing the computational power of parallel processing to address the applications of interest.

The basic strategy of parallelism is to decompose complex problems into sub-problems. A huge number of operations can be performed at the same time by 'sub-contracting' the elements of the problem to an array of integrated memory/processor devices. In any situation where a task can be divided into sub-tasks that are more or less independent of each other, it will be faster to perform these tasks simultaneously. This is how parallel processing achieves quantum improvements in cycle-time. Key design issues include selecting the right number of processors, deciding how to connect the processors, determining the level of sophistication of each processor, and coordinating information flow through the entire network of processors.
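
As a minimal sketch of this decomposition strategy (illustrative Python, using a trivially divisible stand-in task rather than any real design problem), the fragment below splits a problem into independent sub-problems, hands each to its own worker process, and recombines the partial results:

```python
# Minimal sketch of the parallel strategy described above: decompose a
# problem into independent sub-problems, 'sub-contract' each one to a
# separate worker, and run them simultaneously. The task itself
# (summing squares over chunks of a list) is only a stand-in.
from concurrent.futures import ProcessPoolExecutor

def solve_subproblem(chunk):
    """Each worker holds its own data (memory) and does its own processing."""
    return sum(x * x for x in chunk)

def solve_in_parallel(data, n_workers=4):
    # Decomposition step: split the problem into roughly equal sub-problems.
    chunks = [data[i::n_workers] for i in range(n_workers)]
    with ProcessPoolExecutor(max_workers=n_workers) as pool:
        partial_results = pool.map(solve_subproblem, chunks)
    # Recombination step: the only work that cannot be decomposed.
    return sum(partial_results)

if __name__ == "__main__":
    print(solve_in_parallel(list(range(1_000_000))))
```

The only work that resists parallelization here is the initial decomposition and the final recombination -- the software analogue of the coordination role that remains with the system designer.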

In comparison with computer applications of this design principle, human organizations are highly amenable to parallel processing. Given the opportunity, people are capable of utilizing intelligence and creativity at work. The 'algorithms' which drive human organizations can be based on the general-purpose learning competencies of motivated people. This is a source of flexibility and self-organization not available to the computer architect. What is required, at a minimum, is basic mathematical and communication skills, basic experimental learning techniques, a facility for developing and conducting interpersonal relationships and an organization dedicated to team-building from the shop-floor to the executive suite.

System-wide TQM enables human organizations to build the competencies needed for parallel processing within and between organizations. The simultaneous processing capability of individual TQM problem-solving teams (Nishiguchi 1989) is the basic building block. The inherent bottlenecks of sequential processing are resolved by sub-dividing tasks among many smaller and slower processors (teams), rather than assigning the work to a single faster processor (a senior manager acting as problem analyst and decision-maker).

Parallel Processing as a Design Principle in System-Wide TQM

Mass production is inherently a long-cycle process. Where competition is sensitive to time, long lead times result in diseconomies. The beginnings of parallel processing in production systems can be traced to Toyota. One of the major breakthroughs in the Toyota production system came when production lead times were slashed by eliminating changeover delays in the metal stamping process. By running one job while performing the set-up for the next, costs were lowered, product variety was increased, quality was improved and lead times were shortened (Shingo 1989).
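
A rough numerical sketch (all figures invented for illustration) shows why overlapping set-up with running time shortens total lead time:

```python
# Illustrative comparison of lead times with sequential versus overlapped
# ('external') set-up. All numbers are invented for the example.
run_time_per_job = 60      # minutes of actual stamping per job
setup_time = 30            # minutes of changeover between jobs
n_jobs = 10

# Sequential: every changeover stops the line.
sequential_lead_time = n_jobs * run_time_per_job + (n_jobs - 1) * setup_time

# Overlapped: set-up for the next job is performed while the current job
# runs, so changeovers no longer stop the line (the first set-up is
# assumed to be done off-line before the run starts).
overlapped_lead_time = n_jobs * run_time_per_job

print(sequential_lead_time, overlapped_lead_time)   # 870 vs 600 minutes
```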

The lessons learned from small-lot production principles resulted in major strides in overall system performance as quality management was extended on a company-wide basis (Toyoda 1987). Toyota found that the full benefits of small-lot production can only be achieved when TQM includes suppliers. To be fully involved in the TQM process, suppliers must be capable of creating their own product/process technology. Early and extensive supplier participation in product development distributes product/process design responsibility across a wide array of suppliers. This allows design and production problems to be identified as early as possible by suppliers which have special competence in a limited range of technologies (Clark 1989).

When design problems go undetected until later in the development process, there is exponential growth in the cost and time required to make corrections. Late changes necessitate the redesign of larger and larger product/process subsystems and pose the risk of introducing new problems. Delays in finding mistakes during the early stages of the design process compound the tendency for costs and delays to snowball at later stages. A very fast, well-integrated, error-free design process is the ideal. This is most readily achieved when there is a very high degree of parallelism in the supply chain.

Shorter design cycle-times are partially explained by the use of simultaneous product/process engineering within organizations (Fujimoto and Sheriff 1989) and partially by the use of parallel product/process engineering among suppliers (Taylor 1991). Suppliers act together to form an extended design team in which each supplier is responsible for simultaneously creating its own product/process designs and coordinating its design effort with the requirements of its nearest neighbors in a chain of customers. These sets of teams comprise a "massive array" of small-scale, multi-functional "information processors" which simultaneously solve an interlocking set of product/process problems.

TQM enables the development of a supply system in which there is an array of suppliers responsible for all aspects of product and process design, including drawings, prototypes, tooling and procurement decisions for entire product sub-systems. Each supplier's independent ability to simultaneously design its own product and process technology creates opportunities for continuous improvement. Organizing the supply chain in this way allows a radical decomposition of the design and manufacturing process and a dramatic drop in the number of direct manufacturer/supplier contact points. Reduction of the supply base provides for an enormous simplification in communication complexity and allows more intensive interaction with a smaller group of suppliers. When manufacturers buy from TQM system suppliers they concentrate on performance specifications rather than detailed drawings. Using advanced TQM engineering methodologies such as quality function deployment, value analysis and simultaneous engineering, suppliers translate information about the performance requirements of their customers into specific product and process designs.
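
The quantitative core of a quality function deployment exercise can be sketched as a weighted relationship matrix. The sketch below is illustrative only; the customer requirements, importance weights and relationship strengths are invented, not taken from the article:

```python
# Minimal sketch of the QFD 'house of quality' calculation: translate
# weighted customer requirements into priority scores for technical
# characteristics. All requirements, weights and ratings are invented.
customer_requirements = {"quiet ride": 5, "easy to service": 3, "low cost": 4}

# Relationship matrix: how strongly each technical characteristic
# supports each requirement (9 = strong, 3 = moderate, 1 = weak, 0 = none).
relationships = {
    "door seal thickness": {"quiet ride": 9, "easy to service": 0, "low cost": 1},
    "modular fasteners":   {"quiet ride": 0, "easy to service": 9, "low cost": 3},
    "panel gauge":         {"quiet ride": 3, "easy to service": 1, "low cost": 9},
}

# Priority of each technical characteristic = sum of (weight x relationship).
priorities = {
    tech: sum(customer_requirements[req] * strength
              for req, strength in row.items())
    for tech, row in relationships.items()
}

for tech, score in sorted(priorities.items(), key=lambda kv: -kv[1]):
    print(f"{tech}: {score}")
# -> panel gauge: 54, door seal thickness: 49, modular fasteners: 39
```

The priority scores tell the supplier which of its own technical characteristics deserve the most design attention, given the customer's stated performance requirements.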

The design and engineering work of these suppliers does not wait for other steps in the design process to be complete. The entire supply base can proceed at the same time by using the same TQM methodologies. The decomposition of overall product design into performance specifications for major sub-systems allows the manufacturer and its suppliers to work in a parallel process in which all levels in the supply chain simultaneously engineer their own products and processes with minimal hierarchical coordination and control. This eliminates delays and bottlenecks, and brings problems to light while they are still inexpensive and easy to fix. The design process is also stabilized by allowing a mix of products to be designed at the same time. In a production setting this is referred to as mixed loading, where a mix of products is produced on one production line to smooth out and maximize capacity utilization. Mixed loading in the design process reduces the cost of the finished product by smoothing and maximizing utilization of engineering capability throughout the system. Finally, by distributing development tasks to a wide array of sophisticated processors (suppliers), the manufacturer is free to focus on the aspects of product development which cannot be decomposed.
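
Mixed loading can be illustrated with a small scheduling sketch (figures invented): interleaving product types keeps the peak demand on capacity close to the average, rather than the spikes created by working on one product type at a time:

```python
# Illustrative sketch of mixed loading: compare the peak workload when
# products are scheduled in large batches versus interleaved (mixed).
# Workload and demand figures per product type are invented.
workload = {"sedan": 4, "coupe": 2, "wagon": 3}   # engineering hours per unit
demand = {"sedan": 6, "coupe": 6, "wagon": 6}     # units needed over 18 periods

# Batch loading: all sedans, then all coupes, then all wagons.
batch_schedule = [p for p in workload for _ in range(demand[p])]

# Mixed loading: interleave one unit of each type per cycle.
mixed_schedule = [p for _ in range(6) for p in workload]

def peak_load(schedule, window=3):
    """Heaviest workload seen in any rolling window of consecutive periods."""
    loads = [sum(workload[p] for p in schedule[i:i + window])
             for i in range(len(schedule) - window + 1)]
    return max(loads)

print(peak_load(batch_schedule), peak_load(mixed_schedule))  # 12 vs 9
```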

The ability to distribute the design and development process gives manufacturers using system-wide TQM a distinct advantage. By simultaneously engineering the product and process there are innumerable opportunities to simplify, eliminate waste, substitute superior or less costly materials, and develop innovative solutions to diverse and rapidly changing patterns in customer requirements.


System-wide TQM enables parallel processing of information within and between organizations. The distribution of product/process improvement responsibility across a "massive array" of decision making teams within and between organizations provides a parallel information architecture for achieving the shortest possible cycle times in a broad range of activities, most notably under conditions of high complexity (i.e., design and development of a complex product/process system) and high uncertainty (i.e., rapidly changing, variable or difficult to comprehend customer requirements).

Short cycle times are crucial where markets are uncertain, product life-cycles are short and time is a critical determinant of efficient resource use. Through continuous product and process improvement, TQM offers both individual organizations and inter-firm systems the possibility of competing simultaneously on price, service, product quality, product differentiation and rapid response to changing market conditions.

As the level of product complexity increases, so does the importance of parallel processing capability. Complex products require mastery of a rapidly changing combination of technologies. System-wide TQM permits a division of labor in which companies are joined in a parallel processing system, where each company undertakes only a small portion of overall product/process design responsibility. For highly complex products, thousands of organizations are potentially involved at the design stage, incorporating a vast range of technologies which no single organization can master.

In practice, TQM is often implemented through pilot projects and other isolated initiatives, which creates only a low degree of parallelism. In the United States many companies claim to have implemented TQM but have not done so on an organization-wide or system-wide basis. For example, using quality function deployment once or twice to demonstrate that a small number of managers and engineers have mastered the techniques falls far short of the requirements for parallel processing, where the entire supply base would need to be saturated with this capability.

Even in Japan, where system-wide TQM evolved over decades, a high degree of parallelism is most evident in a handful of internationally competitive manufacturing companies and their suppliers. The era of parallel processing, which began in Japan's TQM movement, may now unfold in a race among Europe, Japan and North America. Each context will provide unique opportunities to develop new ways to use parallel processing as a design principle for high-speed organizational systems.


(1) When von Neumann and his colleagues designed the first computers, the processors were made of expensive switching components, such as vacuum tubes, while memories were made of relatively slow and inexpensive components. The appropriate design for the time was a two-part configuration that kept the expensive vacuum tubes as busy as possible. This two-part design, with memory on one side and processing on the other, is called a von Neumann architecture and is the basic design of almost all computers built today. It has been so successful that most computer designers retain it even though the technological reason for the split between memory and processor no longer applies (Hillis 1985).


Abegglen, J. and G. Stalk, The Japanese Corporation. New York: Basic Books, 1985.

Bower, J. L. and T. M. Hout, "Fast-Cycle Capability for Competitive Power," Harvard Business Review. (November-December), 1988, pp. 110-118.

Clark, K. B. "Project Scope and Product Performance: The Effect of Parts Strategy and Supplier Involvement on Product Development." Management Science, (October 1989), pp. 1247-1263.

Cusumano, M. A., "Manufacturing Innovation: Lessons from the Japanese Auto Industry." Sloan Management Review, (Fall 1988), pp. 29-39.

Deming, W. E., Quality, Productivity, and Competitive Position. Cambridge, Mass.: MIT Center for Advanced Engineering Study, 1982.

Emery, F. E. and E. L. Trist, "The Causal Texture of Organizational Environments," Human Relations, (1965), Vol 18, pp. 21-32.

Fujimoto, T. and A. Sheriff, "Consistent Patterns in Automotive Product Strategy, Product Development, and Manufacturing Performance -- Road Map for the 1990s." International Motor Vehicle Program, International Policy Forum, May 1989.

Hillis, D., The Connection Machine. London: MIT Press, 1985.

Juran, J. M. (ed). Juran's Quality Control Handbook. New York: McGraw Hill, 1988.

Kondo, Y., "Quality in Japan," in Juran's Quality Control Handbook, Juran, J. M. and F. M. Gryna (eds), 1988.

Kondo, Y., "JUSE -- A Center for Quality Control in Japan." Quality Progress, (August 1978), pp. 14-15.

Nishiguchi, T., "Strategic Dualism: An Alternative In Industrial Societies," unpublished PhD thesis, University of Oxford, 1989.

Nishiguchi, T., "An Examination of the Japanese "Clustered Control" Model and the Alps Structure." International Motor Vehicle Program, International Policy Forum, May 5, 1987.

Porter, M., The Competitive Advantage of Nations. New York: Free Press, 1990.

Schonberger, R. J., Building a Chain of Customers. New York: Free Press, 1990.

Shingo, S., A Study of the Toyota Production System From An Industrial Engineering Viewpoint. Cambridge, Mass.: Productivity Press, 1989.

Taylor, G., "Parallel Inter-firm Systems: The Quality Movement and Tiered Supply Infrastructures in the Automobile Industry in Japan and North America," Unpublished PhD dissertation, York University, 1991.

Teresko, J., "Speeding the Product Development Cycle," Industry Week. July 18, 1988, pp. 40-42.

Toyoda, E., Toyota, Fifty Years in Motion: An Autobiography by the Chairman. Tokyo: Kodansha Press, 1987.

Trist, E. L., "Referent Organizations and the Development of Inter-Organizational Domains," Human Relations, (1983), Vol 36, pp. 269-284.

Wolf, B. and D. Taylor, "Employee and Supplier Learning in the Canadian Automobile Industry: Implications for Competitiveness," in D. McFetridge (ed.), Foreign Investment, Technology and Economic Growth. Calgary: University of Calgary Press, 1991.

Womack, J. P., D. Jones, and D. Roos, The Machine That Changed the World. New York: Macmillan, 1990.


Dr. Glen Taylor, Assistant Professor, Department of Management and Industrial Relations, College of Business Administration, University of Hawaii at Manoa, Honolulu, HI, U.S.A.