How "rough-cut" analysis smoothes HP's supply chain.

Product-development teams rarely have the time, data, or tools to calculate the supply chain costs of design alternatives. But such costs need to be considered early in the design process if overall product lifecycle costs are to be minimized and excess inventory and lost sales avoided. Hewlett-Packard's "rough cut" approach to "design for supply chain" facilitates that analysis, in the process unlocking millions of dollars of benefits.

It's been said that about 80 percent of a product's cost is decided long before any metal is cut or plastic is molded. In the same way, a large part of a product's supply chain complexity is determined right there on the drawing board.

But before anyone starts pointing fingers at the product developers, let's look at the constraints they face. Customer expectations and competitive pressures have increased across many industries. As a result, R&D groups have less development time to achieve ever-greater product functionality. This is certainly true in the high-tech sector in which Hewlett-Packard (HP) operates. Although most designers are aware of the advantages of, say, component commonality and reuse, supply chain efficiencies are not always uppermost in their minds.

That is because most of the critical design-for-supply chain (DfSC) decisions have to be made in the early phases of the product development lifecycle. We have found that even with the support of seasoned financial analysts, the thousands of HP engineers making daily design decisions normally don't have the time, data, or tools to perform detailed DfSC analyses. As a result, the impact of product design on supply chains is often ignored, and decisions, if they are considered at all, are made using overly simplistic functional metrics such as materials cost and time-to-market. The consequences can be severe: copious excess inventory and lost sales, which for large global companies such as Hewlett-Packard Co. can add up to millions of dollars a year.

"Design for supply chain" is not a new concept to us at HP. We have used elements of DfSC, such as postponement and commonality, in several successful projects. (1) Yet DfSC has often met with resistance from product managers and designers. Our recent work, however, has led us to implement a systematic, repeatable, and broad-based approach to making design decisions and quickly evaluating whether to analyze a decision in more detail. We call this our "rough-cut" analysis technique.

HP is now diffusing this approach across the company. As a result, supply chain costs are now being considered by far more development teams than ever before. Rough-cut is easily understood and accepted by senior management and by nonengineering functions such as marketing and finance. We have found that this approach bridges the gap nicely between purely qualitative assessments and analytical model-building. It has led to less "churn" on decisions--in other words, once a decision is made, it is more likely to stay made. The success to date has confirmed our belief that rough-cut techniques for product-design decisions can help unlock hundreds of millions of dollars of benefits for companies like HP.

The Story of the Square-Hole Racks

When Hewlett-Packard and Compaq Computer merged a few years ago, HP's supply chain managers felt the pain of a relatively minor design difference: HP's mid-range servers used mounting racks with round holes, while Compaq's comparable servers used racks with square holes. As a result, the combined company had to order, stock, and distribute 12 different rail kits for mounting servers to cabinet racks. HP's customers, of course, couldn't care less about the shapes of the holes in the racks as long as they had the right kit needed to install their product. But the rack design had a huge impact on supply chain costs over the product's lifecycle. The eventual decision: create five common rail kits for both families of servers at an expected lifecycle savings of $32 million in reduced material and inventory costs. The customers experienced benefits as well: The same rail kits could be used across their various racks and servers.

The outcome of that seemingly insignificant decision is typical of the gains made possible by our DfSC program. It shows the opportunities that exist to improve commonality among parts and subsystems--particularly for those parts that do not differentiate a product, such as the cooling fans for servers, power supplies for printers, and power cables for PCs. Parts designed for one system can be reused in new designs and can come from preapproved suppliers. (See the sidebar highlighting HP's "six pack" of DfSC benefits.)

We have progressively tackled such opportunities across HP's largest product units--the Imaging and Printing Group, Personal Systems Group, and Enterprise Storage and Servers Group. These efforts have saved more than $100 million since the program's inception. The savings have come primarily from the reduced costs of materials, inventory, and packaging.

SPaM: The Roots of DfSC

HP has had a formal DfSC program for two years now, with author Brian Cargille leading the effort. The roots of the program, however, reach back into the late 1980s, when HP formed an internal consulting team--Strategic Planning and Modeling (SPaM)--and staffed it with industrial engineers and management scientists. SPaM's job was (and is) to support strategic decision making with data-driven analyses. By 1995, the team had completed many projects that helped HP dramatically reduce inventory levels while improving order fulfillment to customers.

In the beginning, SPaM only dealt with product design as an input to the supply chain decision-making process. At that time, engineering designed the product architectures and bills of materials, which were then accepted by the procurement, manufacturing, and distribution groups. Given these fixed constraints, SPaM helped the supply chain organizations set appropriate inventory-stocking policies, reduce lead times, improve production-planning processes, and design efficient and robust supply chain networks.

However, despite these improvements, some distribution centers (DCs) inevitably had too much of some products and not enough of others as most high-volume HP products at the time were "built to stock." The guiding sentiment seemed to be that you "have a hunch and make a bunch." Two major projects, however, helped to quantify the supply chain benefits that strategic planning and modeling could unlock from product-design decisions.

The first project was with the DeskJet business. HP produced finished goods at a factory in Vancouver, Wash., and transported them by land and sea to DCs around the world. As unit volumes in Europe and Asia began to increase, the costs and lost revenues of having the wrong number of localized printers grew ever larger. SPaM realized that the transportation times were too long and the forecasts too uncertain for the factory to ever get it right. The big opportunity was in postponement. Despite the required changes to the product design and manufacturing processes, SPaM was able to show that localizing generic printers at regional DCs resulted in big financial benefits.

The second project was with the LaserJet printer, which was assembled in the United States and shipped to the various regions for localization. But even with postponement, HP faced significant inventory write-offs in North America and lost sales in Europe. The problem was caused by a key sub-assembly sourced from a Japanese supplier. This "engine" included a dedicated power supply (110V for North America or 220V for Europe) that had long planning, manufacturing, and logistics lead times. As a result, the number of products destined for each region had to be forecasted far in advance. In this case, the opportunity was in parts "commonality." SPaM found that in some cases the supply chain benefits of a universal power supply more than offset the additional price charged by the subassembly supplier.

These and other DfSC projects during the early to mid-'90s were typically sponsored by supply chain (SC) organizations directly after a particularly painful product shortage and/or inventory write-down. SPaM helped senior management quantify whether the supply chain benefits of a significant design change would offset the engineering costs. Although R&D owned the decisions creating the supply chain costs, they didn't feel the impacts because R&D was measured against product functionality, material costs, and time-to-market--not downstream inventory, logistics, or warranty costs.

SPaM's neutrality and credibility helped convince general managers not only to allow R&D to make decisions that went against its traditional performance metrics but also in many cases, to give the function credit for acting in the company's best interests overall. The result was to raise awareness of DfSC in the R&D community and to reinforce the need for early involvement of SC and new-product-introduction (NPI) engineers on cross-functional product-development teams. Such experiences allowed HP to capture the fundamental lessons of design for supply chain in a highly regarded series of papers and case studies. (2) But there was a growing feeling that HP still wasn't deriving the maximum benefits from design for supply chain. It was this feeling that set the stage for innovation.

Different Ways of Making Design Decisions

The turning point was a project we worked on with the high-end server division during the summer and fall of 1998. Because of SPaM's reputation in conducting DfSC analyses, the supply chain (SC) and R&D managers jointly sponsored a project to determine the best combination of product designs and supply chain networks for a new family of products scheduled to be launched in several years. SPaM staff were being invited into the world of design decision-making; we were no longer just being asked to correct for a particular design decision that would cause significant supply chain costs.

At the beginning of the project, we spent many long phone conferences trying to identify the decision to be made and the alternatives to be considered. We struggled. Historically, SC had come to us with a very clear decision proposition--"Should InkJet printers be localized at regional DCs?" or "Should LaserJet printers use a universal power supply?" But in the server project, many sequential and interrelated decisions had to be made over the next several months.

As with most SPaM projects, we began by performing a simple analysis that captured some of the major cost drivers. Such a step helps to scope a project and focus subsequent modeling efforts on the most important issues. After much debate among members of the extended team, we decided to focus on several important decisions involving about two dozen alternative product architectures and supply chain networks. Design decisions included, for example, whether to integrate memory modules with processor modules; supply chain decisions, for example, included whether regional sites should configure servers from a preassembled "base box" or manufacture them directly from the various components and subassemblies.

As the SPaM team began the process of modeling, we started to realize just how fluid this decision environment was. Two to three weeks after our scoping meeting, we sent "data collection templates" to the development teams. In our follow-up phone calls we learned that a few smaller decisions had already been made, and several alternatives were no longer being considered. The team also asked if we could consider several new alternatives.

To try to cope with this situation, we built a flexible (albeit complex) model that allowed us to explore various alternatives and to conduct sensitivity analyses on the many assumptions that were still up in the air. During the model-building and data-validation phase, we created a set of approximate analytical techniques to determine whether the output was in the right ballpark. Were the inventory savings from commonality reasonable? Did the lower yield of the alternative memory supplier really have that much impact on the material costs? Was the correct number of assembly machines being calculated given the cycle times and batch setup times?

As we completed the analysis, compiled the results, and documented our recommendations to management toward the end of 1998, we came to a realization. Some recommendations were surprisingly similar to the insights we generated at the beginning of the project from the simple spreadsheet. Many other recommendations could have been determined using the approximate techniques that we developed to validate the model. Although the initiative generated tens of millions of dollars of benefit--and a critical set of decisions still required comprehensive modeling--we now understood the three major ways in which most product-design decisions differ from supply chain decisions in terms of the analysis each requires.

First, in the early phases of product design, there are many simultaneous decisions and numerous feasible options, all of which must be evaluated against marketing, engineering, supply chain, and financial criteria. By contrast, most supply chain strategy projects focus on a single decision and a small number of alternatives. (Think of the decision to build a new factory, for instance.) In this latter environment, it's beneficial to build a comprehensive analytical model to explore the alternatives and generate a robust recommendation. With DfSC, however, the many hundreds of possible design and supply chain alternatives result in a complex model that takes a long time to build (and debug), requires a lot of assumptions and data, and can become a "black box" to decision makers.

Second, most product-design decisions must be made in days or weeks, not months--especially in our high-tech sector. With strategic decisions, however, the tens of millions of dollars at stake (and the risk of being wrong) justify the three to four months required to conduct a comprehensive analysis. By focusing only on the strategic decisions in the server project, we missed countless opportunities to use data-driven analyses to answer DfSC questions that simply passed us by.

Third, most design decisions are made by R&D engineers, not managers. Yet only managers have the authority and budget to sponsor multi-month projects using comprehensive modeling. Although engineers were now supposed to add downstream supply chain costs to their laundry list of things to consider, they didn't have the tools or training to make day-to-day decisions. It's one thing to know that commonality and postponement are good ideas. It is quite another to know whether one common circuit board for subassembly X is better than three unique ones. The certainty of material costs and time-to-market impacts often trumped potential but poorly quantified supply chain benefits.

Based on these lessons, we felt that approximate analytical techniques were just the solution that was needed for this different decision environment. Appropriately used, they would allow us to address many more decisions and know when and where detailed modeling was justified. We knew that the cumulative effects of the many day-to-day design decisions were costing HP millions of dollars a year because supply chain impacts weren't being considered. To make further progress with DfSC, we had to reach a new client base: HP's R&D, NPI, and SC engineers.

"Dfx'd To Death"

Now that we had a more open door to the R&D and NPI engineers, we had to develop effective ways of serving them. Not surprisingly, we learned of a widespread feeling that DfSC methods were not appropriate for their organizations. In fact, we were told that they were being "Dfx'd to death," meaning that there was always some "design for" priority that they were being asked to factor into their work. We also found that they lacked the resources they needed to be successful in terms of DfSC. Although they had tools to quantify material costs down to fractions of a penny, they had nothing with which to identify, say, downstream inventory costs. We had to develop those tools for them.

So the next step in our journey was to take our approximate analytical techniques and extend and formalize them into "rough-cut" methods for design-for-supply-chain. By using rough-cut techniques, we make a conscious trade-off between accuracy and speed. Indeed, it is crucial that people understand the dangers of misapplying rough-cut techniques. If the situation is complex and unfamiliar or the underlying assumptions are not valid, using rough-cut approaches can lead to poor decisions and costly mistakes. When there is a lot at stake and the risk associated with being wrong is high (as identified by the rough-cut analysis), a comprehensive financial model is probably required.

In order to deliver these approaches to the development teams across the company, we put together a one-day workshop that captured our best practices related to product design for supply chain and taught engineers when and how to use the rough-cut techniques. The objective was to provide project teams with the intuition and tools they needed to make appropriate DfSC decisions. We also created a set of Web-based calculators, which we made available on the internal site. (3)

Since that time, we have extended our rough-cut methods to consider more than just inventory and material costs. We have developed a set of methods for commonality that include assessing financial impacts across the entire product lifecycle. Only such a broad consideration of the financial impacts would give R&D and marketing the confidence that DfSC was a holistic approach and not a one-sided evaluation of supply chain costs. Additionally, we communicated examples of when certain DfSC techniques should not be pursued.

A Closer Look at Rough Cut

Approximation techniques by themselves are nothing new. What is new is our formal development and dissemination of such techniques for use in a DfSC context.

When we began to see the real value of rough-cut analyses, we sought out the skilled supply chain and NPI analysts whom business-unit leaders recognized as having successfully facilitated DfSC decisions. We began to work with them to discover their techniques and processes. We wanted to refine and validate a set of standard techniques that would work broadly across HP in support of better decision making--particularly about commonality issues. Instead of seeking the perfect answer, those successful analysts used their experience in similar circumstances to create approximate methods for making rapid recommendations to management. We distilled their wisdom down to five guiding principles for DfSC decision making:

1. Focus on the factors affected by commonality by using the relative cost differences between the feasible design options rather than absolute cost differences. It is often very difficult to calculate absolute cost differences correctly.

2. Instead of getting "too analytical too quickly," begin by identifying the qualitative pros and cons. This will help quickly prioritize the important costs for further analysis.

3. Make simplifying assumptions whenever possible. Ask whether a particular factor would affect the decision (but always capture assumptions so that they can be validated later).

4. Evaluate the upper and lower bounds on costs, and perform sensitivity analyses. This will help ensure that recommendations are robust.

5. Build a detailed financial model when three conditions are met: the decision is too close to call, a wrong decision among the remaining options could be costly, and the most significant open issues relate to quantitative (versus qualitative) factors.

Let's put these guidelines in terms of the two alternative decision processes illustrated in Exhibit 1.


Path A describes a design-for-supply-chain project where several analysts begin by building a comprehensive financial model that is subsequently used to evaluate design alternatives. If done correctly (including appropriate data gathering, model validation, and sensitivity analysis), this path can result in an excellent decision and a significant return on investment. But the numerous design decisions, alternatives, and uncertainties of this approach can slow down projects and try the patience of participants and sponsors alike. More importantly, many more design decisions are not evaluated quantitatively at all.

In just about every conceivable DfSC situation, path B should replace path A. The key feature: The team first performs a rough-cut analysis to understand which costs are most significant. In some cases, a decision becomes clear after the first-pass analysis (top branch). In other cases, more detailed modeling is required and justified (bottom branch). (4)

Although SPaM has used the path-B approach for years, we have observed that many less successful analysts jump right into detailed modeling. In some cases, "top branch" path-B projects can achieve benefits comparable to path A in one-fifth the time. Even "bottom branch" path-B projects are usually completed more rapidly than those taking path A. The time advantage is the result of subsequent financial models focusing only on the cost factors that are truly important--and ignoring everything else. In addition, the transparency and simplicity of the rough-cut analysis encourages the decision maker and the extended team to build up their intuitive "muscles" more quickly.

Cost Drivers for Commonality

While working with the analysts, we reviewed nearly 20 projects that were seen as successful. For each, we examined the decision made, the alternatives considered, and the analytical/modeling approach employed. As expected, we found that some costs were more important than others in driving commonality decisions.

We further validated the importance of each cost driver based on our experiences, considering the likely magnitude of the differences between the common and unique alternatives. In other words, if a particular cost was similar across common and unique options, we prioritized that cost category as "low" even if it was a large absolute cost.

As the first guideline described above suggests, gathering the data and modeling these costs is not always worthwhile. For example, freight and packaging costs, while significant for most supply chains, are rarely important for commonality decisions because common and unique parts are usually about the same size, weight, and fragility.

Exhibit 2 summarizes the prioritized prelaunch, production, and end-of-life costs for commonality decisions. As mentioned above in the second guideline, we always recommend that development teams evaluate each of these costs qualitatively to ensure that these priorities remain true for their particular decision.

After documenting the advantages and disadvantages of commonality and reuse for each cost driver, we generated a set of rough-cut techniques. Let's follow one example of the method for service parts inventory as it applies in HP's industry.

Costing of Service Parts Inventory

When contracts with customers include short time windows for replacing failed parts, spares inventory must be stocked at field locations close to customer sites. However, because most products are reliable, some parts at a particular location may never be required. Commonality often enables dramatic reductions in both inventory levels and management complexity at field locations. The rough-cut technique goes through three steps:

Step 1: Estimate the Number of Field Parts

If the installed base of products is proportional to the world's population, achieving a four-hour (or less) replacement window requires about 500 field locations worldwide (there are about 400 metropolitan areas with populations greater than 1 million people). If products are concentrated in certain regions or replacement commitments are less stringent (for example, next day), considerably fewer locations are required.

To calculate the annual inventory cost, three pieces of data are required:

1. Number of field locations--the number of locations required to meet the most stringent service requirements. This data point is available from the support organization.

2. Inventory value (per unit)--the inventory value of each part, often the material cost. The data are available from procurement or R&D.

3. Inventory holding cost (percent)--the annual cost of carrying inventory as a percentage of inventory value, including financing, devaluation, storage, and scrap. Figures are available from finance or supply chain.

Although the amount of inventory at each field location must be estimated, it usually is not necessary to collect data on installed base, expected failure rates, or restocking lead times. In most cases, one unit provides sufficient inventory at each field location. So if one common part could be used instead of three unique parts, field inventory could be reduced by 67 percent (by going from three units per location to one).

The annual inventory cost of each common and unique alternative is calculated by multiplying together the total amount of inventory, the inventory value, and the inventory holding cost (usually between 10 percent and 40 percent). For this example, the savings from buying one part instead of three (in other words, a savings of two parts) would equal $10,000. This figure is based on a calculation of two parts per location for 250 locations at an inventory value of $100 per unit and a holding cost of 20 percent. (5)
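The Step 1 arithmetic can be sketched in a few lines. This is a minimal illustration using the article's own figures (250 locations, $100 per unit, 20 percent holding cost); the function and variable names are our own, not HP's.

```python
# Rough-cut Step 1: annual cost of carrying service-parts inventory in the field.
def annual_inventory_cost(units_per_location, locations, unit_value, holding_rate):
    """Annual carrying cost = total units x unit value x holding-cost rate."""
    return units_per_location * locations * unit_value * holding_rate

# Three unique parts (one unit of each per location) vs. one common part.
unique_cost = annual_inventory_cost(3, 250, 100.0, 0.20)  # $15,000 per year
common_cost = annual_inventory_cost(1, 250, 100.0, 0.20)  # $5,000 per year
savings = unique_cost - common_cost                       # $10,000 per year
```

The $10,000 savings matches the article's figure of two avoided parts per location across 250 locations.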

Step 2: Validate Field-Stocking Levels

The demand for service parts at field locations is usually small but highly unpredictable. To validate the stocking requirements, three additional pieces of data are required:

1. Installed base (units), or the number of parts at customer sites. The data are available from marketing and/or R&D and are based on part-to-product connect rates--or the number of parts per product.

2. Annual repair rate for parts (percent per year). The likelihood that a part must be repaired based on a combination of failure rates and "no-trouble-found" rates. Data are available from support.

3. Restocking lead time (days). Lead time from the regional DC or supplier to the field stockroom. Figures available from support.

The equation below can be used to estimate the daily demand (D) for service parts at each location (I is installed base, R is annual repair rate, and F is the number of field locations).

D = IR / (365F)

Imagine that a common part is installed in 100,000 products, has an annual repair rate of 1 percent, and requires four days to restock. Assuming that the installed base is evenly served by 250 field locations (that is, 400 installed-base products per location), the average demand per location is only 0.01 units per day (four units per year). Even if this part were to be required for a service call, it's unlikely that another unit would be needed while inventory is being restocked--average demand over the four-day lead time is only 0.04 units. Similarly, for the "unique parts" scenario, we could calculate demand over the lead time. If three unique parts instead of one common part are used, the demand over the lead time for each part would be some percentage of 0.04 units (depending on the usage split of the three parts across the installed base). For example, if the first unique part is used in 60,000 products, demand over the lead time would be 60 percent of 0.04 units.
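The demand equation and the worked example above can be expressed directly. Again, the figures are the article's (100,000 installed units, a 1 percent annual repair rate, 250 locations, a four-day restocking lead time); the function name is ours.

```python
# Rough-cut Step 2: average daily service-part demand per field location,
# D = IR / (365F).
def daily_demand_per_location(installed_base, annual_repair_rate, field_locations):
    return installed_base * annual_repair_rate / (365 * field_locations)

d = daily_demand_per_location(100_000, 0.01, 250)  # about 0.011 units per day
demand_over_lead_time = d * 4                      # about 0.04 units
```

Scaling the result by the lead time gives the expected demand over the restocking window, the quantity checked against the stocking-guideline table.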

Assuming a service level objective of 95 percent or better, the table below shows inventory guidelines based on the expected demand over the lead time (D times L, where L is lead time). (6) For our example, the assumption of one inventory unit per location seems reasonable, giving us confidence in our estimate of the inventory savings of commonality calculated above.

Step 3: Verify Assumptions and Perform Sensitivity Analyses

Because it is possible that demand across the field locations is uneven, calculating inventory units based on "average demand" (per the equation for D above) may underestimate the actual requirements for the "high demand" locations. For example, if some field locations serve areas with particularly large installed bases or if the failure rates are not uniform, analysts may wish to further refine the assumptions and calculations. In performing a sensitivity analysis, we find that a field location serving more than about 450 installed products (vs. the 400 assumed above) probably requires two inventory units, while a location serving more than about 2,700 installed products probably requires three units. (7) To increase the confidence of the team and decision maker regarding the final recommendations, we could perform similar sensitivity analyses on repair rates and lead times and recalculate the potential savings of commonality.
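One standard way to sanity-check the "one unit per location" assumption is to treat service calls as Poisson arrivals and compute the chance that a location sees a second call before restocking completes. The Poisson assumption is ours (the article's guideline table in the footnoted exhibit is not reproduced here), so this is a sketch, not HP's exact method.

```python
import math

def prob_extra_demand(installed_base, annual_repair_rate, lead_time_days,
                      field_locations):
    """Probability, under a Poisson-arrivals assumption, that a field location
    sees at least one more service call during the restocking lead time."""
    lam = (installed_base * annual_repair_rate * lead_time_days
           / (365 * field_locations))
    return 1 - math.exp(-lam)

# Base case from the article: 400 installed products per location, 1 percent
# annual repair rate, four-day restocking lead time.
p = prob_extra_demand(100_000, 0.01, 4, 250)  # about 0.043
```

Because the probability of a second call during the four-day window is under 5 percent, a single on-hand unit per location covers better than 95 percent of cases, consistent with the stocking assumption used in Step 1. Rerunning the function with larger per-location installed bases or longer lead times is the sensitivity analysis the step describes.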

After presenting the qualitative and quantitative analyses to the decision maker and extended team (which usually includes key representatives from R&D, procurement, supply chain, and marketing), one of three things usually happens: a decision is made; a tentative decision is made, pending follow-up on some questions and the recalculation of a few numbers; or the decision is "too close to call" based on the rough-cut analysis, but making the right choice is important to the organization. In the last case, the decision maker would sponsor a subsequent project to model the financials in more detail.

Buy-in Across the Organization

We are a long way from those early days when a rough-cut analysis was applied randomly--if at all. Today, we have significant buy-in from more people in the design community. We've consciously consulted with our internal clients all along, so we now have the right to push the program further and faster. We believe that a combination of consulting projects, custom workshops, Web-based training, rough-cut tools, and telephone and e-mail mentoring will allow us to effectively serve HP's thousands of R&D, NPI, and SC engineers.

Many of the NPI and R&D engineers we now work with say it's the first time they've had robust and rapid methods that are consistent with how they make their design decisions. As a result, supply chain costs are being considered by far more development teams today. Standardizing on a pan-HP approach has brought credibility to the methods, leading to much faster decision making and increased alignment across functions.

We'll close with one example of the value that rough cut is bringing. Over the 2004/2005 winter holidays, HP's industry-standard server group was given less than four weeks to make several commonality decisions. Although the two NPI program managers were initially overwhelmed, they successfully used the commonality framework and rough-cut techniques to perform an analysis. With some mentoring from the DfSC Program Manager for Enterprise Storage and Servers, the team identified almost $2 million in commonality savings and was able to tell which parts should remain unique.

The team also identified an opportunity for further modeling: Common packaging could provide additional savings but would require a common chassis and bezel (the product's external case) at an additional cost. The dollars at stake were significant, and the correct answer wasn't clear. The management and development teams were very satisfied that the rough-cut analysis was completed so quickly and that it was so comprehensive.

Perhaps most impressive, however, was the relative inexperience of the analysis team. It was the first time that the NPI managers had been exposed to that level of trade-off analysis. The fact that they were able to make credible recommendations in such little time speaks to the power of this type of rough-cut approach.

Authors' note: The authors wish to acknowledge the contributions of Mike Meyers and Scott Ellis. Mike Meyers, DfSC Program Manager for Enterprise Storage and Servers (ESS) Group, sponsored the development of the rough-cut techniques at ESS, helped to extend them, and is driving their implementation. Scott Ellis, Director of HP Strategic Planning and Modeling, leads the SPaM team that created the fundamental approaches which led to our rough-cut techniques, is a key sponsor of HP's DfSC Program, and provided thoughtful improvements to this paper.

HP's Six Pack of Design-for-Supply-Chain Elements

1) Control Variety: Companies can trade off supply chain costs and lost sales to determine which product variants are justified in terms of margins, brand equity, and/or channel requirements. Example: HP's business PC organization reduced inventory by 42 percent while increasing product availability by moving from 107 modules and 95 options to 55 modules and 49 options.

2) Enhance Logistics: Companies can compare distribution costs with design and material costs. Example: Reducing the physical size of an inkjet printer by 45 percent saved more than $1 per unit.

3) Assess Part Commonality and Reuse: Evaluate the use of unique parts vs. common, reused, or industry-standard parts. While unique parts can enable product distinctiveness, common parts often reduce inventory costs. Furthermore, reused and industry-standard parts frequently accelerate time-to-market. Example: HP's server business saved $32 million in lifecycle costs by moving from 12 to five "rail kits" for mounting servers on racks.

4) Evaluate Postponement Options: Determine whether it is worthwhile to design products and manufacturing processes to delay the point of differentiation/customization until end-customer demand is better known. Example: A new product-customization process for LaserJet printers in Europe has achieved a fill rate of more than 98 percent with less than two weeks' supply of finished-goods inventory.

5) Design for Tax Reduction: Decide where to source parts and assemble products. Taxes and duties for components, subassemblies, and products will be different based on the country of origin. Example: A printer's networking functionality is engineered onto a removable card built in a low-tax location, saving more than $10 million.

6) Design for Takeback: Consider product and packaging changes to reduce reverse supply chain costs. Example: A design change has increased the recycling of inkjet supplies by 25 percent.


(1) See, for example: Edward Feitzinger and Hau L. Lee, "Mass Customization at Hewlett-Packard: The Power of Postponement," Harvard Business Review, January-February 1997: pp. 116-121.

(2) See, for example: Hau L. Lee and Corey Billington, "Managing Supply Chain Inventory: Pitfalls and Opportunities," Sloan Management Review, Spring 1992: pp. 65-73. Tom Davis, "Effective Supply Chain Management," Sloan Management Review, Summer 1993: pp. 35-46. Laura Kopczak and Hau L. Lee, "Hewlett-Packard Desk Jet Printer Supply Chain (A) and (B)," Stanford Teaching Case, 1996. Edward Feitzinger and Hau L. Lee, "Mass Customization at Hewlett-Packard: The Power of Postponement," Harvard Business Review, January-February 1997: pp. 116-121. Hau L. Lee, "Product Universality: The HP Network Printer Case," Stanford Teaching Case, 1999.

(3) Brian Cargille and Robert Bliss, "How Supply Chain Analysis Enhances Product Design," Supply Chain Management Review, Sept/Oct 2001: pp. 64-74.

(4) After building several financial models for some types of design decisions (such as packaging trade-offs for "logistics enhancement"), we may find a recurrence of the same parameters and calculations. In these cases, we create a reusable tool to help analysts conduct both rough-cut analyses and detailed modeling.

(5) The examples used in this article are for illustrative purposes only. The numbers were selected for clarity and to simplify the mathematics and do not reflect actual data for HP products.

(6) To model service levels for spare parts, "Poisson arrivals" are commonly assumed. Under this statistical model, failures are independent of one another, and the time of the last failure has no bearing on the next. This is a good assumption for the long, flat portion of the reliability "bathtub curve," so named because a plot of failures versus time often traces out a bathtub shape: mechanical and electrical products typically experience higher failure rates early and late in the product life, but lower failure rates in between.

(7) As demand grows, the assumption of Poisson arrivals becomes invalid. In that case, the analyst should use inventory calculations based on demand, forecast error, and lead time.
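
The Poisson model in footnotes 6 and 7 can be sketched in a few lines. The article does not spell out its exact service-level convention, so the code below assumes one plausible convention--stock enough spares that lead-time demand stays strictly below the stock level with the target probability--which happens to reproduce the bands in the spare-parts table.

```python
import math

def poisson_cdf(k, lam):
    """P(X <= k) for X ~ Poisson(lam)."""
    return sum(math.exp(-lam) * lam ** i / math.factorial(i)
               for i in range(k + 1))

def spares_needed(lead_time_demand, service_level=0.95):
    """Smallest stock s such that lead-time demand stays strictly
    below s with probability >= service_level.  (The convention is
    an assumption; it matches the table's bands.)"""
    s = 1
    while poisson_cdf(s - 1, lead_time_demand) < service_level:
        s += 1
    return s

# Illustrative demand values, one per band in the table:
for lam in (0.05, 0.2, 0.5):
    print(lam, "units ->", spares_needed(lam), "spares")
```

For very low demand the recommended stock is dominated by the service-level target rather than by average usage, which is why a part consumed only 0.05 times per lead time still requires a full unit on hand--exactly the effect the table highlights.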

Jason Amaral is founder and managing director of Emeraldwise LLC. Brian Cargille is DfSC Program Manager at Hewlett-Packard Company.

Prioritized Cost Drivers for Commonality Decisions

                     Product Lifecycle Phase

                Prelaunch              Production         End of Life

High     Time-to-market           Material              Service Parts
         Design and Nonrecurring  Inventory               Inventory

Medium   Tooling                  Assembly, Test,       Warranty &
         Prototype                  Rework                Service Events
         Qualification            Expediting            Obsolescence

Low      Part Number and          Freight & Packaging   Take Back &
           Supplier Management    Tax, Duty, Royalties    Environmental

Spare Parts Inventory Needed for
Service Levels of 95 Percent or More

                              Inventory to achieve
Demand over the lead time     >= 95% service level

0.05 units or less                   1 unit
Between 0.05 and 0.3 units           2 units
Between 0.3 and 0.8 units            3 units

Supply Chain Management Review, September 1, 2005.
COPYRIGHT 2005 Peerless Media, LLC. No portion of this article can be reproduced without the express written permission from the copyright holder.