New Product Forecasting Horizons and Accuracy


Products created from new technologies present exciting possibilities to the marketing manager, but many forecasts of market size and growth for these products have historically been overstated, sometimes very badly. Planning decisions based on such forecasts risk disastrous consequences. A recent article by the authors examined some of the reasons behind this optimism and offered an empirically derived deflator to reduce the size of a forecast when the decision maker has cause to believe it is overstated for one or more reasons[21].

However, a single deflator does not represent the most precise approach to the problem, because the time frame of the forecast must be considered: a one year projection should be viewed differently from a five year estimate. This article examines a few of the major practical factors responsible for overly optimistic forecasts and offers a preliminary look at an empirically derived, time sensitive forecast deflator. Superconductivity is then discussed as an example of a technology where deflated forecasts might be appropriate.

Overly Optimistic Factors

Major reasons for over-optimistic forecasts presented in the earlier article were[21]:

* Incorrect use of forecasting models or techniques
* Failure to understand the forces driving the market being forecasted
* Biases introduced when using forecasting techniques, including the possibility there's "something in it" for the forecaster


Use of Models

A forecasting technique, or model, presents a mathematical abstraction of reality in a form more easily manipulated than reality itself. The better the correspondence between the model and reality, the more legitimate its use and conclusions[13].

Some forecasting models may be applied to a wide variety of situations. They work by discovering underlying mathematical patterns in historic data and then extending them into the future. Regression and time series analysis are examples of two commonly used techniques whose application depends largely upon the characteristics of the data, not upon any intrinsic correspondence to the dynamics of a situation.

Other models are specific to a particular industry. New York Life Insurance Company can forecast sales with a model incorporating their particular productivity levels, agent retention rates, hiring practices, and training budgets. Massachusetts Mutual Life Insurance Company could use the same model; only the numbers would be different in their case. Here then, the abstraction is good and the information specific, but the model is only useful to insurance companies.

Finally, there are literally thousands of models created for particular situations that are not easily generalized. These are ad hoc, unpublished, and of unknown accuracy. IBM's forecasting models cannot be used by anyone else in the computer industry, because no other firm has exactly the same mix of products and clients.

There is a continuum of models, ranging from those that provide much detail for specific situations to those that provide little detail but fit all situations. All of these focus on a target variable, a time series, and a serious problem follows from their implied logic. Since they forecast by extending the mathematical evaluation of a time series, the question is, "What made the historic data go up or down?" If the reasons aren't known, then the causes are assumed to operate in exactly the same manner during the forecast. Based on the evidence, however, this has not usually been the case.

During the 1970s, a number of "high tech" companies achieved fantastic growth rates. Prime Computer's sales, for instance, went from $1.8 million in 1973 to $267.6 million in 1980. The stock market reacted strongly as Prime went from 1 5/8 in 1977 to 32 1/8 in 1981, with several splits. In 1981, however, growth did not meet market expectations or stock analysts' forecasts, and negative recommendations led to a decline from 32 1/8 to 11 1/2 that year. Why were the forecasts not met? Inflation declined significantly, but it had been "built into" the time series used. The assumption that it would continue had consequences for forecast accuracy and also for the company and its shareholders. Constant dollar revenue, on the other hand, showed a steady increase throughout the period.

Step one in evaluating a forecast, then, is to know exactly what model was used, what data went into it and what assumptions were made.

Market Forces

One must also know the market. To say so is trite, yet many seem to believe that mathematically complex models are a substitute for market knowledge. The forecaster who understands the market's needs will do better than the one who doesn't. Forecasters who understand the customer understand the reasons for choosing one from among many complex new products; complete reliance on mathematics leads to a complete failure to understand the customer. Most of the enthusiastic personal computer forecasts of the early 1980s were based on interviews with manufacturers and vendors of PCs, a seemingly knowledgeable, though hardly unbiased, source[1]. A huge potential market was identified; unfortunately, the market's intention to purchase was ignored. No one conducted research among potential users to determine whether they would actually buy the product.

Similarly, forecasters and consultants in the television industry once predicted 20 to 25 major cable TV services, such as Home Box Office, with customers paying $100 per month for subscriptions to a half dozen or more services[8]. Companies that put satellites into orbit to meet forecasted demand sustained major losses as that demand failed to materialize[12]. Forecasts of consumer acceptance of cable systems with over a hundred channels were based not on realistic assessments of consumer desires but on wishful thinking[7].

Primary market research, as opposed to market watching, can eliminate some difficulties. Focus groups permit relatively thorough understanding of the needs, views, and attitudes of target markets. Careful use of surveys, questionnaires, and interviews sharpens information acquired from focus groups. Such market driven methodology is useful within the industry in that it helps establish demand before pointing to technology that will satisfy it.

Forecasts based on a number of surveys of end users are usually more accurate, though more expensive, than a single estimate. By averaging several, one can incorporate divergent information from various sources[9, 10, 11, 16]. Many remember the University of Michigan's now classic consumer survey of 1,000 households, which produced a better prediction of inflation than that of 50 professional economic forecasters.
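As a minimal sketch of the combination idea, several independent estimates of the same quantity can be pooled with a simple average (the four figures below are hypothetical, not data from this study):

```python
# Combine several independent forecasts of the same quantity by
# simple averaging. The four estimates are hypothetical ($ millions),
# e.g. four separate end-user surveys.
forecasts = [120.0, 95.0, 140.0, 110.0]

combined = sum(forecasts) / len(forecasts)
print(combined)  # arithmetic mean: 116.25
```

Weighted averages are also used in the forecast-combination literature[11, 16]; the equal-weight mean is merely the simplest case.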

One of the most important and complex variables in forecasting markets for new high technology products is what is called consumer anxiety. Potential consumers can be deterred from buying complicated products by the stress which results from having too many expensive, sophisticated high tech alternatives from which to choose, and the cognitive dissonance which may result if such products do not perform as anticipated. In 1984, Time carried an article entitled "Bothered and Bewildered," about the "trauma of shopping for a microcomputer"[2]. According to the article, consumers were afraid to buy computers because of apparent turmoil among manufacturers, fear of obsolescence, technological complexity, too many brands that were too similar, and concerns about software incompatibility. Forecasters ignored consumer anxiety and predicted record sales.

Business customers were also frustrated by an excess of choices in the PC marketplace. This reaction was labeled "computer shock" in a Business Week article which warned that unless something was done to alleviate the difficulty of choosing among an overwhelming number of brands, "decision paralysis" would occur[4]. Forecasters failed to heed that warning as well.

Step two in evaluating a forecast, then, is to realize that numbers have implications for client behavior. Forecasts that require large changes in the way people do things are always to be suspected.


An accurate forecast is a mixture of skill and luck. Selecting the right model with the right specificity and getting as much information as possible about the market are just two parts of the problem. Since forecasts are, by definition, a look into the future, one must say something about uncertainty. Uncertainty means we don't know exactly what is going to happen, but we should have some idea of what could happen and its probability of occurring. Unknowns such as end user requirements, product acceptance, market penetration, competitive reactions, and a host of other factors must be considered.

Psychological research has shown that people with strong belief in their abilities avoid evidence that contradicts their viewpoint and emphasize that which supports it[6, 15, 20]. Similarly, if someone stands to gain by having people believe an optimistic forecast, then, consciously or not, the likelihood that huge markets exist is going to be exaggerated. So when a forecaster discusses uncertainty, it must be kept in mind that selective perception is the rule. The consequence is that any estimated parameters are suspect. If a forecast says "Chances of product acceptance are good" or "Consumer demand will skyrocket," one must simply realize that these are partly opinions and deflate them accordingly.

How do we identify the presence of selective perception? We probably cannot, but we know that people favor information that supports them and ignore that which doesn't. Many forecasts are made by people who have something to gain if their scenarios are accepted: stock market analysts, product planners, marketing consultants, market research staff, and so on. No one need deliberately distort expected performance when a forecast is in their best interest; selective perception alone makes it difficult for them to evaluate the available information objectively.

The third step in evaluating a forecast, then, is to consider the extent to which the forecaster is better or worse off if you believe the forecast. If there is something in it for them, then obviously deflate the predictions.

The Deflator Function

In this preliminary look at forecasting over time, we were interested in comparing actual (A) with forecasted (F) values of sales for one to five years out. The measure we are evaluating is the ratio A/F. Other measures have been used to evaluate forecasting accuracy, but this seems the most appropriate because most forecasts are generated by the mathematically sophisticated for the mathematically unsophisticated[10, 14].
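As a simple illustration of the measure (the figures below are hypothetical, not the study's data):

```python
# Accuracy measure: the ratio of actual (A) to forecasted (F) sales.
# A ratio below 1.0 means the forecast overstated the market.
actual = 82.0     # A: realized sales (hypothetical, $ millions)
forecast = 100.0  # F: forecasted sales (hypothetical, $ millions)

ratio = actual / forecast
print(ratio)  # 0.82 -- the forecast overstated sales by roughly 22% (1/0.82)
```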

Subjective biases will continue to operate in forecasting as long as there is uncertainty, which results from insufficient past experience. While better forecasting methods help reduce uncertainty, technology, market competition, regulations, and consumer preferences seem to change faster than managers accept the techniques. New product forecasting has been made more explicit by various models that allow the factoring in of subjective probabilities of deviation of actual sales from the expected or most likely value[19].

The products chosen for this preliminary study were personal computers, artificial intelligence, and fiber optics. Our sources were primarily on-line computer literature searches and the popular business press, such as The Wall Street Journal, Fortune, and Barron's. Results are shown in Table 1[3, 17, 18, 22].

Table 1: Average Ratios of Actual to Forecasted Sales, by Years Forecasted Ahead

                           1     2     3     4     5
Personal computers        .63   .58   .52   .43   .41
Artificial intelligence   .95   .77   .61   .56   .48
Fiber optics              .78   .46   .39   .38   .34
Overall                   .79   .60   .51   .46   .41


It is clear that the further out in time the forecast goes, the poorer the accuracy. That is not surprising; what is surprising is how bad the performance was for just one year ahead: actual sales averaged only 79 percent of the forecasted values. One year forecasts were thus about one-quarter higher than actual figures, and five year forecasts about two and a half times higher.

The question is: why are the forecasts so far off? Were these errors caused by the wrong model, poor market knowledge, or bias? First year error is probably not caused by the wrong model, and market knowledge would have to be very poor to miss estimated size by more than one-quarter. So it seems the majority of first year error was caused by bias. Second through fifth year forecasts, however, could be in error for any of the three reasons. If the wrong model were chosen, it would show up here, as would faulty market information. Bias, of course, can show up anytime.
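The "Overall" row of Table 1 can be read as a horizon-sensitive deflator. A minimal sketch, assuming a lookup by years ahead (the function name and the $100 million figure are our own illustration, not the article's):

```python
# Horizon-sensitive deflator built from Table 1's "Overall" row:
# the average ratio of actual to forecasted sales, by years ahead.
DEFLATORS = {1: 0.79, 2: 0.60, 3: 0.51, 4: 0.46, 5: 0.41}

def deflate(forecast, years_ahead):
    """Scale a sales forecast by the average actual-to-forecast ratio."""
    return forecast * DEFLATORS[years_ahead]

# A hypothetical $100M forecast, one year out and five years out.
print(deflate(100.0, 1))  # deflates to roughly $79M
print(deflate(100.0, 5))  # deflates to roughly $41M
```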

Superconductors and Forecasting

A recent example of a technology expected to create huge new markets is superconductivity. As was the case with the three products examined in Table 1, this scientific breakthrough has created excitement. High temperature ceramic superconductor devices carry electricity without resistance or power loss, the ultimate "free lunch," and most experts consider them the basis of a third age of electronics. Magnetically levitated trains, unmanned satellites, power generation and storage, superior medical imaging, ship propulsion, and super quiet submarines are just a few of the grandiose applications currently being discussed[5].

To say that excitement surrounds this technology is a gross understatement. A recent typical forecast, for example, estimated that consumer product sales would go from zero in 1988 to $2.3 billion in 1992[18].

Electronic Predictions

Dialog Information Services carries an electronic scoreboard of forecasts called "Predicasts - PTS US Forecasts," a compilation of published forecasts for superconductors[18]. Its availability and easy access highlight a final problem inherent in forecasting: definition of the market. Table 2 presents some of the many superconductor forecasts available. [Tabular Data Omitted]

Most recent actual sales should be the starting point for evaluating forecasts of superconductor usage. It would seem to be no problem to forecast the dollar value or the number of units manufactured and sold, especially for such a currently popular technology. But such is not the case.

Some of the 1986 actual sales figures for superconductor materials are shown in Table 3. This compilation highlights the failure to define exactly what is being measured. If we cannot agree on what happened last year with known products and materials, then we cannot hope to predict what will happen in the future. The raw ingredients or components used in manufacturing need to be differentiated from the final product, and the base needs to be adequately defined. Forecasters who start from different beginning points will always end up with different numbers.

There is no shortage of superconductor market forecasts. Given the availability of electronic data bases, anyone can instantly turn out a forecast on any superconductor topic. Are they accurate? If not, corporate and personal decision making based upon them will be badly treated by reality. The same psychology that makes these forecasts so exhilarating sometimes makes them grossly biased. There is risk in allowing forecasts to dictate a firm's business strategy or a nation's long range goals. Decision makers need to think critically and objectively about the hundreds of forecasts floating about. Managerial judgments are subject to biases, especially in areas where managers lack experience, such as forecasting the future, next year, or tomorrow.

Table 3: Reported 1986 Superconductor Sales

                                                    $ Millions
1. Oxford Instruments 1986 sales of
   superconducting magnets                             $100
2. World sales (American Metal Market, 9/87)           $155
3. Aircraft components (Chemical Week, 8/87)           $200
4. Worldwide market (New York Times, 1/88)             $250
5. Materials spending (Strategic Analysis,
   Reading, MA, 1/88)                                  $290
6. NMR equipment sales (Clinica, 9/87)                 $612

Management Strategy

The prudent manager will think deeply about the source of a new product sales forecast. Our investigation focused on average performance; therefore, any individual one year estimate should be evaluated vis-a-vis long range forecasts and bias. We found that some one year forecasts were quite accurate, but since actual sales averaged only 79 percent of forecasted values, some first year estimates were wildly optimistic. Any next-year forecast of personal computers, artificial intelligence, or fiber optics entering the planning process should be multiplied by 0.79 unless there is ample reason to suspect it may be better or worse than the average. (This discussion ignores the biases of the person reviewing the forecast, but to consider that possibility compounds the problem enormously.)

The problem of what to do with a five year forecast is less straightforward. Since it is not clear that the problem is bias, the door is actually opened wider to personal prejudices. Consider that choosing a forecasting model and using market information have both objective and subjective components. If the forecaster is biased, the subjective components will lean toward the desired end, be it optimistic or pessimistic, and the first year effect becomes more pronounced. The best thing the manager can do is deflate a five year forecast to 41 percent of its stated value (multiplying by the average ratio of actual to forecasted sales), and be thankful that most planning decisions concerning personal computers, artificial intelligence, and fiber optics are made one year at a time.

Finally, combining the various superconductor forecasts and deflating them gives estimates for the total market, including NMR, research, and components as shown in Table 4. The conservative forecast is usually, but not always, the most accurate. It may not be the most exciting, but then there is nothing inspiring about corporate failure either.

Table 4: Forecasts for Superconductor Market ($ Millions)

            1989    1990    1991    1992    1993
Average     $680  $1,020  $1,920  $2,150  $2,590
Deflated    $537    $612    $979    $989  $1,062
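The "Deflated" row follows directly from the Table 1 overall ratios, since 1989 is one year ahead and 1993 is five years ahead. A sketch reproducing it (relying on Python 3.7+ dict insertion order to pair years with horizons):

```python
# Reproduce Table 4's "Deflated" row: each year's average superconductor
# forecast ($ millions) times the Table 1 overall actual-to-forecast
# ratio for that horizon (1989 = one year ahead ... 1993 = five years).
averages = {1989: 680, 1990: 1020, 1991: 1920, 1992: 2150, 1993: 2590}
ratios = [0.79, 0.60, 0.51, 0.46, 0.41]

deflated = {year: round(avg * r)
            for (year, avg), r in zip(averages.items(), ratios)}
print(deflated)  # {1989: 537, 1990: 612, 1991: 979, 1992: 989, 1993: 1062}
```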


[ 1.] Blundell, G. "Personal Computers in the Eighties." Byte, January 1983, pp. 168-178.
[ 2.] "Bothered and Bewildered: First Time Buyers Face a Tougher Choice than Ever." Time, November 19, 1984, pp. 142-144.
[ 3.] Computer Industry Abstracts. Data Analysis Group, La Mesa, California, First Quarter, 1987.
[ 4.] "Computer Shock Hits the Office." Business Week, August 8, 1983, pp. 46-53.
[ 5.] Eisler, A.C. "Super High-Tech: Superchips and Superconductivity." U.S. Naval Institute Proceedings, October 1988, pp. 154-156.
[ 6.] Evans, J. "Psychological Pitfalls in Forecasting." Futures, August 1982, pp. 258-265.
[ 7.] Hong, R. "Future Technology Trend and the Related Market Forecast." Review of Business, Vol. 10, No. 1, 1988, pp. 19-23.
[ 8.] Landro, L. "Pay TV Industry Facing Problems after Misjudging Market Demand." The Wall Street Journal, June 29, 1983, p. 13.
[ 9.] Mahmoud, E. "Accuracy in Forecasting: A Survey." Journal of Forecasting, Vol. 3, 1984, pp. 139-159.
[10.] Makridakis, S. "The Art and Science of Forecasting." International Journal of Forecasting, Vol. 2, 1986, pp. 15-39.
[11.] Makridakis, S. and R. Winkler. "Averages of Forecasts: Some Empirical Results." Management Science, September 1983, pp. 987-996.
[12.] Marcom, J.J. "Satellites Search for Business as Demand Misses Predictions." The Wall Street Journal, January 11, 1985, p. 21.
[13.] Martino, J.P. Technological Forecasting for Decision Making, 2nd ed. New York: North-Holland, 1983, pp. 53-63.
[14.] Mentzer, J. and J. Cox. "Familiarity, Application and Performance of Sales Forecasting Techniques." Journal of Forecasting, Vol. 3, No. 2, 1984, pp. 27-36.
[15.] Nisbett, R. and L. Ross. Human Inference: Strategies and Shortcomings of Social Judgment. Englewood Cliffs, New Jersey: Prentice Hall, 1980.
[16.] Newbold, P. and C.W.J. Granger. "Experience with Forecasting Univariate Time Series and the Combination of Forecasts." Journal of the Royal Statistical Society, Vol. 137, Part 2, 1974, pp. 131-165.
[17.] "Personal Computer Markets." International Data Corporation, Framingham, Massachusetts, June 1983.
[18.] "Predicasts." Bibliographic Retrieval Service, Latham, New York, March 1987 and January 1989. Updated monthly.
[19.] Urban, G. "SPRINTER Mod III: A Model for the Analysis of New Products." Operations Research, 1970, pp. 805-854.
[20.] Wagenaar, W.A. "Generation of Random Sequences by Human Subjects: A Critical Survey of the Literature." Psychological Bulletin, Vol. 77, 1972, pp. 65-72.
[21.] Wheeler, D.W. and C.J. Shelley. "Toward More Realistic Forecasts for High-Technology Products." Journal of Business & Industrial Marketing, Vol. 2, No. 3, 1987, pp. 55-63.
[22.] "Where Venture Capital Is Investing Now." High Technology, March 1988, pp. 19-25.

Charles J. Shelley is Assistant Professor of Management at Suffolk University in Boston, Massachusetts. David R. Wheeler is Chairman of the Marketing Department and Associate Professor of Marketing at Suffolk University.
COPYRIGHT 1991 St. John's University, College of Business Administration

Publication: Review of Business, March 22, 1991
