Producing Reliable Forecasts
Further, the concept of accuracy in business forecasting is often misunderstood and hence measured or used inappropriately. In many organizations, acceptable forecast errors are defined by arbitrary fixed limits (e.g., ±5%) applied across the board to a range of businesses, products, and activities without taking into account differing market conditions, differing trade activities, and other issues. This is like a shooting contest between two people with different guns--one with a modern, laser-guided rifle and the other with a musket. How could we expect the same results when the outcome depends not only on performance but also on uncontrollable factors?
A forecast (short- to long-term, periodic or rolling) can support us in building more relevant and flexible plans and in making quick, informed decisions only if it is reliable. A forecast is reliable if--and only if--it is actionable, unbiased, and reports a realistic measure of the risk it carries.
1. A Forecast Must Be Actionable
In business, the reason we forecast is to make better decisions--to decide whether to continue doing the same thing or to do something different. A forecast must be actionable before it can even be considered in a decision or warrant corrective action. To qualify as actionable, it must:
* Be relevant and specific to the decision being made,
* Have a lead-time compatible with the decision's lead-time, and
* Be reasonably accurate for the decision.
Imagine sailing in a yacht. We have a detailed plan to get from A to B. Because of the unpredictable weather--just like the business environment--we can end up deviating from our planned course. Once we notice our likely deviation, we ask the navigator, "Where are we going to end up if we continue with our current course?" Suppose the answer is "We are going to hit the lighthouse." We then choose to change course to avoid the undesirable outcome. In a yacht we can change course by moving the rudder, but in business we can add more promotions, cut costs, delay innovations, or take any combination of actions that can change the future outcome. The navigator's forecast in this case was relevant and specific to the decision--it helped us decide what to do.
Now suppose it takes 10 minutes to stop the yacht or change course. There isn't much we can do if the navigator tells us that we will hit the lighthouse in five minutes. The five minutes is the forecast's lead-time, and the 10 minutes from the moment we decide to change course until we actually start changing course is the decision's lead-time. They need to be compatible. In business, a decision's lead-time varies from company to company and from business to business within a company.
The navigator could have gone the extra mile to impress us by forecasting "We are going to hit the lighthouse six inches away from its left window." Clearly this level of accuracy is unnecessary--all we are interested in is whether we are going to hit the lighthouse. The exact point of collision is of no use to the original decision (should we change course or not?). A forecast rests on many assumptions, some about uncontrollable factors like the weather in our example, so a forecast is almost never exact--and if it is, it is by pure chance. The level of accuracy therefore can't be a major influence on our decision. Yet the forecast still needs to be accurate enough for the decision at hand, such as telling us whether or not we are going to hit the lighthouse.
2. A Forecast Must Be Unbiased
An unbiased forecast has a 50/50 chance of overestimating or underestimating the actual. Bias--systematic error that tips this balance--can make a forecast misleading and can significantly hinder its accuracy in the long term. Bias also masks other causes of variation, such as the state of the economy, competition, and our own trade activities, which are the main drivers of risk in forecasts; as a result, it can prevent us from measuring that risk realistically.
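To make the 50/50 idea concrete, here is a minimal sketch (with made-up error figures, not data from any real process) of the kind of check that exposes a tipped balance: count how often the forecast overshoots and look at the mean error.

```python
# Hedged sketch: a simple bias check on a series of forecast errors.
# An unbiased forecast should over- and under-shoot about equally often,
# and its mean error should hover near zero.
errors = [4.0, -1.5, 3.2, 2.8, -0.5, 3.9, 1.7, 2.1]  # forecast - actual, illustrative

n_over = sum(1 for e in errors if e > 0)
n = len(errors)
mean_error = sum(errors) / n

print(f"over-forecasts: {n_over}/{n}")  # a count far from n/2 suggests bias
print(f"mean error: {mean_error:.2f}")  # persistently positive = over-forecasting
```

Here six of eight errors are positive and the mean error is well above zero, so the series would deserve a closer look for systematic over-forecasting.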
The dangers of bias in forecasting are widely recognized in the business community but don't seem to be tackled strategically. Most often, improving a business forecasting process is seen as:
* Tackling bias by arbitrary adjustments or "second guessing," which, in fact, can end up adding to the problem rather than solving it because a bias pattern can change without notice.
* Changing the process altogether, which, in reality, rarely makes any significant improvement because bias behavior can still infect the new process.
Bias in business forecasts is caused by human behavior and can infect a forecasting process in two different ways:
Through its input--systematically but unconsciously--often caused by:
Framing bias -- having too narrow an approach to judge occurrences of future events or their relevance to the forecast; constantly missing important information.
Impact bias -- the tendency to consistently overestimate or underestimate the impact of future events.
Anchoring bias -- overly relying on selective past events when building future forecasts or using some past events as an anchor to evaluate new information even when there is no correlation.
Confidence bias -- frequently overestimating or underestimating an activity's potential.
From within the process through deliberate acts, most often caused by:
Confirmation bias -- the pressure or tendency to interpret information to support a given belief: "This forecast needs adjusting ... our target is ..."
Motivational bias -- having an incentive to reach a certain outcome:
Punishment: "This forecast is too low...they will cut our budget."
Reward: "Lower forecasts mean lower targets to achieve."
Rational bias -- simplifying inappropriately; inability to combine impacts of different events rationally: "My overall margin of error is ±15%: ±10% from department A and ±5% from department B."
Compensation effect -- repeatedly underforecasting in the last period(s) in a forecasting cycle to compensate for overforecasting in earlier periods or vice versa: "It's better to have bad news once in a year rather than every month."
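The arithmetic behind the rational-bias example above can be checked directly. Assuming hypothetical department figures and independent errors, margins of error combine by root-sum-square of their absolute values, not by adding percentages:

```python
import math

# Hedged sketch of the "rational bias" trap: margins of error don't simply add.
# Hypothetical figures: department A forecasts 1000 +/-10%, department B 500 +/-5%.
a_fcst, a_pct = 1000.0, 0.10
b_fcst, b_pct = 500.0, 0.05

a_err = a_fcst * a_pct  # +/-100 in absolute terms
b_err = b_fcst * b_pct  # +/-25 in absolute terms
total = a_fcst + b_fcst

naive = a_err + b_err  # worst-case straight sum: +/-125 (~8.3% of total)
independent = math.sqrt(a_err**2 + b_err**2)  # root-sum-square if errors are independent

print(f"naive sum: +/-{naive:.0f} ({100 * naive / total:.1f}%)")
print(f"independent: +/-{independent:.0f} ({100 * independent / total:.1f}%)")
```

Even the worst-case straight sum is about ±8.3% of the combined forecast, and under independence the combined margin shrinks to roughly ±6.9%--nowhere near the ±15% the quote asserts, because percentages on different bases can't be added.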
Bias is the main enemy of forecast reliability. In fact, some of the economic problems we face today, especially in the financial sector, stem from biased forecasts that overlooked or masked reality. It is vital for a business to identify the root cause of bias and eradicate it. The idea of detecting and separating causes of variation isn't new. It was introduced by Walter A. Shewhart, the American physicist and statistician who developed statistical process control (SPC) in the 1920s and is known as the father of modern quality control. He published his theory in the 1939 book Statistical Method from the Viewpoint of Quality Control, and it has been an integral part of quality control in engineering and manufacturing ever since.
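As a rough illustration of Shewhart's approach applied to forecast errors (a generic SPC sketch with invented numbers, not any specific commercial tool): estimate control limits from a period believed to be in control, then flag later errors that fall outside them as "special cause" variation worth investigating.

```python
import statistics

# Hedged sketch of a Shewhart-style check on forecast errors.
# Limits are estimated from a baseline period assumed to be in control;
# new errors outside mean +/- 3 sigma signal special-cause variation.
baseline = [1.2, -0.8, 0.5, -1.1, 0.9, -0.3, 0.7, -0.6]  # illustrative past errors

mean = statistics.mean(baseline)
sigma = statistics.pstdev(baseline)
ucl, lcl = mean + 3 * sigma, mean - 3 * sigma  # upper/lower control limits

new_errors = [0.4, 6.5]  # errors from the latest forecasts
signals = [e for e in new_errors if not lcl <= e <= ucl]

print(f"limits: [{lcl:.2f}, {ucl:.2f}]")
print(f"out-of-control errors: {signals}")
```

The routine, in-control errors stay well inside the limits, while the 6.5 error falls outside and would prompt a root-cause investigation rather than a knee-jerk adjustment.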
3. A Forecast Must Report the Risk It Carries
The only certainty about a forecast is that it will be wrong. What we need to know is by how much and in what way it could be wrong. In other words, we need to know, with a given degree of confidence, the range of all tangible outcomes.
In business forecasting, we typically aim to choose the most likely outcome based on our baseline projection and a set of assumptions about a few "key" future events and their impacts on our performance. The events can be internal, such as our future trade activities (promotions, pricing, etc.), or external, such as market conditions (competition, exchange rates, etc.).
Further, each assumption carries a certain level of risk that adds to the overall risk in the forecast. Therefore, we need to know the likely risk associated with each key event as well as the risk around our baseline. This combination forms the overall risk in the forecast, which is particularly important in medium- to long-term forecasts when our key assumptions weaken and increase in number. Such knowledge enables us to build more flexible plans and models with a clearer understanding of our probable future position, which will help us be better prepared to mitigate risks and exploit opportunities as and when they present themselves.
Many forecasters don't bother with the risk, and some consider the risk only in the baseline or simply guess the overall risk in the forecast. In such a dynamic environment as business, it's vital that we account for future events and the risk they impose and not see the future in light of the past only. Measuring the risk in forecasts is also too complex to be reliably guessed in one step: We need a structured approach to help us continually assess our assumptions about future events and be equipped to combine their risks rationally.
Again, the idea of estimating a parameter by specifying a range to capture its true value with a given degree of confidence isn't new. In a wider context, this is known as interval estimation, which is attributed to Jerzy Neyman, the prominent Polish-American statistician who was one of the principal architects of modern statistics. But interval estimation techniques use historical data to determine the bandwidths for measuring the risk, and what we need in business is a method that can also incorporate the risks associated with future factors that impact our performance.
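A minimal sketch of such a history-based interval, using only past forecast errors and a normal approximation (all figures illustrative; the future-event risks the text calls for are deliberately left out, which is exactly the technique's limitation):

```python
import statistics
from statistics import NormalDist

# Hedged sketch: a classical interval around a point forecast, built solely
# from the spread of historical forecast errors under a normal approximation.
past_errors = [3.0, -2.0, 1.5, -1.0, 2.5, -3.5, 0.5, -1.0]  # illustrative
point_forecast = 100.0
confidence = 0.90

sigma = statistics.pstdev(past_errors)
z = NormalDist().inv_cdf(0.5 + confidence / 2)  # ~1.645 for 90% confidence
lower, upper = point_forecast - z * sigma, point_forecast + z * sigma

print(f"{confidence:.0%} range: [{lower:.1f}, {upper:.1f}]")
```

Because the range is driven entirely by past variation, it says nothing about an upcoming promotion or a competitor launch--hence the article's argument for a method that also folds in the risks of assumed future events.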
The value of rolling forecasts and their ability to provide better insight into the future has been demonstrated in many publications. Yet it's worth noting that the longer the forecast lead-time, the less accurate and reliable the forecast will be. A 2007 Hackett Group survey of 70 corporations in North America and Europe, "Global Study: Forecasting--Ready for Turbulent Times?", revealed that more than 80% of the surveyed businesses couldn't produce sufficiently accurate forecasts beyond a quarter ahead. Therefore, it would be more productive if a rolling forecast were expressed as a range of tangible outcomes to give some guidance as to its accuracy. This is especially true if the range considers all the relevant key internal and external events and combines their impacts rationally.
Improving Forecast Reliability
Producing actionable forecasts depends on how well we understand our business, which is improved through learning and experience. But to spot bias in forecasts and measure the risk attached to them rationally, we also need supporting tools so we won't get bogged down with complex mathematical procedures.
Let's use Unilever as an example of improving forecast reliability. Unilever is one of the pioneers in seeking a methodical approach to eliminating bias and measuring risk in forecasts. An internal review of the company's forecasting policies and practices in 2004 revealed a number of concerns, including "the inability to:
* state objectively whether forecasts are biased ... whether the situation has improved or deteriorated ...
* spot quickly that a forecast has become biased--it is of little value to establish after the event that a forecast is biased ...
* report realistic measures of the risks attached to forecasts ...
* aggregate risks accurately ..."
Over the past five years in my work with Unilever, we have developed a set of methodologies and software solutions to address these concerns. These practices have helped put in place and maintain more transparent, consistent, and reliable forecasting processes. They have been particularly successful in Unilever's operations in the U.S. and the U.K., where they have been in use since 2007 and have been deployed to assess overall performance and to support individual forecasting processes. In particular, there are two tools that are designed to improve periodic or rolling forecast reliability: Forecast Scorecard and Range Forecast.
Forecast Scorecard is used to measure and monitor forecasting performance on a continual basis and to identify likely cases of bias in real time. It is also used to set and check the plausibility of targets and to monitor volatility and possible changes in market conditions. Forecast Scorecard detects a wide range of bias patterns observed in businesses worldwide, measures overall performance and progress, and reports the margin of error for the next forecast. It also facilitates a fast-paced but effective learning environment by reducing irrelevant variance analysis and helping people focus on real issues. See Figure 1 for an example. The tool takes forecast errors for multiple processes as input and signals the likely "bias infected" ones according to the strength of the evidence against them--yellow if suspect, red if likely, and black if very likely. It does this in real time, as soon as a forecast error is entered. It also assesses and ranks each process for management reporting. Other features include charting the results of a process to help with root-cause analysis and reporting the margin of error for the next forecast.
[FIGURE 1 OMITTED]
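The scorecard's actual logic is proprietary, but one widely used bias statistic that supports this kind of escalating alert is the tracking signal: cumulative error divided by mean absolute deviation. The thresholds and color labels below are illustrative stand-ins, not Unilever's Forecast Scorecard rules:

```python
# Hedged sketch: grading a tracking signal into escalating bias alerts.
# Thresholds here are illustrative only.
def tracking_signal(errors):
    """Cumulative error divided by mean absolute deviation (MAD)."""
    cum_error = sum(errors)
    mad = sum(abs(e) for e in errors) / len(errors)
    return cum_error / mad if mad else 0.0

def grade(ts, suspect=3.0, likely=4.0, very_likely=5.0):
    """Map the signal's magnitude to an alert level."""
    ts = abs(ts)
    if ts >= very_likely:
        return "black (very likely biased)"
    if ts >= likely:
        return "red (likely biased)"
    if ts >= suspect:
        return "yellow (suspect)"
    return "in control"

errors = [2.0, 1.5, 2.5, 1.0, 2.0, 1.5, 2.5, 2.0]  # persistently positive errors
ts = tracking_signal(errors)
print(f"tracking signal: {ts:.1f} -> {grade(ts)}")
```

Because every error in this series is positive, the cumulative error equals the total absolute error and the signal maxes out, so the process would be flagged at the strongest alert level.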
Range Forecast is a driver-based method used to measure the overall risk attached to a forecast and its individual components (subforecasts). It is also used to produce rolling range forecasts and to aggregate forecasts and their respective risks at many levels, such as regions, countries, business units, brands, trade activities, and market conditions. It can be employed at any stage of a product's life cycle (with or without historical data), and it apportions the overall risk among the risk components to help users focus on the key drivers of risks and opportunities. See Figure 2 for an example. The input to Range Forecast comprises information on assumptions/future events and historical data (actuals) if available. The tool outputs a rolling forecast of the baseline up to 12 periods ahead (month or quarter) together with a range (upper and lower limits) for each projected baseline to capture the corresponding actual. It does this with a user-specified level of confidence and in both graphical and tabulated forms. The tool also provides charts and tables to profile the overall risk and the risk attached to each assumption/event.
[FIGURE 2 OMITTED]
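The driver-based idea can be sketched as follows (illustrative event names and numbers, not the actual Range Forecast tool): each assumed future event contributes an expected impact and an uncertainty, and under an independence assumption the uncertainties combine with the baseline's by root-sum-square, letting us apportion the overall risk among its drivers.

```python
import math

# Hedged sketch of driver-based range forecasting. The baseline carries its
# own uncertainty; each assumed event adds an expected impact and a risk.
baseline, baseline_sigma = 1000.0, 30.0
events = {  # hypothetical drivers: (expected impact, uncertainty as std dev)
    "promotion":  (+80.0, 25.0),
    "competitor": (-40.0, 20.0),
}

point = baseline + sum(impact for impact, _ in events.values())
sigma = math.sqrt(baseline_sigma**2 + sum(s**2 for _, s in events.values()))
z = 1.96  # ~95% coverage under a normal approximation

print(f"forecast: {point:.0f}, 95% range: [{point - z * sigma:.0f}, {point + z * sigma:.0f}]")
# Apportion the variance to show which driver dominates the risk.
for name, (_, s) in events.items():
    print(f"  {name}: {100 * s**2 / sigma**2:.0f}% of variance")
```

The variance breakdown is what lets a planner focus on the dominant risk driver: here the baseline and the promotion together account for most of the spread, so tightening the promotion assumption narrows the range the most among the events.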
As you can see, to be able to operate and survive in this economic downturn and be in a stronger position in the aftermath, we need to base our plans and decisions on reliable forecasts--forecasts that can be trusted and that reveal a realistic measure of the risk they carry.
Bijan Tabatabai, Ph.D., is director of BBS Ltd. (www.bbsconsultants.com), working with business practitioners worldwide to develop tools and methodologies to solve common chronic management problems. Previously, Bijan was a university professor in the U.K., where he led, developed, and delivered mathematically based courses and directed Ph.D. research. You can reach him at firstname.lastname@example.org.
Date: June 1, 2009