Preventing and Fixing Forecasting Errors

Leo Tolstoy wrote, “All happy families are alike; each unhappy family is unhappy in its own way,” in his book Anna Karenina. Good forecast processes share many common traits, many of which I explain in this course and my other budgeting or forecasting courses. Sadly, forecasts go wrong in many unique ways. We’ll look at a few of these in this article. I also give tips for preventing and fixing problems.

Model Errors

When I wrote “model errors” in the heading above, I realized what an abdication of responsibility that is. It’s like a production manager who blames a variance on a machine. Forecast models don’t create themselves or supply their own assumptions. People do. Those people are often financial types like you and me. We have met the enemy, and it is us. The good news is that many of these problems are preventable and easily fixed.

Some of the errors are black and white. For example, a formula calculates results that are clearly unreasonable. The fixes for this are good model testing and control processes. That’s a whole course in itself, but I want to highlight a few items.

Variance analysis is an effective way to maintain model accuracy. So is backtesting: running past inputs through the model to see whether its outcomes match expectations, which are often the actual results observed in those past periods. Backtesting is done before model results are shared with the company for management purposes. It’s a way to use past data as test data to find model errors or to test hypotheses about results.
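Here’s a minimal sketch of what a backtest can look like in Python with pandas. The model wrapper (`model_fn`) and the column names are hypothetical placeholders for whatever your own model and historical data use.

```python
import pandas as pd

def backtest(model_fn, history: pd.DataFrame) -> pd.DataFrame:
    """Run past inputs through the model and compare outcomes to actual results.

    `history` holds one row per past period with the model's input columns
    plus an 'actual' column of realized results.
    """
    results = history.copy()
    # Re-run the model on each past period's inputs.
    results["modeled"] = results.apply(
        lambda row: model_fn(row.drop("actual")), axis=1
    )
    # Compare what the model would have predicted to what actually happened.
    results["error"] = results["modeled"] - results["actual"]
    results["pct_error"] = results["error"] / results["actual"]
    return results

# Example: flag periods where the backtest misses by more than 10% for review.
# review = backtest(my_model, past_data).query("abs(pct_error) > 0.10")
```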

Many simple errors occur when data is manually entered into a model or imported from another system. Anytime data is transferred from one source to another, there should be a check that the data in the receiving system matches the source data.
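As an illustration, a simple reconciliation check might compare record counts and control totals between the source extract and the data loaded into the model. This is only a sketch; the column name and rounding tolerance are assumptions you would adapt to your own systems.

```python
import pandas as pd

def reconcile(source: pd.DataFrame, target: pd.DataFrame, amount_col: str = "amount") -> dict:
    """Basic checks that imported data ties back to the source system."""
    return {
        # Did the same number of records make it across?
        "row_count_match": len(source) == len(target),
        # Do control totals agree (within a small rounding tolerance)?
        "control_total_match": abs(source[amount_col].sum() - target[amount_col].sum()) < 0.01,
    }

# Example: stop the forecast run if any check fails.
# assert all(reconcile(gl_extract, model_input).values()), "Import does not tie to source"
```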

Stress testing and sensitivity analysis are excellent ways to test models. Formula errors may not be apparent with small changes in assumptions but can produce highly unreasonable outcomes with large ones. Once you trust the model, stress and sensitivity testing can show when big changes in the business environment compound into catastrophic or favorable changes in income.
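A sensitivity sweep can be as simple as re-running the model while varying one assumption across a wide range. The sketch below assumes a hypothetical `model_fn` that maps an assumptions dictionary to forecast income; small shifts test reasonableness, and extreme shifts serve as the stress test.

```python
import numpy as np

def sensitivity_sweep(model_fn, base_assumptions: dict, name: str, changes) -> list:
    """Re-run the model while varying one assumption by the given fractional changes."""
    results = []
    for pct in changes:
        assumptions = dict(base_assumptions)
        assumptions[name] = base_assumptions[name] * (1 + pct)
        results.append((pct, model_fn(assumptions)))
    return results

# Small changes may look fine; large (stress-level) changes can expose formula
# errors or show how severe scenarios compound into big swings in income.
# sweep = sensitivity_sweep(my_model, base, "unit_volume", np.linspace(-0.5, 0.5, 11))
```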

An incredibly useful step in a model run is comparing the current output to the output from the last run, such as comparing the current month’s forecast to the prior month’s forecast. The goal is to identify any large changes. Some may be due to model error; others reflect a major change in assumptions from the last run to the current one, whose effect can now be seen. The reviewer assesses whether the outcome changes are logically consistent with the assumption changes. A summary of the reviewer’s observations, and their implications, may also be communicated to report users when the forecast is distributed.
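A sketch of that run-over-run comparison is below. It assumes both forecasts are pandas DataFrames indexed by line item with a “forecast” column; the 10% review threshold is an arbitrary illustration.

```python
import pandas as pd

def compare_runs(prior: pd.DataFrame, current: pd.DataFrame, threshold: float = 0.10) -> pd.DataFrame:
    """Compare this run's forecast to the last run's and flag large swings for review."""
    diff = current[["forecast"]].join(
        prior[["forecast"]], lsuffix="_current", rsuffix="_prior"
    )
    diff["change"] = diff["forecast_current"] - diff["forecast_prior"]
    diff["pct_change"] = diff["change"] / diff["forecast_prior"].abs()
    # Large swings warrant review: model error, or a deliberate assumption change?
    diff["review"] = diff["pct_change"].abs() > threshold
    return diff
```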

Design Limitations

Model errors may be from design limitations. Models should be built with the level of complexity that leads to good information to inform good decisions. Here are some examples of when this doesn’t occur:

  • Model assumptions may not be detailed enough for accurate forecasting. Example: A simple average price or cost across all forecast periods becomes inaccurate as the product mix shifts or the business environment changes.
  • Model calculations may not be complex enough to handle important aspects of how assumptions translate to outcomes. Example: The model doesn’t anticipate large step-cost increases when production volumes exceed a certain level. It instead assumes a constant level of fixed costs and variable per-unit costs.
  • The modeling tool is too simplistic for the size and complexity of the company. Example: An uncertain future leads to a range of potential outcomes. Larger and more complex companies may want to use stochastic models (i.e., probability-based models that incorporate randomness) like Monte Carlo analysis rather than a deterministic model (i.e., single assumptions lead to a single outcome); see the sketch after this list. I actually had a discussion with a regulator once about when financial institutions should make this modeling leap.
  • The model formulas are too simplistic to accurately forecast outcomes. I’ve built models where I knew at least most of the drivers of outcomes. What I struggled with was developing the correct formula to translate those assumptions into reasonable outcomes. The short-term fix is to use simple trend analysis of outcomes for forecasts (adjusted subjectively for changes in assumptions) rather than using an inaccurate driver-based formula. Over the long term, keep testing formulas until a reasonably accurate one can be found.
  • Assumption or input data is unavailable or incorrect. Example: I worked at a company where we identified a key performance metric for which we wanted to set monthly targets for the next year. The problem was that we had never tracked the components of the metric, and it would take time to build data upon which to make a forecast. We built data over the next year to make a forecast for the following year. There have also been times when source data was so inaccurate that it needed “scrubbing” to be useful. As in the previous bullet, you may be able to use trend analysis of outcomes for forecasts until the driver data is clean enough to be used in a driver-based forecast.
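To illustrate the deterministic-versus-stochastic distinction from the list above, here is a minimal Monte Carlo sketch. The income formula and the distributions are made up for illustration and are not calibrated to any real business.

```python
import numpy as np

rng = np.random.default_rng(42)

def income(volume, price, unit_cost, fixed_cost):
    return volume * (price - unit_cost) - fixed_cost

# Deterministic model: single assumptions lead to a single outcome.
point_estimate = income(volume=10_000, price=25.0, unit_cost=15.0, fixed_cost=60_000)

# Monte Carlo: assumptions drawn from distributions produce a range of outcomes.
n = 10_000
sims = income(
    volume=rng.normal(10_000, 1_500, n),   # illustrative distributions only
    price=rng.normal(25.0, 1.0, n),
    unit_cost=rng.normal(15.0, 0.8, n),
    fixed_cost=60_000,
)
p5, p50, p95 = np.percentile(sims, [5, 50, 95])
print(f"Deterministic: {point_estimate:,.0f}")
print(f"Monte Carlo:   5th {p5:,.0f} | median {p50:,.0f} | 95th {p95:,.0f}")
```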

Stress testing, sensitivity testing, backtesting, and variance analysis are ways to identify model errors from design limitations. Simple trend analysis of outcomes, adjusted for future expectations, may be the best way to forecast in the short term until better data can be developed.

A simplistic model or system may need to be replaced with more robust modeling as a company grows. The new model will need to run in parallel with the old model until it’s ready to be relied upon. During that time, you can also compare actual-to-forecast reporting from the new and old models to see which has lower variances to actual results.
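One simple way to score that parallel run is to compute each model’s average miss against actuals over the overlap period, roughly as sketched below. The series names are placeholders, and the actuals and both forecasts are assumed to be pandas Series aligned by period.

```python
import pandas as pd

def parallel_run_accuracy(actuals: pd.Series, old_fcst: pd.Series, new_fcst: pd.Series) -> pd.DataFrame:
    """Mean absolute (and percentage) error of each model over the parallel-run periods."""
    scores = {}
    for name, fcst in {"old_model": old_fcst, "new_model": new_fcst}.items():
        error = fcst - actuals
        scores[name] = {
            "mean_abs_error": error.abs().mean(),
            "mean_abs_pct_error": (error / actuals).abs().mean(),
        }
    return pd.DataFrame(scores)

# The model with consistently lower variances to actuals earns the right to be relied upon.
```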

Another mistake is trying to have one model do everything. Specifically, the model for scenario or business portfolio analysis will be very different from the model for a detailed budget. The analysis models have far fewer inputs, details, and constraints. They are optimized for flexibility to identify areas of risk to be mitigated or areas of opportunity to explore. Adding too much complexity to these models increases the possibility of errors, makes model management much more difficult, and slows the model’s processing time.

Models for detailed forecasts, like those used for a budget, have many more inputs and dimensions. Data is broken into dimensions like time (e.g., months or quarters) and departments. The additional details can provide more accuracy and precision. Many of the inputs are amounts for which managers will be held accountable. 

Why Forecasts Are Wrong – Human Error

To paraphrase the lyrics of a hit from the 1980s, “We’re only human, of flesh and blood we’re made… born to make mistakes.” All of us are prone to thinking errors. These can lead to bad forecast assumptions and bad model design. Let’s look at some common thinking errors and how to mitigate them.

The Importance of Human Error

I teach behavioral finance in an MBA program at a university. My behavioral finance courses are very popular on continuing education sites. Many of us find it fascinating how we humans are prone to thinking errors and decision-making mistakes that lead to bad decisions. A 2010 study of over 1,000 decisions found that the variance in the return on investment of those decisions was attributable to:

  • 8% – The quality of the analysis that was done
  • 39% – Variables of the industry sector or company outside the decision-makers’ control
  • 53% – How the decision was made (i.e., the quality of the collaboration and process) [1]

Much of this course (and – really – most continuing education for CPAs) focuses on analysis, which qualifies it as a “technical” course by the continuing education standards. However, the “non-technical” decision process is much more important to the relative performance of a company.

Overconfidence, Overoptimism, and Overprecision

Overconfidence can be devastating to forecasts and decision-making. Three related terms should be distinguished because each identifies a different form of modeling error:

  • Overconfidence: Estimating the company’s abilities too highly.
  • Overoptimism: Assigning too high a probability to a favorable economic and business environment.
  • Overprecision: Using too small a range for model uncertainties.

Overconfident leaders use discount rates that are too low when calculating the net present value of cash flows. Overconfidence can lead to competition neglect, which means forecasts ignore or underestimate the actions of competitors. When modeling a strategy, we have to anticipate the reactions of the market. This includes shifts in the patterns of both customers and competitors.
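A small worked example shows how sensitive a project’s apparent value is to the discount rate. The cash flows and rates below are purely illustrative.

```python
def npv(rate: float, cash_flows: list[float]) -> float:
    """Net present value of end-of-period cash flows at an annual discount rate."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cash_flows, start=1))

# Illustrative project: 100,000 invested today, 30,000 returned each year for five years.
flows = [30_000] * 5
investment = 100_000

print(npv(0.08, flows) - investment)  # roughly +19,780: looks attractive at an optimistic 8%
print(npv(0.15, flows) - investment)  # roughly +565: barely breaks even at a more demanding 15%
```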

Financial executives are prone to underestimating market volatility. They succumb to an extrapolation bias where past financial conditions are predicted to continue more than is warranted. In other words, conditions change more than they anticipate. This leads to poor model assumptions and design.

An egregious example of this was when some models for pools of mortgages before the Great Recession didn’t allow for a decrease in home prices. It wasn’t just that decreases weren’t entered as an assumption; the models didn’t allow negative numbers. Home prices hadn’t gone down nationwide for decades. It was assumed they never would in the future. Unfortunately, home prices dropped an average of 20%, triggering massive unanticipated losses and starting a domino effect of financial failures.

In his book You’re About to Make a Terrible Mistake[2], Olivier Sibony did a great job summarizing studies of our vulnerability to overprecision when he said, “Simply put, when we’re 90 percent sure, we’re wrong at least half of the time.”

As the complexity of a decision increases, decision-makers are more likely to rely on gut instinct to make decisions. Forecast reporting that simplifies complex scenarios can reduce this tendency. I’m not saying that forecast models need to be simplistic, but the information must be summarized and synthesized into a handful of options for decision-makers. Too many details will overwhelm the decision-makers’ analytical abilities, causing them to resort to instinct.

Interestingly, Daniel Kahneman recommends simple models in low-validity environments (i.e., when there’s a high degree of uncertainty or unpredictability). His research is foundational to the field of behavioral finance and economics. Here are two quotes from his book Thinking, Fast and Slow:

  • “To maximize predictive accuracy, final decisions should be left to formulas, especially in low validity environments.”
  • “Formulas that assign equal weights to all the predictors are often superior because they are not affected by accidents of sampling.”

The natural tendency of financial analysts is to add model complexity to reduce modeling error. This is a form of overconfidence: we often overestimate our understanding of the business environment. The model becomes too complex, and model error increases.

I’ll give the great Daniel Kahneman the final word on planning overconfidence: “Executives overestimate benefits and underestimate costs. They spin scenarios of success while overlooking the potential for mistakes and miscalculations. As a result, they pursue initiatives that are unlikely to come in on budget or on time or deliver the expected results – or even be completed.”

Variance analysis, along with the graphing and variance statistics I explain in my post titled Graphing and Adjusting Forecast Bias, is a way to show that we make mistakes and to identify the bias in our errors (i.e., overestimating or underestimating).
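For context, the core of a bias check is simply separating signed error (the direction of the miss) from absolute error (the size of the miss). This sketch is a generic version of that idea, not the exact calculations from that post.

```python
import pandas as pd

def forecast_bias(actual: pd.Series, forecast: pd.Series) -> dict:
    """Signed error reveals bias; absolute error measures accuracy regardless of direction."""
    error = forecast - actual
    return {
        "mean_error": error.mean(),                # > 0 means we tend to overestimate
        "mean_abs_error": error.abs().mean(),      # overall accuracy
        "mean_pct_error": (error / actual).mean(), # relative bias
    }
```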

I won’t get into the details of improving our estimates in this article. For that, you may want to check out my behavioral finance course or my sensitivity analysis and scenario planning course. I also highly recommend Douglas Hubbard’s book How to Measure Anything. A big part of improving estimates is understanding confidence intervals, which goes beyond the scope of this short course. I will explain one method after defining anchoring in the next section.

Anchoring

Anchoring occurs when we begin the estimation of a number based on some other number, no matter how irrelevant that other number is to a rational estimation. That other number is the “anchor.” We have a very difficult time adjusting far enough from that anchor to make a correct estimate.

It’s easy for a financial analyst to throw out a number early in a conversation that could taint the estimate of a subject matter expert they are consulting for a model assumption. Even experts are subconsciously swayed by anchoring.

A method to break the power of an anchor is to consciously try to find arguments against the anchor number.

Another method is to have the expert think of two reasons why they are confident in their estimate and two reasons why they aren’t. Once again, it helps them consciously challenge an estimate that may be swayed by the unconscious anchoring process.

Another exercise is to begin with extremely high and low numbers for each of the bounds of the range of an estimate. People tend to anchor on a base number and then underestimate the bounds. This exercise uses anchoring to our advantage by offsetting our natural biases. We start with extreme numbers and then adjust them toward the base or median number. By starting with these extremes, we end up with a better estimate of the median to be used in a forecast.

Groupthink

Sometimes, the only thing worse than our personal mental errors is the error compounding that occurs from a group of people. Imagine a meeting where a group is considering a new project. Someone mentions a key assumption for the forecast (e.g., sales volume, sales growth, price per unit, etc.) early in the conversation. The group tends to quickly form a consensus around that number, especially if the number was mentioned by the CEO. It’s a group form of anchoring. Then again, maybe it’s a conscious tactic by some people to end the meeting quickly so they can go back to their offices to do what they consider to be “real work.”

A way to prevent accepting the number too quickly is to draw out the opinions of those who disagree with the number. Since contradicting the CEO too much can be political suicide, the CEO or meeting facilitator may appoint one or more people to form arguments against the number. These are ways to consciously challenge an amount that’s unconsciously accepted without enough examination.

Once again, check out my behavioral finance course to learn more ways that we’re all human and make financial mistakes.



[1] Sibony, Olivier. You’re About to Make a Terrible Mistake: How Biases Distort Decision-Making and What You Can Do to Fight Them. New York: Little, Brown Spark, 2020, p. 194.

[2] Sibony, You’re About to Make a Terrible Mistake, p. 68.

Get ALL the Courses Plus More in One Package

FAST (Finance and Strategy Toolkit) is the membership program that gives you resources for better strategic financial management. You get all the CFO Perspective courses. Get direct access to me as well as tools for improved decisions that can lead to improved performance.


The right tools can save you time, reduce your stress, and improve your effectiveness.
