Increasing Forecasting Confidence with Sensitivity Analysis

Credit: Emily Guillemot

The future is inherently uncertain. Why should our forecasts be any different?

We cannot know, with certainty, how the future will unfold for our companies. We are not fortune-tellers. Forecasts will almost always be wrong because the assumptions we make about the future will be incorrect and will frequently change. However, that truth should not diminish the usefulness of forecasts for operational planning and decision-making.

For the benefit of financial leaders, we should never merely provide a financial forecast without deep insight into the drivers and risks involved and how they may change. We should seek to understand thoroughly the sources of the data, the basis and quality of our assumptions, and how changing those assumptions may ultimately alter the conclusions we draw. Finally, given the inherent uncertainty in forecasting, it is vital to sensitize key variables and determine the extent to which they impact the outputs we have identified.

A sensitivity analysis contemplates the extent to which outcomes are impacted by key variables and business drivers. At face value, it is a straightforward technique: we examine how a range of values for a given input affects specific outputs. For example, if a company wants to gauge how likely it is to hit a targeted EBITDA twelve months into the future, it must understand the drivers and inputs that influence this metric. EBITDA is the product of many changing assumptions including, but not limited to, sales mix, variable margin, SG&A, fixed costs, and other overheads. To run a sensitivity analysis, we would flex each of these variables individually and examine the extent to which it influences the output.
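
A minimal sketch of that one-variable-at-a-time flexing appears below, written in Python rather than a spreadsheet purely for illustration. The simplified EBITDA formula (contribution margin less SG&A and fixed costs), the driver names, and all base-case figures are hypothetical assumptions, not actual client data.

```python
# One-variable-at-a-time sensitivity on a simplified EBITDA model.
# All base-case figures are hypothetical and for illustration only.

BASE = {
    "revenue": 10_000_000,    # annual sales
    "variable_margin": 0.35,  # contribution margin on each sales dollar
    "sga": 1_200_000,         # selling, general & administrative expense
    "fixed_costs": 900_000,   # other fixed overheads
}

def ebitda(a):
    """Simplified EBITDA: contribution margin less SG&A and fixed costs."""
    return a["revenue"] * a["variable_margin"] - a["sga"] - a["fixed_costs"]

def flex(driver, pct_changes):
    """Hold every other driver at base case and flex one driver across a range."""
    results = {}
    for pct in pct_changes:
        scenario = dict(BASE)
        scenario[driver] = BASE[driver] * (1 + pct)
        results[pct] = ebitda(scenario)
    return results

if __name__ == "__main__":
    steps = [-0.10, -0.05, 0.00, 0.05, 0.10]
    for driver in BASE:
        print(driver)
        for pct, value in flex(driver, steps).items():
            print(f"  {pct:+.0%} -> EBITDA {value:,.0f}")
```

Drivers whose flexed ranges move the output the most are the ones that deserve the deepest scrutiny.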

When I advise financial planning professionals, I encourage them to understand two key elements of a business driver's sensitivity: a) the extent of its variability, and b) the extent of its impact. Thus, there are four possible combinations, which I call the "Sensitivity Quadrants":

1. High variability, high impact
2. Low variability, high impact
3. High variability, low impact
4. Low variability, low impact

Within this quadrant analysis, I further encourage a qualitative assessment to support risk management (a brief illustrative sketch combining both dimensions follows the list below):

1. What can be controlled, and to what extent?
2. What cannot be controlled to any extent?
3. What may be difficult to control, and to what extent?
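
To make the two frameworks concrete, here is a small Python sketch that tags each driver with its variability, impact, and a controllability judgment, then derives the quadrant. The two drivers and their labels anticipate the client example that follows; everything else is illustrative.

```python
# Hypothetical classification of two drivers into the Sensitivity Quadrants,
# with a controllability tag for the qualitative assessment.

from dataclasses import dataclass

QUADRANTS = {
    ("high", "high"): 1,  # high variability, high impact
    ("low", "high"): 2,   # low variability, high impact
    ("high", "low"): 3,   # high variability, low impact
    ("low", "low"): 4,    # low variability, low impact
}

@dataclass
class DriverProfile:
    name: str
    variability: str      # "high" or "low"
    impact: str           # "high" or "low"
    controllability: str  # qualitative judgment

    @property
    def quadrant(self) -> int:
        return QUADRANTS[(self.variability, self.impact)]

drivers = [
    DriverProfile("direct labor", "high", "high", "difficult to control"),
    DriverProfile("cell phone expense", "low", "low", "largely controllable"),
]

for d in sorted(drivers, key=lambda d: d.quadrant):
    print(f"Quadrant {d.quadrant}: {d.name} ({d.controllability})")
```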

Here is an example from a forecasting process I led for a former client. Assume, for the sake of this example, we are forecasting company-wide cell phone expense and direct labor costs for a manufacturer of industrial fasteners and bolts.

At the outset, cell phone expense was given a Quadrant 4 classification (low variability, low impact). The monthly rate should be known in advance, perhaps with an advance-purchase discount, and headcount should be provided by HR along with its hiring and severance expectations. Both factors are largely controllable. Cell phone expense is not a direct cost, has little impact on strategic decisions, and is unlikely to change period to period. Therefore, when forecasting this line item, we can likely project it with a high degree of precision, either based upon historical run-rates or as the product of future estimates of headcount and monthly rates. Because of the low variability and low impact, I considered a sensitivity analysis largely unnecessary.

Direct labor costs were given a Quadrant 1 classification (high variability, high impact), with aspirations to move them to Quadrant 2 (low variability, high impact) through more effective planning and cost control. As a percentage of revenue, direct labor equated to a challenging 22% (or 22 cents spent on direct labor per dollar of sales). I considered this high impact compared to peers and across the industry. Labor rates were generally known in advance; however, the volume of hours spent on the manufacturing process was highly variable since the number and size of projects in future months was constantly changing. In contrast to cell phone expense, conducting a sensitivity analysis on direct labor costs was vital. The sensitivity would illustrate at what levels of project volume, and over what period of time, we were likely to see corresponding labor shortages and overtime.
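
As a rough illustration of that labor sensitivity, the sketch below flexes monthly project volume against a fixed pool of straight-time hours to show where shortages and overtime begin. The capacity and hours-per-project figures are invented for illustration and are not the client's actual data.

```python
# Flexing monthly project volume to see where labor demand exceeds capacity.
# Capacity and hours-per-project figures are invented for illustration.

AVAILABLE_HOURS = 8_000   # straight-time direct-labor hours per month
HOURS_PER_PROJECT = 450   # average direct-labor hours per project

def labor_position(projects_per_month):
    """Return required hours, overtime hours, and whether a shortage occurs."""
    required = projects_per_month * HOURS_PER_PROJECT
    overtime = max(0, required - AVAILABLE_HOURS)
    return required, overtime, overtime > 0

for volume in range(14, 22):
    required, overtime, short = labor_position(volume)
    status = "shortage / overtime" if short else "within capacity"
    print(f"{volume} projects: {required:>6,} hrs required, "
          f"{overtime:>5,} OT hrs -> {status}")
```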

Conducting a sensitivity analysis is the easy part; interpreting its findings and acting upon them is the hard part. What the sensitivity analysis showed was that certain known levels of manufacturing activity exacerbated labor inefficiencies. One remedy we identified was to allocate labor hours across projects in similar stages of completion. In short order, the wide range of unexpected hours, which had led to increased overtime spend and a need for additional labor, narrowed to more manageable levels. While we certainly could have reached this operating conclusion in the absence of a sensitivity analysis, the tool helped us put the decision into more insightful context.

The most commonly used Excel tools for sensitivity analyses are: a) data tables (located within the What-If Analysis menu of the Data ribbon), and b) self-referencing IF formulas (i.e., IF input A = 1 and input B = 2 and input C = 3, then the output is X). These tools are easy to use and apply in a multitude of situations. For forecast models that are simple, or where an analyst seeks to analyze a small number of variables, these tools are excellent. However, financial modelers accustomed to working in Microsoft Excel or traditional spreadsheet software may oversimplify sensitivity analyses based upon the tools conveniently provided.
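
For readers who do not work in Excel, the short Python sketch below mimics what a two-variable data table produces: the output evaluated across a grid of two inputs at once. The EBITDA formula and the input ranges are hypothetical.

```python
# A grid of EBITDA outcomes across two inputs at once -- the same idea as an
# Excel two-variable data table. Formula and input ranges are hypothetical.

def ebitda(revenue, variable_margin, sga=1_200_000, fixed_costs=900_000):
    return revenue * variable_margin - sga - fixed_costs

revenues = [9_000_000, 10_000_000, 11_000_000]
margins = [0.30, 0.35, 0.40]

print("revenue \\ margin" + "".join(f"{m:>12.0%}" for m in margins))
for r in revenues:
    print(f"{r:>16,}" + "".join(f"{ebitda(r, m):>12,.0f}" for m in margins))
```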

Because these tools are rudimentary, they quickly become cumbersome when more than two input values vary concurrently, as is the case in most business scenarios. Forecasts are often complex and consider a wide range of simultaneously changing variables. In fact, in my advisory work, financial leaders often deflate these basic sensitivity analyses with one question: "What's the likelihood of that outcome?"

Rather than rely on basic tools for advanced forecasting, companies should consider employing probabilistic sensitivity analyses, which simultaneously contemplate uncertainty across all variables. The extent of the variability must be defined, either through known or assumed distributions. Known distributions may be based upon historical data or behavior; assumed distributions may be based upon human best guesses or computer-generated estimates. Once base parameters are defined, the probabilistic sensitivity analysis will demonstrate not only the range of possible outcomes given changes in key drivers, but also the likelihood of those outcomes being realized, thus addressing one of the core questions decision-makers are likely to ask.
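
A hedged sketch of such a probabilistic (Monte Carlo style) analysis is below: each driver is sampled from an assumed distribution on every trial, and the result is a distribution of EBITDA outcomes and the likelihood of hitting a target rather than a single point estimate. All distributions, parameters, and the target figure are illustrative assumptions.

```python
# Probabilistic (Monte Carlo style) sensitivity: every driver varies at once
# according to an assumed distribution. All parameters are illustrative.

import random

random.seed(42)
N = 10_000
EBITDA_TARGET = 1_400_000  # hypothetical twelve-month target

outcomes = []
for _ in range(N):
    revenue = random.gauss(10_000_000, 600_000)            # from history
    variable_margin = random.triangular(0.30, 0.40, 0.35)  # best-guess range
    sga = random.gauss(1_200_000, 50_000)                  # from history
    fixed_costs = 900_000                                  # treated as known
    outcomes.append(revenue * variable_margin - sga - fixed_costs)

outcomes.sort()
hit_rate = sum(o >= EBITDA_TARGET for o in outcomes) / N
print(f"P(EBITDA >= target): {hit_rate:.0%}")
print(f"P10 / P50 / P90: {outcomes[N // 10]:,.0f} / "
      f"{outcomes[N // 2]:,.0f} / {outcomes[9 * N // 10]:,.0f}")
```

The same approach, applied to a labor model, is how a shortage likelihood such as the one described next could be estimated.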

In the direct labor example, we could determine that we were likely to experience a labor shortage in approximately 25%-30% of manufacturing runs, which ultimately meant delays in project completion and an erosion of gross margin. By knowing the probability, or likelihood, of this issue, we could determine the extent of the impact and how we wished to remedy it.

Despite our human abilities and the capabilities of software, analysts should be careful not to overwhelm their planning models or analyses by sensitizing too many variables. In doing so, they may lose sight of which elements are most important to manage. In the EBITDA example above, because so many drivers were being sensitized, we might inadvertently minimize the impact of one element (such as sales mix on EBITDA) while unconsciously overweighting others.

Monte Carlo and similar tools can help an analyst determine which variables have the greatest impact on outcomes, thus allowing the analyst to pinpoint which variables should be sensitized. Instead of merely reporting what the planning models or analyses say, financial professionals who contemplate sensitivities and understand them thoroughly can make better recommendations. Further, by knowing which variables can be controlled and to what extent, these recommendations will help the organization stay on track toward the outcomes it desires.
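
One simple way to perform that screening, assuming simulation results are already in hand, is to correlate each sampled input with the simulated output and rank drivers by the absolute strength of that relationship. The sketch below does this with illustrative distributions; it is a rough stand-in for a full tornado analysis, not the method used in the client engagement.

```python
# Screening drivers by how strongly each sampled input moves the simulated
# output, using a simple correlation ranking. Distributions are illustrative.

import random

def pearson(xs, ys):
    """Plain Pearson correlation, enough for a rough impact ranking."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

random.seed(0)
N = 5_000
samples = {"revenue": [], "variable_margin": [], "sga": []}
outputs = []
for _ in range(N):
    r = random.gauss(10_000_000, 600_000)
    m = random.triangular(0.30, 0.40, 0.35)
    s = random.gauss(1_200_000, 50_000)
    samples["revenue"].append(r)
    samples["variable_margin"].append(m)
    samples["sga"].append(s)
    outputs.append(r * m - s - 900_000)

ranking = sorted(((abs(pearson(v, outputs)), name) for name, v in samples.items()),
                 reverse=True)
for strength, name in ranking:
    print(f"{name:>16}: |correlation with EBITDA| = {strength:.2f}")
```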

In summary, sensitivity analysis may be performed on virtually any financial forecast and on any line item. When conducting such an analysis, be prudent in selecting the inputs that have the greatest variability and/or impact, and determine which deserve the most attention. Finally, recognize that many analysts myopically view sensitivities merely as a mechanical exercise in displaying how a range of inputs impacts outputs. The greatest value in sensitivity analysis is understanding what the results mean and how to influence performance.

 
