Advanced FP&A practices for a volatile macroeconomic and business environment

It isn’t easy to forecast the next two to three years when macroeconomic conditions and business dynamics are profoundly complex. Yet as technology races ahead—a major (but by no means only) driver of disruption—many companies continue to build models with the same flawed approaches they’ve been using for years. Even when their forecasts do approach the bull’s-eye, critical data points and assumptions are unclear, inconsistent, or missing entirely.

The mark of a great model is not that it serves as an oracle. Instead, a best-in-class forecast presents actionable insights by being transparent, consistent, and ever-improving. Projections need to be flexible and quickly adaptable. In this article, we identify six practical steps that financial planning and analysis (FP&A) teams can take to deliver more accurate forecasts—particularly under uncertainty.

Use a clear P value for all major assumptions and for the model overall

As we’ve noted for years, probability-weighted, scenario-based forecasts produce the most reliable results. In many FP&A models, however, probability levels are applied haphazardly or not at all, which can lead to very poor decisions. Consider a choice to allocate the same amount of capital for a 90 percent chance of $100 million net present value (NPV) over the next three years versus a 50 percent chance over the same period for $150 million NPV (that is, an expected NPV of $90 million versus $75 million). Framed that way, the choice seems obvious—or, at least, lends itself to more informed debate. But too often, probabilities aren’t stated. At one recent North American conference, about 15 percent of senior executives shared that they had never seen a probability value in a cash flow forecast.
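
For clarity, the arithmetic behind that comparison can be reduced to a few lines. A minimal sketch in Python, using only the illustrative figures from the example above:

```python
# Probability-weighted comparison of two capital-allocation options.
# Figures are the illustrative ones from the example above, in $ millions.

def expected_npv(probability: float, npv_if_achieved: float) -> float:
    """Expected NPV of a single-outcome scenario."""
    return probability * npv_if_achieved

option_a = expected_npv(0.90, 100)  # 90% chance of $100M NPV -> $90M expected
option_b = expected_npv(0.50, 150)  # 50% chance of $150M NPV -> $75M expected

print(f"Option A expected NPV: ${option_a:.0f}M")  # $90M
print(f"Option B expected NPV: ${option_b:.0f}M")  # $75M
```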

If an FP&A department does not use explicitly stated and agreed-upon P values in its forecasting, the results will likely vary widely. For example, in building a consolidated cash flow model, a colleague from one business unit may have used a P10 (that is, a plan that has a 10 percent chance of occurring) while a counterpart from a different business unit used a P90 (a plan with a 90 percent chance of occurring), with neither stating the P value they used. These inconsistencies, multiplied across businesses, can lead to highly flawed results.

Being clear about P values, on the other hand, has second-order benefits. Identifying, for example, a 60 percent probability is a helpful spur to understand what it would take to achieve a much higher probability (such as by diversifying suppliers, if the uncertainty hinges on supply constraints). Moreover, when one business unit head presents results by using P values and confidence levels, we tend to see that other business unit heads feel compelled to do the same. This allows both the head of FP&A and the CFO to develop a better sense of consistency across the businesses. Effective finance leaders, in fact, recognize that a discussion about P-level assumptions is in itself a process that leads to better decision making. That feeds into another, related best practice: there should always be one—and only one—owner of the model, ideally the head of FP&A. When a model lacks a single, identifiable owner who specifically vets P values, those values (even when they are indicated in the model) may not be as robust as they appear.
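
One lightweight way to enforce that discipline is to make the P value a required field on every business unit submission, so the model owner can reject anything unstated or misaligned before consolidation. A hypothetical sketch (the schema, field names, and P50 requirement are assumptions for illustration, not a prescribed standard):

```python
from dataclasses import dataclass

@dataclass
class UnitForecast:
    business_unit: str
    cash_flow: float       # forecast cash flow, $M
    p_value: float | None  # stated probability of achievement, e.g. 0.5 for P50

def consolidate(forecasts: list[UnitForecast], required_p: float = 0.5) -> float:
    """Consolidate unit forecasts, rejecting any without an explicit, aligned P value."""
    total = 0.0
    for f in forecasts:
        if f.p_value is None:
            raise ValueError(f"{f.business_unit}: no P value stated; cannot consolidate")
        if abs(f.p_value - required_p) > 1e-9:
            raise ValueError(
                f"{f.business_unit}: submitted a P{f.p_value * 100:.0f} plan; "
                f"consolidation requires P{required_p * 100:.0f}"
            )
        total += f.cash_flow
    return total

# A P10 plan and a P90 plan cannot be meaningfully added together:
try:
    consolidate([
        UnitForecast("Unit A", 120.0, p_value=0.10),
        UnitForecast("Unit B", 80.0, p_value=0.90),
    ])
except ValueError as err:
    print(err)  # Unit A: submitted a P10 plan; consolidation requires P50
```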

Lay out the true momentum case—and then show management initiatives on top of that

A great forecast starts with a do-nothing momentum case, which presents how the company’s businesses are likely to perform based on current trends if managers were to take no additional initiatives. It may sound obvious, but in practice very few businesses model a base case this way. Instead, even when a base case is shown to be declining—let alone when it is presented as plateauing or increasing—it usually builds in planned management initiatives. Strip those out. A momentum case is your bedrock; if it isn’t shown starkly, every subsequent investment decision can be distorted by compounding errors such as double counting, overly optimistic (or pessimistic) assumptions, and numbers based on nonrecurring gains or losses.

Once the true momentum case is laid out, the model should then identify management initiatives and present their potential outcomes in layers (exhibit).

Exhibit: Effective FP&A models save time and achieve stretch goals by clearly identifying potential outcomes.

Forecasting this way allows for greater accountability—and credit—based on how the initiatives actually fare. It also allows companies to set high but realistic stretch goals. For example, a plan could show that if a company were to introduce a high-potential new product, get to market within 12 months, and achieve a market share of 15 percent within two years, operating profit would increase by seven percentage points. If it were to beat those targets by specified amounts, the plan should show the percentage-point difference that would flow through the P&L. That level of clarity allows not just the CFO but also the CEO and other senior leaders to focus specifically on what’s needed to achieve transformative results.
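
To illustrate the layering itself, here is a minimal sketch that starts from a do-nothing momentum case and adds probability-weighted initiative layers on top. The initiative names, probabilities, and uplifts are invented for illustration:

```python
# Layering probability-weighted management initiatives on a do-nothing momentum case.
# All figures are invented for illustration, in $ millions of operating profit.

momentum_operating_profit = 200.0  # do-nothing case: current trends, no new initiatives

initiatives = [
    # (name, probability of success, operating-profit uplift if achieved)
    ("Launch Product X within 12 months", 0.60, 14.0),
    ("Reach 15% market share within two years", 0.50, 8.0),
]

expected_profit = momentum_operating_profit
print(f"Momentum case: ${momentum_operating_profit:.1f}M")
for name, prob, uplift in initiatives:
    layer = prob * uplift
    expected_profit += layer
    print(f"+ {name}: ${layer:.1f}M expected (P{prob * 100:.0f} x ${uplift}M)")
print(f"Expected total: ${expected_profit:.1f}M")
```

Keeping each initiative as a separate, probability-tagged layer is what makes the later accountability possible: when results come in, each layer can be trued up against what was promised.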

Spell out the ‘bear case’

Mike Tyson, the former champion heavyweight boxer, once observed that “everyone has a plan until they get punched in the mouth.” Few things, after all, go perfectly as planned, and there’s at least a chance that a forecast (even one that makes sound assumptions, disaggregates key inputs, clearly layers management initiatives, and assigns appropriate probability levels) will fall well short of the best case—or even a good one. After all, if a forecast has a 70 percent chance of happening, that means it has a 30 percent chance of not happening.

Remarkably, we find that many companies are more likely to plan for a true left-tail event (that is, an outcome that, in a normal distribution, would occur less than 5 percent of the time) than for a not-quite-as-bad bear case, which would be much more probable. A one-in-three or one-in-four chance is not remarkable, and management should not find itself blindsided if it occurs. A great forecast explicitly lays out a bear case, showing what the likely drivers would be, what the consequences would mean for cash flow, and how future investments and decisions about capital structure could be affected. Understanding a bear case helps planners not only manage a poor outcome should it occur but also adjust probability levels as key indicators change in advance of that outcome. Too often, postmortems reveal that companies could have taken mitigating actions much sooner. They failed to do so because they never rigorously thought a bear case through.
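
A bear case becomes much harder to ignore when it sits explicitly in the scenario table alongside the base and bull cases. A minimal sketch, with invented probabilities, cash flows, and drivers:

```python
# Explicit scenario table that includes a bear case, not just a left-tail disaster case.
# All figures are invented for illustration.
scenarios = [
    # (name, probability, three-year cash flow $M, key driver to monitor)
    ("Bull", 0.20, 260.0, "early competitor exit"),
    ("Base", 0.50, 180.0, "demand in line with GDP"),
    ("Bear", 0.25, 90.0, "key-input supply constraint"),
    ("Left tail", 0.05, -40.0, "loss of largest customer"),
]

assert abs(sum(p for _, p, _, _ in scenarios) - 1.0) < 1e-9  # probabilities must sum to 1

expected_cash_flow = sum(p * cf for _, p, cf, _ in scenarios)
downside_prob = sum(p for name, p, _, _ in scenarios if name in ("Bear", "Left tail"))

print(f"Expected cash flow: ${expected_cash_flow:.0f}M")
print(f"Chance of a bear-or-worse outcome: {downside_prob:.0%}")  # 30%: hardly remarkable
```

Writing down the key driver for each scenario is what lets planners adjust probabilities as leading indicators move, rather than discovering the bear case in a postmortem.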

Make macroeconomic assumptions clear and consistent

We’ve all seen them: complex, multi-input models that seem at first glance to be comprehensive. But dig a little deeper, and you can find that some of the most important assumptions are inconsistent, unsourced, or missing entirely. This is particularly true for macroeconomic assumptions.

Consider, for example, investment decisions in a country that sees wide swings in GDP. A robust model in that case should clearly highlight the country’s GDP assumptions. In fact, even when making forecasts for mature industries that operate in historically stable economies, it’s worth pressure testing whether growth (or cost) assumptions have come unmoored from macroeconomic realities. Yet in too many forecasts, GDP growth either is not an input at all, is entered without a source, or differs across business units in the same country. For example, a forecast for a consumer-packaged-goods company might assume a two-percentage-point increase in GDP growth when modeling sales of “Product A” in one country, and a five-percentage-point increase in GDP growth in the same country for sales of “Product B.” That means at least one of these forecasts is flawed; possibly, both are. While not all industries are highly dependent on GDP (high-tech and pharmaceutical companies, for example, are much less affected than companies in, say, the banking or construction sectors), a good rule of thumb is to at least question how changes to GDP or other macroeconomic factors have historically affected the business—and include in your projections the changes that have been most material.
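
A simple automated cross-check can catch the 'Product A versus Product B' inconsistency described above. A hypothetical sketch (the data layout is an assumption for illustration):

```python
# Flag business units whose GDP-growth assumptions diverge for the same country,
# and any assumption that lacks a stated source.
from collections import defaultdict

# (business unit, country, assumed GDP growth in percentage points, source or None)
assumptions = [
    ("Product A", "Country X", 2.0, "IMF World Economic Outlook"),
    ("Product B", "Country X", 5.0, None),  # unsourced, and inconsistent
]

by_country = defaultdict(list)
for unit, country, gdp, source in assumptions:
    by_country[country].append((unit, gdp, source))

for country, entries in by_country.items():
    rates = {gdp for _, gdp, _ in entries}
    if len(rates) > 1:
        print(f"{country}: inconsistent GDP assumptions {sorted(rates)} -> at least one is flawed")
    for unit, gdp, source in entries:
        if source is None:
            print(f"{country}/{unit}: GDP assumption of {gdp}pp has no stated source")
```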

Moreover, all macroeconomic assumptions used should be sourced and kept consistent within their specific country. Better practice is to adopt a median from a few credible, independent sources, making the methodology transparent or, better yet, allowing the model to weight those sources by historical accuracy. Best practice is to look at the patterns of assumptions and compare them with actual results so leaders can learn and ensure that their models learn as well.
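
One way to implement that progression from better to best practice is to weight each source by its historical accuracy, for instance by the inverse of its past forecast error. A minimal sketch with placeholder sources and figures:

```python
# Blend independent macro forecasts, weighting each source by historical accuracy
# (here, the inverse of its past mean absolute error). All figures are placeholders.
sources = [
    # (source name, current GDP-growth forecast in pp, historical mean abs. error in pp)
    ("Source 1", 2.1, 0.40),
    ("Source 2", 2.6, 0.80),
    ("Source 3", 1.9, 0.50),
]

weights = [1.0 / mae for _, _, mae in sources]  # more accurate -> heavier weight
total = sum(weights)
consensus = sum(w * forecast for w, (_, forecast, _) in zip(weights, sources)) / total

for (name, forecast, mae), w in zip(sources, weights):
    print(f"{name}: forecast {forecast}pp, historical error {mae}pp, weight {w / total:.0%}")
print(f"Accuracy-weighted consensus: {consensus:.2f}pp")
```

Feeding back-test results into these weights each cycle is what lets the model, and not just its owners, learn over time.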

Disaggregate inflation rates—‘average’ inflation can be wildly inapplicable

In our experience, companies typically include industry-specific metrics that matter most for their businesses, such as revenue per available room if they manage a chain of hotels, the size of a country’s carbonated-beverage market if they manufacture soft drinks, and the costs of key resources. Sophisticated companies also include publicly available competitor data.

But as well as they know their own businesses, they often make a common, critical mistake when modeling one driver—inflation. In too many forecasts, the inflation rate is presented as a single number, such as the consumer price index (CPI), across every business in a single country (and sometimes worldwide). But why? In the United States, the CPI is a basket of approximately 94,000 goods and services. Large companies don’t buy baskets. Instead, they depend heavily on a few select components whose individual inflation rates can vary substantially. While businesses are at least indirectly affected by hundreds or even thousands of those components, the Pareto principle invariably applies: a few goods or services have tremendously outsize effects. These should be modeled based on disaggregated inflation rates.
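
The Pareto point lends itself to a short worked example: weight the handful of components that dominate the company's actual cost base rather than applying headline CPI across the board. The component names, weights, and rates below are illustrative:

```python
# A company-specific inflation rate built from the few components that dominate
# its cost base, instead of headline CPI. All figures are illustrative.
headline_cpi = 0.031  # 3.1%: the single number too many models apply everywhere

cost_basket = [
    # (component, share of the company's cost base, component-level inflation rate)
    ("Aluminum", 0.35, 0.080),
    ("Freight", 0.20, 0.055),
    ("Labor", 0.30, 0.042),
    ("Everything else", 0.15, headline_cpi),
]
assert abs(sum(share for _, share, _ in cost_basket) - 1.0) < 1e-9  # shares sum to 1

company_inflation = sum(share * rate for _, share, rate in cost_basket)
print(f"Headline CPI:           {headline_cpi:.1%}")
print(f"Company cost inflation: {company_inflation:.1%}")  # materially different
```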

As is the case with macroeconomic assumptions, inflation and industry-specific sources should always be clearly identified and can be weighted and automatically adjusted. And though it may seem unnecessary to add, disaggregated inflation rates should match not only across business units in the same country but also on the revenue and cost sides. Sometimes, we see models in which inflation rates for a single product in a single country fail to align—typically because one part of the model defaults to a broader average while another incorporates a more specific number. That means there’s a mistake. Fortunately, it’s easily fixed, particularly when sources are clearly indicated.

Relentlessly back test models and reduce variances

Building a model is a process; the point is not just to produce accurate numbers but to generate a constant, evolving series of outputs that become even more accurate, more rapidly, as the forecasted period plays out. It should be obvious that a sophisticated model cannot be one that you “set and forget.” On the contrary, setting up the model is just the first step. Projections can be compared with actual results every week—and, sometimes, more often than that.

Sophisticated back testing delves beneath aggregate results. It may be, for example, that overestimations at some stores canceled out underestimations at others, or that cost of goods sold was very close to the forecasted amount but components of, for instance, SG&A varied wildly—and that the divergence was netted out by interest expenses that were higher (or lower) than predicted. Companies can’t manage a variance if they don’t measure results with granularity.

As teams examine variances, they can often identify clear patterns. In one North American–based consumer-packaged-goods company, for example, back testing revealed that forecast monthly sales were overestimated by about five percentage points, month after month—a variance that proved easy to address but could have been corrected much earlier if the team had been conducting back testing every week. Moreover, back testing enables companies to get smarter about how much weight to assign a given component used to make macroeconomic assumptions (such as forecasts of GDP or inflation) and continually fine-tune the model. Revisiting assumptions has always been best practice, even if the process was tedious and too infrequently undertaken. Today, back testing can be automated significantly, and patterns can be identified more precisely by using AI, including generative AI.
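
A minimal back-testing sketch in the spirit of the consumer-packaged-goods example: compare monthly forecasts with actuals at a granular level and flag an error that keeps the same sign month after month. The series below is invented to echo the roughly five-point overestimate described above:

```python
# Detect a persistent forecast bias by back testing monthly forecasts against actuals.
# The series is invented to echo the ~5-point monthly overestimate described above.
forecast = [105.0, 106.0, 104.5, 107.0, 105.5, 106.5]  # $M, monthly sales forecast
actual   = [100.0, 101.2,  99.4, 101.8, 100.3, 101.1]  # $M, realized monthly sales

signed_errors = [(f - a) / a for f, a in zip(forecast, actual)]
mean_bias = sum(signed_errors) / len(signed_errors)

# A consistent sign month after month signals a systematic bias, not noise.
if all(e > 0 for e in signed_errors) or all(e < 0 for e in signed_errors):
    print(f"Systematic bias: forecasts off by {mean_bias:+.1%} on average, same sign every month")
else:
    print(f"No one-sided pattern; mean error {mean_bias:+.1%}")
```

Run weekly rather than quarterly, a check like this would have surfaced the pattern months sooner; the same signed-error logic can feed the source weights used for macroeconomic assumptions.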


It’s hard to make an accurate forecast in the easiest of times; it’s almost impossible when conditions are uncertain. But even—indeed, especially—in the face of tremendous complexity, FP&A teams can take specific actions to achieve clearer insights. Six of the most consequential practices are remarkably straightforward. CFOs and their FP&A teams can begin adopting them today.
