Benefits Realization and the Illusion of Exactness

7 things I learned from listening to 12 PPM executives discuss benefit estimates and benefit measurement… especially benefit measurement.

I recently had the privilege of being a fly on the wall at a professional roundtable meeting of a dozen PMO, program management, and transformation executives from large American companies. For almost three hours, the lively discussion circled around the merits, challenges, and best practices of realizing benefits throughout a project’s lifecycle: from intake and business case construction, through project execution, and out to releasing deliverables into the wild to generate value. The conversation covered all aspects of the topic; however, participants seemed most passionate and concerned about correctly measuring benefits, or the lack thereof. Here are some key takeaways:

1. Why measure? It’s perhaps best to start with the rationale for measuring actual realized benefits against the rosy picture painted by the early estimates and promises used to secure funding for a particular project. The textbook answer is continuous improvement, capturing historical lessons, and proving in hindsight whether projects were justified.

The problem with those motivations is that we’re often talking about a payoff horizon of two to three years, given the lifecycle of project deliverables (more on the time aspect in a moment). Very often, the stakeholders who could be held accountable for a project’s performance have scattered to other endeavors after that long a period, making the exercise potentially moot. The organization has ostensibly moved on.

More interesting answers focused on the fact that institutionalizing measurement instills a culture of better performance from the get-go and makes estimates delivered during the business case phase more realistic by default.

2. To kill or not to kill? Another oft-quoted motivator for measurement is killing projects that don’t deliver as soon as possible, so that resources can be redirected elsewhere. While theoretically viable, listening to the participants made clear that killing projects is easier said than done. There’s a significant emotional aspect to consider, and far fewer projects end up being killed than one would think. But killing projects that don’t measure up might not be the ultimate goal; increasing the quality of planning and setting programs up for success might be a more laudable one.

3. How much time? What is the optimal amount of time to accrue benefit measurements before you can reach sound conclusions? If a project’s ROI horizon is three years, should you wait to collect three years’ worth of data? Most participants thought that was impractical and of little value.

One key consideration was that as time goes on, it becomes harder to tease out a single project’s specific contribution to the benefits measured. For example, should all the sales revenue accrued by a new product over two to three years be attributed solely to the initial development project? Probably not. Furthermore, in this fast-paced world of constant market and business changes, how likely are benefits articulated three years ago to still be relevant today? The longest measurement window proposed in the room was two years, but numbers as low as three months were shared as well (although that required a novel approach to metrics; see the next bullet).

4. Leading or lagging? The classic approach to measuring the actual value a project delivers is one of “lagging indicators,” i.e., indicators that only provide answers after the fact. Using the previous example, one only knows whether a new product delivered all the revenue it promised after measuring three years’ worth of sales numbers.

A better approach would be to instruct teams to stipulate “leading indicators” of success to measure their projects’ benefits so that success (or potential failure) is detected early on. E.g., if the organization has good data on sales trends and product revenue behavior over time, one could look at revenue trends over a period of only one or two quarters to estimate whether realization is on track. As mentioned earlier, the shorter time period reduces the risk of “contamination” by extraneous factors. It’s also better for team morale.
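The leading-indicator idea above can be sketched in a few lines of code. This is a hypothetical illustration, not a method discussed at the roundtable: the function name, figures, and the assumption that benefits accrue roughly linearly over the ROI horizon are all mine.

```python
# Hypothetical leading-indicator check: compare early quarterly actuals
# against the run rate implied by the business-case estimate.
# Assumes (for illustration only) that benefits accrue roughly linearly.
def on_track(observed_quarters, total_benefit_estimate, horizon_quarters):
    """Return True if early actuals are pacing toward the full estimate.

    observed_quarters: actual revenue figures for the first N quarters.
    total_benefit_estimate: benefit promised over the full ROI horizon.
    horizon_quarters: length of the ROI horizon, in quarters.
    """
    expected_per_quarter = total_benefit_estimate / horizon_quarters
    actual_per_quarter = sum(observed_quarters) / len(observed_quarters)
    return actual_per_quarter >= expected_per_quarter

# Two quarters of actuals against a $3.5M, three-year (12-quarter) estimate:
print(on_track([310_000, 295_000], 3_500_000, 12))  # True: pacing ahead
```

A real version would use the organization’s historical revenue curves (sales rarely ramp linearly), but even this crude check surfaces a failing project after two quarters instead of three years.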

5. What and how? If success metrics aren’t defined and scrutinized for viability early on, there’s little likelihood that estimates will be realistic or measurements effective. This means that what is being measured as a proxy for benefits, and how it’s being measured, need to be part of the business case approval process. If accrued product sales is the metric, one should ask: how is it being measured? (A potentially good answer: from the pre-existing process of delivering ERP sales figure reports prepared monthly for the CFO.) Is there a baseline revenue number? (If not, there’s no way that improved sales can be articulated meaningfully.)

6. Execution too? Organizations typically focus on “OTOBOS” during the pre-delivery, project execution phase: tracking for on-time, on-budget, and on-scope delivery. However, the ultimate goal of a project is to deliver benefits, so it stands to reason that it’s just as important to keep a constant finger on the pulse of whether the project is still on track to deliver its promised benefits, and whether those benefits remain accurate and relevant. Granted, that’s fairly hard to do if a project is only slated to deliver value after it’s completed; thus, measurement during execution is often subjective. Objectivity is sometimes attainable, however: revenue benefits, for example, are negatively impacted by schedule delays (or positively by early delivery) in quantifiable and somewhat precise ways.
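The quantifiable link between schedule slips and revenue benefits can be made concrete with a back-of-the-envelope calculation. The numbers and the constant-run-rate assumption below are illustrative, not from the roundtable:

```python
# Hypothetical example: benefit foregone by a schedule slip, assuming
# (for illustration) that benefits accrue at a constant monthly rate
# once the project delivers.
def delay_cost(annual_benefit, months_delayed):
    """Benefit lost to a delay, at a constant monthly run rate."""
    return annual_benefit / 12 * months_delayed

# A three-month slip on a project promising $1.2M/year in benefits:
print(delay_cost(1_200_000, 3))  # 300000.0
```

Even this simple arithmetic turns a status-report statement like “three months late” into a benefits-language statement: roughly $300K of promised value deferred or lost.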

7. “Perfectly wrong”? One often sees absurdly precise numbers in the estimated benefits section of a business case: “This new product will increase North American sales in the coming two fiscal years by $3,422,001.32.” Using precise numbers to estimate returns and articulate the required investment creates the “illusion of exactness” but is in reality “being perfectly wrong,” to quote a couple of well-versed folks at the roundtable. During the estimation phase, there is usually a great deal of uncertainty around the numbers driving ROI.

A good intake process should embrace that uncertainty rather than force exact numbers for convenience’s sake. Incorporating stochastic analysis and presenting ranges, or expected values with confidence intervals (e.g., “$3M–$4M”, or “$3.5M ± $500K”), provides a much more realistic picture of the data for decision-making purposes. Estimates can be adjusted as time goes by, and measurements can be compared against them to see whether they fall within the estimated range. This allows for meaningful measurement of success metrics, although it does require a shift in perspective, especially where the culture normally dictates precise numbers.
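One common way to produce such a range (a sketch of the general technique, not a specific tool mentioned at the roundtable) is a small Monte Carlo simulation: model each uncertain driver as a distribution rather than a point value, and report percentiles of the simulated outcomes. All figures and distributions below are illustrative assumptions.

```python
import random

# Hypothetical Monte Carlo sketch: turn uncertain drivers (units sold,
# net price) into a benefit *range* instead of one "perfectly wrong" number.
def simulate_benefit(trials=100_000, seed=42):
    random.seed(seed)  # fixed seed so the sketch is reproducible
    outcomes = []
    for _ in range(trials):
        units = random.triangular(8_000, 15_000, 11_000)  # low, high, mode
        price = random.uniform(280, 340)                  # net price per unit
        outcomes.append(units * price)
    outcomes.sort()
    # Report the 10th-90th percentile band as the estimate range.
    return outcomes[int(trials * 0.10)], outcomes[int(trials * 0.90)]

low, high = simulate_benefit()
print(f"Estimated benefit: ${low/1e6:.1f}M-${high/1e6:.1f}M (80% range)")
```

The business case then states “roughly $3M–$4M with 80% confidence” (or whatever band the simulation yields), and later measurements are judged against the band rather than against a false-precision point estimate.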

Care to share your thoughts? How does your organization estimate and measure realized benefits? How would you like to?