A balance is required between consistent decision making and the speed of reaching mutual agreement.
In one study, automobile drivers were asked to rate their driving skills. The results showed that 95% of the drivers considered themselves to be above average. This is statistically implausible, but more importantly, the source of inconsistency likely lies in how each driver defines “average”.
Conventional language – words such as “average” – conveys only an approximation and induces inconsistency; it is certainly not accurate enough for investment decisions. How can we do better?
Portfolio managers use project prioritization techniques to gain a better understanding of the relative value of projects within a portfolio. These techniques rank projects by assessing their impact on weighted “drivers” – i.e., measurable criteria derived from the organization’s strategy. The projects with the highest calculated contribution to the organization’s strategy make the best set to select and invest in.
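As a minimal sketch of this idea, the driver names, weights, and impact scores below are illustrative examples, not values from any real portfolio: each project is scored by a weighted sum of its impact on each strategic driver, and projects are ranked by that score.

```python
# Illustrative driver weights (would come from pairwise comparison; sum to 1.0)
DRIVER_WEIGHTS = {
    "grow_revenue": 0.5,
    "reduce_risk": 0.3,
    "improve_retention": 0.2,
}

# Hypothetical projects and their estimated impact on each driver (0-10 scale)
projects = {
    "Project A": {"grow_revenue": 8, "reduce_risk": 2, "improve_retention": 5},
    "Project B": {"grow_revenue": 3, "reduce_risk": 9, "improve_retention": 4},
}

def strategic_value(impacts: dict) -> float:
    """Weighted sum of a project's impact on each strategic driver."""
    return sum(DRIVER_WEIGHTS[d] * impacts[d] for d in DRIVER_WEIGHTS)

# Rank projects by calculated contribution to strategy, highest first
ranked = sorted(projects, key=lambda p: strategic_value(projects[p]), reverse=True)
```

With these example numbers, Project A scores 5.6 against Project B’s 5.0, so it ranks first; the whole method therefore hinges on how the driver weights are derived, which is the subject of the next step.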
Ranking these drivers (to derive their weight) is an important step in the process. Doing so in a way that is consistent across multiple stakeholders requires pair-wise comparison of drivers using a hypothetical investment question similar to the one below:
If you invest $1 in the first element of the pair…
…how much would you invest in the second element of the pair?
Note the best-practice terminology used: “Equally Important”, “Moderately More (or Less) Important”, “Strongly More (or Less) Important”, and “Extremely More (or Less) Important”. These labels leverage intuitive understanding and are coupled with numbers that provide accurate, quantified definitions, reducing vagueness and enhancing agreement and consistency in judgment.
Yet, as narrow as these definitions may be, they are still ranges; some vagueness remains and forces us to use approximation. For example, if we think that the second element is important enough to call for an investment of $5 for each $1 invested in the first, how would we label this judgment: ‘Moderately More’ (3x) or ‘Strongly More’ (6x)? We don’t have a definition for 5x. Even if we did, we’d face the same dilemma for 4x, etc.
We may attempt to create more measures, but that would increase the complexity of the comparison and add fuel to disagreements. When a group of people who think differently must decide together, more options can lead to longer, sometimes unsettling discussions. Some vagueness and approximation may, in this case, actually grease the wheels toward mutual agreement.
A normalized percentage is nowadays used to indicate how consistent our judgments are – how consistently we have judged the undefined areas. With multiple variables and several stakeholders, the chance of achieving 100% consistency is very low, perhaps nil. We may want to make peace with the fact that 80% consistency is as good as it gets. How well can you handle that?
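This kind of consistency check resembles the consistency ratio in Saaty’s Analytic Hierarchy Process, which the pairwise investment question above echoes. The sketch below, with an illustrative 3×3 judgment matrix (entry [i][j] answers “for $1 in driver i, how many dollars in driver j?”), derives driver weights by the geometric-mean method and then measures how self-consistent the judgments are; the exact scale and threshold are assumptions, not the article’s.

```python
import math

# Illustrative pairwise judgments: pairwise[i][j] = how many times more
# important driver i is than driver j (reciprocal below the diagonal).
pairwise = [
    [1.0, 3.0, 5.0],
    [1 / 3, 1.0, 3.0],
    [1 / 5, 1 / 3, 1.0],
]

def weights(m):
    """Driver weights via the geometric mean of each row, normalized to sum to 1."""
    n = len(m)
    gm = [math.prod(row) ** (1 / n) for row in m]
    total = sum(gm)
    return [g / total for g in gm]

def consistency_ratio(m):
    """Saaty-style consistency ratio: 0 means perfectly consistent judgments;
    values under about 0.10 are conventionally considered acceptable."""
    n = len(m)
    w = weights(m)
    # Principal eigenvalue estimate: average of (A.w)_i / w_i over all rows
    lam = sum(
        sum(m[i][j] * w[j] for j in range(n)) / w[i] for i in range(n)
    ) / n
    ci = (lam - n) / (n - 1)                # consistency index
    ri = {3: 0.58, 4: 0.90, 5: 1.12}[n]    # random index (Saaty's table)
    return ci / ri
```

Note the two framings point the same way: the article’s “80% consistent is as good as it gets” and AHP’s “consistency ratio under 0.10 is acceptable” both accept that human pairwise judgments never line up perfectly, and ask only that the inconsistency stay small.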