3 Joint Evaluation of Multiple Projects

Chapter 2 found that people struggle to aggregate risk even when provided with choice bracketing cues that could have built on an intuitive sense of how aggregation reduces risk. The finding that people are more likely to accept a gamble when many plays of it are offered at once (e.g., Samuelson, 1963; Wedell & Böckenholt, 1994), even without any aids for calculating risk, suggests that people can develop an intuition for the benefits of aggregation. Yet, in the current work, people instead considered projects one at a time and only leveraged the benefits of aggregation when given an explicit visualisation of what aggregation entails.

These findings suggest that organisational policy should encourage evaluating business projects jointly, so that their risk can be aggregated at the point of decision. In real-life capital allocation scenarios, where managers often evaluate projects sequentially, an aggregated distribution can instead be constructed from the projects considered in the recent past. A strategy of project risk aggregation can therefore be implemented at any stage of an organisation’s lifespan: relatively new ventures can simply wait until a sufficient number of project proposals has accrued before aggregating.
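
The risk-reducing effect of aggregating a batch of recently considered projects can be illustrated with a small simulation. The figures below (three projects, each with a 50% chance of gaining 200 and a 50% chance of losing 100) are invented purely for this sketch:

```python
import random

random.seed(1)

# Hypothetical project: 50% chance of +200, 50% chance of -100
# (in arbitrary units; all figures are illustrative only).
def project_outcome():
    return 200 if random.random() < 0.5 else -100

N = 100_000
single = [project_outcome() for _ in range(N)]
portfolio = [sum(project_outcome() for _ in range(3)) for _ in range(N)]

p_loss_single = sum(x < 0 for x in single) / N
p_loss_portfolio = sum(x < 0 for x in portfolio) / N

print(f"P(loss), single project:   {p_loss_single:.2f}")    # ~0.50
print(f"P(loss), three aggregated: {p_loss_portfolio:.2f}")  # ~0.12
```

Although each project alone loses money half the time, the aggregated batch loses money only when all three projects fail, which is the intuition the visualisation in Chapter 2 made explicit.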

Considering projects jointly is also useful for accountability purposes. The usual incentive structure in organisations, which judges each project outcome independently, is likely to punish risk-taking for its potential negative consequences rather than on the basis of the information that was available at the time of evaluation. Framing a set of projects as a portfolio means that the subsequent success or failure of any one project can be traced back to the entire batch, and the performance of the whole portfolio can be evaluated instead.

Business projects might not always be simply accepted or rejected, as they were in Chapter 2. Instead, top-level managers might solicit project proposals from lower-level managers and then allocate funds from the available budget. An organisation might also have an initial “culling” phase followed by a ranking phase: when a set of projects is first considered, some are rejected according to fixed rules (for instance, because a project’s NPV does not meet a minimum cut-off). The remaining projects can then be ranked in order of priority and allocated capital from the budget.
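
The two-phase process just described can be sketched as follows. The project names, NPVs, costs, cut-off, and budget are all hypothetical and chosen only to make the mechanics concrete:

```python
# Illustrative cull-then-rank allocation (all figures hypothetical).
proposals = [
    {"name": "A", "npv": 120, "cost": 300},
    {"name": "B", "npv": 15,  "cost": 100},
    {"name": "C", "npv": 80,  "cost": 250},
    {"name": "D", "npv": 45,  "cost": 200},
]
CUTOFF = 20   # minimum acceptable NPV
BUDGET = 600  # total capital available

# Phase 1: cull proposals that fail the cut-off rule.
survivors = [p for p in proposals if p["npv"] >= CUTOFF]

# Phase 2: rank survivors by NPV and fund them until the budget runs out.
funded, remaining = [], BUDGET
for p in sorted(survivors, key=lambda p: p["npv"], reverse=True):
    if p["cost"] <= remaining:
        funded.append(p["name"])
        remaining -= p["cost"]

print(funded)  # ['A', 'C'] under these example figures
```

Here project B is culled outright, and project D survives the cut-off but is unfunded because the higher-ranked projects exhaust the budget, illustrating how culling and ranking reject projects for different reasons.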

A few potential problems arise at the point that projects are considered jointly for ranking and allocation. For instance, the projects in the set may not be easy to compare with one another. As discussed in Chapter 1, diversification of business units has become very popular in large organisations, so most hierarchical organisations are likely to face difficult comparisons when ranking and allocating capital to projects that originated in different divisions. A non-hierarchical organisation that develops one type of product may be able simply to compare projects on any number of intrinsic attributes, whereas a diversified organisation is likely to have to rely on more abstract financial metrics, such as NPV. Such metrics are “abstract” because they can be applied in almost any domain.

For instance, two oil well projects can be compared both on attributes intrinsic to the projects, such as the volume of hydrocarbons extracted per hour, and on more abstract financial metrics. There is thus a potential interaction between how easily managers can compare projects and the kinds of measures used to make the comparison. Two similar projects, such as two oil wells, can be evaluated in litres of hydrocarbons extracted per hour, whereas an oil well and an oil refinery cannot. When two dissimilar projects are compared, managers can instead use financial metrics to compare across domains, which can yield comparable accuracy as long as the abstract metrics are as reliable as the intrinsic project features.

A concern arising from reliance on such metrics is that their underlying variance is not taken into account. Forecast estimates such as NPV rest on many assumptions and contain considerable inherent uncertainty, so managers who use them should be cautious about over-relying on them. Chapter 4 therefore tests people’s sensitivity to information about forecast variance: will people rely on NPV more when the variance information suggests that it is a reliable measure than when it suggests that it is unreliable?
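
One simple way to formalise what “using NPV more when it is reliable” could mean is precision weighting, where each project’s NPV is discounted by the variance of its forecast when a budget is split. This is only a normative sketch, not the task used in Chapter 4, and all numbers are hypothetical:

```python
# Precision-weighted allocation sketch: two projects with identical NPV
# forecasts but different forecast variances (hypothetical figures).
projects = {"X": (100, 10.0), "Y": (100, 40.0)}  # name: (NPV, forecast variance)

# Weight each NPV by its precision (1 / variance), then split 100 units
# of budget in proportion to the weights.
weights = {name: npv / var for name, (npv, var) in projects.items()}
total = sum(weights.values())
allocation = {name: round(100 * w / total) for name, w in weights.items()}

print(allocation)  # the low-variance project X receives the larger share
```

A decision maker who is sensitive to variance information should, like this sketch, shift allocations towards the project whose forecast is more reliable even when the point estimates are identical.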

Chapter 2 manipulated project presentation and found no significant difference between joint and separate consideration of projects, a result explained by the bounds on people’s ability to aggregate intuitively. However, it was unclear which components of the projects people focused on, both because those components were not explicitly manipulated and because the task involved a binary choice (accept or reject). A relative allocation measure for multiple projects with systematically varied attributes would make it possible to determine the influence of those different attributes. Chapter 4 therefore considers the situation in which projects are presented together and people are asked to evaluate them by allocating a hypothetical budget.

Further, Chapter 4 identifies the factors that affect people’s decisions independently of the potential risk of losing hypothetical money, which largely drove the effects in the previous chapter. Risk aversion is accounted for by making it clear that no losses are possible: only positive NPVs are used, implying that no project is forecast to lose money.

Chapter 4 also manipulates how easy the project attributes are to compare. This helps identify the ways in which decision-making in a diversified organisation may differ from that in a more integrated organisation. Chapter 2 manipulated similarity by showing either a set of projects from the same industry or a set from different industries, meant to simulate an integrated and a diversified firm, respectively. That manipulation was relatively weak because there were no project attributes that could be aligned or misaligned; that is, there was nothing actually non-alignable, which may explain the equivocal similarity effect. In Chapter 4, alignability is manipulated more fully by making project attributes critical to the evaluation, and these attributes are shown explicitly so that the difficulty of the comparison is more apparent.


Samuelson, P. A. (1963). Risk and Uncertainty: A Fallacy of Large Numbers. Scientia, 57(98), 108–113. https://www.casact.org/sites/default/files/database/forum_94sforum_94sf049.pdf

Wedell, D. H., & Böckenholt, U. (1994). Contemplating Single versus Multiple Encounters of a Risky Prospect. The American Journal of Psychology, 107(4), 499. https://doi.org/10/b4fs2p