7 Discussion

This thesis investigated the psychology of capital allocation decisions. The influence of psychological factors on such decisions has not been sufficiently considered in the literature despite their importance to the performance of hierarchical organisations. This gap is likely due to the management literature’s greater focus on the role of organisational influences on firm performance. The thesis did not investigate expertise effects, but instead focused largely on participants without management experience. This allowed a study of the specific cognitive processes without the potential confound of experience. It is worth noting, however, that in the one case where the work examined people with management experience, the pattern of results was largely the same as with naive participants. Each of the empirical chapters investigated distinct but related processes that are relevant to the capital allocation process. These chapters investigated whether people were able to account for the benefits of aggregation when considering multiple projects (Chapter 2), the influence of project feature alignability and metric variance when comparing projects directly (Chapter 4), and the influence of project anecdote similarity when the anecdote conflicts with statistical evidence (Chapter 6). Section 7.1 will first summarise the results of the empirical chapters, and Sections 7.2 and 7.3 will then discuss their theoretical and practical implications, respectively. Section 7.4 will conclude the thesis.

7.1 Summary of Results

Chapter 2 investigated participants’ choice of risky business projects when these are displayed sequentially and without feedback between decisions. This design modelled the real-life situation that managers face in hierarchical organisations: an evaluation of a set of separate business project proposals over time, with no immediate indication of the performance of those projects. Aggregating a portfolio of such projects is likely to reveal a lower overall chance of loss than the individual projects suggest. The results from this chapter showed that people not only failed to do this spontaneously, but also were not helped by manipulations that encouraged grouping choices together as a portfolio. People only seemed to recognise the benefits of aggregation when they were presented with an outcome probability distribution of the aggregated set of projects. There was no strong evidence that more subtle manipulations aimed at encouraging aggregation worked. Specifically, presenting projects together, specifying the total number of projects, and presenting projects that were all from the same industry did not reliably encourage aggregation.

Chapter 4 investigated capital allocation when projects were evaluated jointly and capital was allocated as a proportion of the budget, rather than as a binary choice. The main manipulation was whether all the project attributes were alignable, or only the abstract financial metric (NPV) was alignable. The reliability of NPV as a metric was also manipulated. This information was expressed either as explicit verbal instruction or as numerical ranges. The results showed that when reliability information was presented verbally, participants used NPV appropriately when all project attributes were completely alignable. That is, they used it when it was reliable and used the intrinsic project features when it was unreliable. When only NPV was alignable, participants relied on it almost exclusively. However, when reliability information was presented numerically, participants’ allocation did not depend on the ranges: participants used NPV even when they had an opportunity to use the intrinsic features of the project. Overall, however, participants relied on NPV more when projects were low in alignment than when they were high in alignment.

Chapter 6 investigated the effect of anecdote similarity on allocations when the anecdote conflicted with the statistical data. Participants were asked to allocate a hypothetical budget between two projects. One of the projects (the target project) was clearly superior in terms of the provided statistical measures, but some of the participants also saw a description of a project with a conflicting outcome (the anecdotal project). This anecdotal project was always in the same industry as the target project, but its description either contained substantive connections to the target or did not. In addition, the anecdote conflicted with the statistical measures by being either successful (positive anecdote) or unsuccessful (negative anecdote). The results showed that participants’ decisions were influenced by anecdotes only when they believed that these were actually relevant to the target project, and even then participants still incorporated the statistical measures into their decision. This was found for both positive and negative anecdotes. Participants were also given information about the way that the anecdotes were sampled which suggested that the statistical information should have been used in all cases. Participants did not use this information in their decisions and still showed an anecdotal bias effect. Therefore, people seem to appropriately use anecdotes based on their relevance, but do not understand the implications of certain statistical concepts.

Together, these results show the bounds of people’s decision-making capability in capital allocation. The participants in these experiments in general behaved rationally but struggled to incorporate certain statistical concepts into their decisions. Further, when confronted with multi-attribute choice, participants tended to allocate capital using a trade-off strategy. This was seen in the conflict between intrinsic project attributes and NPV in Chapter 4 and the conflict between the anecdotal and statistical evidence in Chapter 6. Participants’ allocations were informed by relevant factors when these were sufficiently clear (as in the verbal reliability condition in Chapter 4). However, participants struggled to do this when the factor involved using a relatively basic statistical concept. Each empirical chapter included such a concept: risk aggregation in Chapter 2, metric variance in Chapter 4, and sample distribution in Chapter 6. The aggregated distribution in Chapter 2 and the verbal reliability manipulation in Chapter 4 showed that a formal understanding of such concepts is not always necessary if they are expressed explicitly.

The statistical concepts used in these studies are all likely accessible to people without much formal mathematical knowledge. A basic concept of risk aggregation is clearly available to laypeople, as seen in responses to multi-play gambles (e.g., one vs. 100 gambles). Further, people certainly have a basic understanding of numerical ranges, and know that a wider range means more spread. Despite likely having this understanding, participants in the above experiments were unable to use it in their decisions. Similarly, other work has shown that generalisations are sensitive to sampling (Carvalho et al., 2021). Therefore, it is unlikely that the people in the thesis experiments simply lacked any understanding of these statistical concepts (or at least sensitivity to this kind of information). Instead, there appear to be important contextual factors that critically support or prevent people from showing their intuitive understanding. Unfortunately, the methods used in this thesis resemble the real decisions managers make more closely than do those of the prior research showing that people can reason with these kinds of statistical concepts. Further, it is not clear that these effects will simply disappear with more maths knowledge and business experience. Previous work showed that expertise does not always remove biases, and in some cases seems to augment such effects (e.g., Haigh & List, 2005).

7.2 Theoretical Implications

The main theoretical contribution of this thesis is evidence that further specifies the conditions under which people make rational decisions in capital allocation scenarios. People made good decisions most of the time, but sometimes did not use relevant information in their decisions. Amos Tversky explained in his response to Cohen (1981, p. 355) that the work on heuristics and biases “portrayed people as fallible, not irrational.” That is, people are not constantly making mistakes, but often behave rationally, largely due to adaptive heuristics. However, shortcuts that are usually helpful can sometimes fail. Studying such biases is similar to the way that optical illusions help researchers understand the visual system: in both cases, a system that functions properly most of the time sometimes reveals its deficits.

Similarly, Simon (1955) identified human rationality as bounded, meaning that people’s cognitive processes are limited. The main aim of the thesis was to contribute evidence for the ways that capital allocation decisions are bounded. To this end, in each experiment, participants were given capital allocation scenarios alongside both cues that describe their options and cues that frame the options in different ways. Identifying which cues participants used in their decisions, which cues they ignored, and which cues they integrated made it possible to specify the bounds of people’s cognitive capacity in these decisions. The experiments showed that people struggle to use certain statistical concepts in their decisions, but that they are also capable of making nuanced trade-offs and can be assisted by decision aids. Understanding how decision-making in capital allocation is constrained and biased is important in order to improve decision-making. Even if decisions are largely consistent with normative concepts, falling prey to the biases identified in this thesis can have severe consequences for organisations.

7.2.1 Statistical Concepts

Chapter 2 presented participants with a capital allocation situation in which an understanding of risk aggregation would have led to beneficial outcomes. Investing in all the hypothetical projects would have led to a much higher chance of gaining money than of losing any. Each choice bracketing manipulation provided a hint of the possibility of combining the choices in this way. However, participants did not need to compute the aggregated value of the prospects themselves. An intuitive understanding of aggregation would simply involve recognising that some of the gambles will pay off and make up for those that lose. This was not seen, however, with only weak evidence that people were influenced by the more subtle choice bracketing manipulations. Instead, people only seemed to respond to the concept of aggregation when it was explicitly showcased. Showing people a distribution of the outcome probabilities explicitly visualised the extent to which aggregating the risks can lead to a very low chance of loss.
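The benefit that participants failed to exploit can be demonstrated with simple arithmetic. The sketch below uses hypothetical payoff values (not the actual stimuli from Chapter 2) to compute the exact probability that a portfolio of independent 50/50 gambles ends in a net loss:

```python
from math import comb

def loss_probability(n, p=0.5, gain=2000, loss=1000):
    """Exact probability that a portfolio of n independent gambles
    (probability p of winning +gain, otherwise losing -loss)
    ends with a net loss overall. Payoff values are illustrative."""
    total = 0.0
    for k in range(n + 1):  # k = number of winning gambles
        net = k * gain - (n - k) * loss
        if net < 0:
            total += comb(n, k) * p**k * (1 - p)**(n - k)
    return total

print(loss_probability(1))    # 0.5: a single gamble loses half the time
print(loss_probability(10))   # ~0.17: ten gambles rarely lose overall
print(loss_probability(100))  # well under 0.001
```

With these illustrative values, a single gamble loses half the time, a portfolio of ten loses only about 17% of the time, and a portfolio of 100 almost never does. This is the kind of pattern that the outcome probability distribution made visible to participants without requiring them to do any computation themselves.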

In Chapter 4, the NPVs that participants saw were critical to the allocation task. In the low alignment condition, NPV was the only alignable attribute in the comparison. In the high alignment condition, however, NPV was in competition with the intrinsic project feature values. An understanding of how to use numerical variance would have allowed participants to allocate capital according to the implied reliability of the comparison metric. In the low alignment condition, NPV was the only easy way to compare across projects, so it was a more useful cue than the rest of the non-alignable values. However, in the high alignment condition, the extent of numerical variance associated with each NPV could have been used to determine NPV reliability. There were two ways to do this: (a) noticing that in the low numerical reliability condition the ranges all overlapped, and (b) noticing the difference in the width of the ranges between the two within-subjects reliability level conditions. By doing this, participants in the high alignment condition would have known to use NPV when the ranges were narrow, and to rely more (or exclusively) on the intrinsic values when the ranges were wider and overlapping. Participants were able to do this sort of conditional allocation when reliability was expressed explicitly in words, but not when it was expressed numerically.
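The conditional strategy described above can be expressed as a simple rule. The following sketch is illustrative only; the overlap test, the width threshold, and the function names are assumptions, not part of the experimental materials:

```python
def ranges_overlap(a, b):
    """True if two (low, high) intervals share any values."""
    return a[0] <= b[1] and b[0] <= a[1]

def npv_is_reliable(npv_ranges, max_relative_width=0.2):
    """Treat NPV as a reliable comparison metric only when the projects'
    ranges are mutually non-overlapping and each range is narrow
    relative to its midpoint. Threshold is an illustrative assumption."""
    for i in range(len(npv_ranges)):
        for j in range(i + 1, len(npv_ranges)):
            if ranges_overlap(npv_ranges[i], npv_ranges[j]):
                return False
    for low, high in npv_ranges:
        mid = (low + high) / 2
        if mid and (high - low) / abs(mid) > max_relative_width:
            return False
    return True

print(npv_is_reliable([(90, 110), (140, 160)]))  # True: narrow, separated
print(npv_is_reliable([(50, 150), (100, 200)]))  # False: wide, overlapping
```

Under such a rule, narrow, separated ranges would license the use of NPV, while wide, overlapping ranges would direct attention to the intrinsic project features.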

In Chapter 6, participants did not make use of descriptive information about the anecdote sample distribution. As in Chapter 4, participants were confronted with a conflict of cues: statistical information vs. a potentially relevant anecdote. Regardless of the similarity manipulation, a consideration of the sample from which the anecdote was drawn should have informed how the anecdote was used. Imagine a distribution that represents the similarity of all the individual projects in the sample. That is, the x-axis represents the similarity to the target project and the y-axis is the frequency of projects at each level of similarity. Even if the sampled anecdote appears very relevant to the target project, if the underlying distribution of the sample is highly negatively skewed, such that most projects in the sample are equivalently similar to the target, then the sampled anecdote is not necessarily more informative than the aggregated measure. On the other hand, if the underlying distribution is positively skewed, normally distributed, or even uniform, then the fact that the sampled anecdote appears highly relevant to the target project may actually mean that it is more informative than the aggregated measure. Prior work shows that people can reason about distributions effectively when experiencing the sampling directly (e.g., Hertwig et al., 2004; Carvalho et al., 2021). Chapter 6 shows that people struggle to use this information when it is described verbally.
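One way to make this reasoning concrete is to ask where the anecdote’s similarity falls within the sample’s similarity distribution. The sketch below is purely illustrative; the similarity values and the percentile rule are assumptions rather than the materials used in Chapter 6:

```python
def anecdote_informativeness(anecdote_sim, sample_sims):
    """Fraction of the sample strictly less similar to the target than the
    anecdote. Near 1.0, the anecdote is unusually relevant and may add
    information beyond the aggregate statistic; near 0.5 or below, it is
    typical of the sample and the aggregate should dominate."""
    below = sum(s < anecdote_sim for s in sample_sims)
    return below / len(sample_sims)

# Negatively skewed sample: most projects are already highly similar,
# so a 0.9-similar anecdote is unexceptional.
print(anecdote_informativeness(0.9, [0.85, 0.9, 0.88, 0.92, 0.9]))  # 0.4
# Positively skewed sample: the same anecdote is genuinely exceptional.
print(anecdote_informativeness(0.9, [0.1, 0.2, 0.15, 0.3, 0.9]))    # 0.8
```

The same anecdote therefore warrants different weight depending on the shape of the distribution it was drawn from, which is exactly the information participants failed to use.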

While people struggled to understand and use certain statistical concepts, they still seemed able to integrate multiple cues and make trade-offs. As discussed in Chapter 5, both Chapters 4 and 6 provided participants with more than one cue to use for project evaluation. In Chapter 4, people seemed to strike a trade-off between NPV and the intrinsic project features, as opposed to choosing one or the other with a consistent strategy. In Chapter 6, the anecdotal and statistical evidence provided conflicting cues for each target project. However, participants allocated as if both the anecdotes and the statistics had some relevance. Here too, participants appeared to integrate the influence of these two cues, as opposed to picking a consistent evidence reliance strategy for their allocation decisions. These findings might be explained through satisficing (Simon, 1955) or a constraint satisfaction model (e.g., Glöckner et al., 2014). Future research can test these explanations, as well as further clarify to what extent constructs such as need for cognition or mathematical skill may explain individual differences.

7.2.2 Decision Aids

While trade-offs allow people to integrate multiple cues, decision aids allow people to use statistical concepts for more nuanced decision-making. Chapter 2 found that people’s understanding of risk aggregation was facilitated when the mathematical work was done for them and the aggregated values were displayed visually as a distribution. However, a follow-up experiment to Chapter 4 (detailed in Appendix B.7) found that even explicit instructions sometimes do not work. That is, even explaining the way that ranges can be used as reliability information, and telling participants how to implement this in the capital allocation task, did not facilitate proper use of the ranges.

Future work should investigate the impact of visualisation on people’s use of variance information in these situations. Much work has investigated visualising uncertainty information (Bostrom et al., 2008; Brodlie et al., 2012; Davis & Keller, 1997; Johnson & Sanderson, 2003; Kinkeldey et al., 2017; Kox, 2018; Lapinski, 2009; Lipkus, 2007; Lipkus & Hollands, 1999; MacEachren, 1992; Padilla et al., 2018; Pang et al., 1997; Potter et al., 2012; Ristovski et al., 2014; Spiegelhalter et al., 2011; Torsney-Weir et al., 2015). For instance, a Hypothetical Outcome Plot (Hullman et al., 2015; Kale et al., 2019) expresses variance information as dynamic plots and is one method that is likely to be beneficial to people’s understanding of ranges as used in this thesis. Visualisation could also apply to the work in Chapter 6. Using a visual array as in Jaramillo et al. (2019) is likely to facilitate people’s understanding of the importance of statistical evidence over anecdotes. However, an additional visualisation of the distribution of the underlying similarity to the target may also be necessary to facilitate understanding of the relevance of the sample distribution. Ultimately, visualisations of the effects of certain statistical concepts may be necessary for people to use them appropriately.

7.2.3 How Bounded is Bounded Rationality?

The boundary between the cues that participants were able to use and the statistical concepts that they did not use is unclear. That is, the cues that they were able to use were not trivial, and the concepts that they were not able to use are relatively basic. For instance, the finding in Chapter 6 that people were able to use relevance information to guide their allocations shows an ability to use quite specific information to inform choice. On the other hand, the statistical concepts that participants ignored in each empirical chapter are all relatively intuitive. While concepts of aggregation, variance, and sample distribution are typically studied at a tertiary level, they can be understood when acted out or experienced.

Clark and Karmiloff-Smith (1993) proposed a distinction between two levels of representing knowledge. At the implicit level an individual can only make use of a certain system of knowledge, while it is only at the explicit level that they develop insight into that system. For instance, young children can use closed class words such as “the” or “to”, but only identify them as words later in development. Further, children’s play often implicitly contains many mathematical concepts, despite children struggling to explicitly reason with the exact same concepts in more formal problem-solving (Sarama & Clements, 2009). Adults may have a similar distinction in knowledge representation. Concepts that can be used when experienced directly, such as in risky choice from experience, may not be represented in a way that allows them to be used when presented descriptively, such as in risky choice from description. This kind of distinction may explain why participants in the thesis experiments failed to use concepts that have been shown to be accessible to laypeople.

7.2.4 Expertise Effects

Future research should investigate the potential expertise effects that may influence the findings of the thesis. This is important because of the potential downstream effects of biased managerial decision-making. For instance, it is unclear to what extent psychological factors such as the ones discussed in this thesis may account for the finding that undiversified firms often perform better than diversified firms. On the one hand, business professionals tend to work with numbers, so the effects found in this thesis may be less pronounced for them. For instance, Smith and Kida (1991) reviewed the heuristics and biases literature and concluded that certain cognitive biases are not as strong for accounting professionals as they are for naive participants.

On the other hand, these effects may actually be stronger in managers. For instance, Haigh and List (2005) found that professional traders show more myopic loss aversion than students. Chapter 2 showed that people tend to consider risky choices one at a time and therefore tend to be more risk averse to a set of projects than they would be if the risks were aggregated. Managers might be even more risk averse in these situations because of the increased stakes for their jobs. Lovallo et al. (2020) discussed the ways in which managers tend to have a blind spot for such project evaluations: they aggregate their personal stock market portfolio, but not their intra-firm project portfolio.

Chapter 4 found evidence of variance neglect for both laypeople and Master of Management students. Further, in the case of the work in Chapter 6, it is possible that business managers prefer anecdotal cases to inform their decisions because of their higher salience compared with statistical data. Managers are also more likely to feel as if the situation is relevant to them, which according to Freling et al. (2020) would predict more anecdotal bias.

7.3 Practical Implications

The findings of this thesis have a number of potential implications for managerial decision-making. Despite the uncertainty about potential expertise effects, this section assumes that the findings of the thesis generalise to experienced managers, if not in degree, then at least qualitatively. Management researchers have suggested ways of overcoming psychological biases in managerial decision-making ever since such biases were identified. Many practitioner-oriented papers have used the findings of the judgement under uncertainty literature both to explain managerial decision-making processes and to suggest ways of reducing bias (Courtney et al., 1997, 2013; Hall et al., 2012; Koller et al., 2012; Lovallo & Sibony, 2014; Sibony et al., 2017), with only some specifically focused on capital allocation decisions (Birshan et al., 2013). This section will review some of the implications the findings of this thesis may have for both organisational policies and manager decision-making.

The findings of Chapter 2 show that the framing of business project proposals is important for the way that people perceive their risk. Specifically, in order to better account for the risks of business projects it is important to (a) make it easier for managers to group projects together, and (b) aggregate a portfolio of projects for them. This suggests implementing organisational changes that will facilitate the capital allocation process. For instance, Lovallo et al. (2020) suggested that companies change the frequency with which they evaluate projects to better allow for an aggregation of the projects. Doing so would enable an explicit computation of the aggregated values, and therefore a visualisation of the outcome probability distribution. Such a process could facilitate aggregation without a need to rely on managers’ intuition during sequential project evaluation decisions.

One implication of Chapter 4 is that it is important to expose the variance that underlies abstract financial measures. Further, translating such numerical variance estimates into clear verbal information would help facilitate managers’ understanding and implementation of such estimates. Organisational changes could include reducing diversification so that there is less reliance on abstract metrics. This would allow for more of a comparison between alignable project attributes, potentially reducing forecast error. Koller et al. (2017) found that companies with more similar business units report faster growth and greater profitability than competitors, compared with companies with dissimilar business units. Further, companies can also work to develop better metrics and establish norms about how much to discount a metric given its underlying variance.

The main implication of Chapter 6 is that managers should pay attention to the way that they compare target projects to other cases. It is important to collect prior cases that are relevant, and to have as many such cases as possible. Ideally, each such prior case should be weighted by its similarity to the target (Lovallo et al., 2012). If this is done, the distribution of similarity in the sample is taken into account when the cases are subsequently aggregated. When identifying such similarity ratings, it is important to focus on relevant underlying structure. This would reduce erroneous connections to cases that have mere surface similarity. This distinction is also relevant when only one prior case can be found. Research on analogy shows that analogical comparison helps expose the underlying relational structure between objects (e.g., Kurtz et al., 2013; Markman & Gentner, 1993). Therefore, managers should take care to identify such relational structures first, before making subsequent inferences.

Addressing these psychological effects will help eliminate some of the biases in the capital allocation process, but will not address other related biases. For instance, the above effects all involve decisions that require an evaluation of financial forecast estimates such as future cash flows and the related uncertainty. Therefore, a further source of error could arise from the initial estimation of these probability and cash flow values. For instance, such estimates could be influenced by optimism or confidence biases. These biases, however, can in turn also be addressed (Flyvbjerg et al., 2018).

7.4 Conclusion

Capital allocation decisions are consequential for large organisations. This thesis tested the conditions under which people behave rationally or are fallible when allocating capital. The experiments found that participants struggle to incorporate concepts such as risk aggregation, estimate variance, and sample distribution into their decisions. Participants only seemed able to do this when the concept was made visually explicit. However, when there were multiple cues for choice evaluation, the results also showed that participants were capable of integrating conflicting information in their decisions. Identifying such cognitive bounds helps to better understand how people evaluate multiple choices and helps future research develop methods to facilitate better decisions.


Birshan, M., Engel, M., & Sibony, O. (2013). Avoiding the quicksand: Ten techniques for more agile corporate resource allocation. McKinsey Quarterly. https://www.mckinsey.com/business-functions/strategy-and-corporate-finance/our-insights/avoiding-the-quicksand

Bostrom, A., Anselin, L., & Farris, J. (2008). Visualizing Seismic Risk and Uncertainty: A Review of Related Research. Annals of the New York Academy of Sciences, 1128(1), 29–40. https://doi.org/10/c32d9k

Brodlie, K., Allendes Osorio, R., & Lopes, A. (2012). A Review of Uncertainty in Data Visualization. In J. Dill, R. Earnshaw, D. Kasik, J. Vince, & P. C. Wong (Eds.), Expanding the Frontiers of Visual Analytics and Visualization (pp. 81–109). Springer London. https://doi.org/10.1007/978-1-4471-2804-5_6

Carvalho, P. F., Chen, C.-h., & Yu, C. (2021). The distributional properties of exemplars affect category learning and generalization. Scientific Reports, 11(1), 11263. https://doi.org/10.1038/s41598-021-90743-0

Clark, A., & Karmiloff-Smith, A. (1993). The cognizer’s innards. Mind & Language, 8(4), 487–519. https://doi.org/10/csfpck

Cohen, L. J. (1981). Can human irrationality be experimentally demonstrated? Behavioral and Brain Sciences, 4(3), 317–331. https://doi.org/10/fn9rpc

Courtney, H., Kirkland, J., & Viguerie, P. (1997). Strategy under uncertainty. Harvard Business Review, 75(6), 67–79. https://hbr.org/1997/11/strategy-under-uncertainty

Courtney, H., Lovallo, D., & Clarke, C. (2013). Deciding How to Decide. Harvard Business Review, 91(11), 62–70.

Davis, T. J., & Keller, C. P. (1997). Modelling and visualizing multiple spatial uncertainties. Computers & Geosciences, 23(4), 397–408. https://doi.org/10.1016/S0098-3004(97)00012-5

Flyvbjerg, B., Ansar, A., Budzier, A., Buhl, S., Cantarelli, C., Garbuio, M., Glenting, C., Holm, M. S., Lovallo, D., Lunn, D., Molin, E., Rønnest, A., Stewart, A., & van Wee, B. (2018). Five things you should know about cost overrun. Transportation Research Part A: Policy and Practice, 118, 174–190. https://doi.org/10/ghdgv4

Freling, T. H., Yang, Z., Saini, R., Itani, O. S., & Rashad Abualsamh, R. (2020). When poignant stories outweigh cold hard facts: A meta-analysis of the anecdotal bias. Organizational Behavior and Human Decision Processes, 160, 51–67. https://doi.org/10/gg4t2f

Glöckner, A., Hilbig, B. E., & Jekel, M. (2014). What is adaptive about adaptive decision making? A parallel constraint satisfaction account. Cognition, 133(3), 641–666. https://doi.org/10/f6q9fj

Haigh, M. S., & List, J. A. (2005). Do Professional Traders Exhibit Myopic Loss Aversion? An Experimental Analysis. The Journal of Finance, 60(1), 523–534. https://doi.org/10/c7jn9k

Hall, S., Lovallo, D., & Musters, R. (2012). How to put your money where your strategy is. McKinsey Quarterly. https://www.mckinsey.com/business-functions/strategy-and-corporate-finance/our-insights/how-to-put-your-money-where-your-strategy-is

Hertwig, R., Barron, G., Weber, E. U., & Erev, I. (2004). Decisions from experience and the effect of rare events in risky choice. Psychological Science, 15(8), 534–539. https://doi.org/10/b274n8

Hullman, J., Resnick, P., & Adar, E. (2015). Hypothetical Outcome Plots Outperform Error Bars and Violin Plots for Inferences about Reliability of Variable Ordering. PLOS ONE, 10(11), e0142444. https://doi.org/10/f3tvsd

Jaramillo, S., Horne, Z., & Goldwater, M. (2019). The impact of anecdotal information on medical decision-making [Preprint]. PsyArXiv. https://doi.org/10.31234/osf.io/r5pmj

Johnson, C. R., & Sanderson, A. R. (2003). A Next Step: Visualizing Errors and Uncertainty. IEEE Computer Graphics and Applications, 23(5), 6–10. https://doi.org/10/df8kvd

Kale, A., Nguyen, F., Kay, M., & Hullman, J. (2019). Hypothetical Outcome Plots Help Untrained Observers Judge Trends in Ambiguous Data. IEEE Transactions on Visualization and Computer Graphics, 25(1), 892–902. https://doi.org/10/gghfzn

Kinkeldey, C., MacEachren, A. M., Riveiro, M., & Schiewe, J. (2017). Evaluating the effect of visually represented geodata uncertainty on decision-making: Systematic review, lessons learned, and recommendations. Cartography and Geographic Information Science, 44(1), 1–21. https://doi.org/10/f3m63m

Koller, T., Lovallo, D., & Williams, Z. (2012). Overcoming a bias against risk. McKinsey Quarterly, 4. https://www.mckinsey.com/business-functions/strategy-and-corporate-finance/our-insights/overcoming-a-bias-against-risk

Koller, T., Lovallo, D., & Williams, Z. (2017). Should assessing financial similarity be part of your corporate portfolio strategy? McKinsey on Finance, 64. https://www.mckinsey.com/business-functions/strategy-and-corporate-finance/our-insights/should-assessing-financial-similarity-be-part-of-your-corporate-portfolio-strategy

Kox, E. (2018). Evaluating the effectiveness of uncertainty visualizations: A user-centered approach [Masters thesis, University of Utrecht]. http://dspace.library.uu.nl/handle/1874/367380

Kurtz, K. J., Boukrina, O., & Gentner, D. (2013). Comparison promotes learning and transfer of relational categories. Journal of Experimental Psychology: Learning, Memory, and Cognition, 39(4), 1303. https://doi.org/10/gjvc63

Lapinski, A.-L. S. (2009). A Strategy for Uncertainty Visualization Design. Defence R&D Canada – Atlantic. https://apps.dtic.mil/sti/citations/ADA523694

Lipkus, I. M. (2007). Numeric, Verbal, and Visual Formats of Conveying Health Risks: Suggested Best Practices and Future Recommendations. Medical Decision Making, 27(5), 696–713. https://doi.org/10/b8p3gf

Lipkus, I. M., & Hollands, J. G. (1999). The Visual Communication of Risk. JNCI Monographs, 1999(25), 149–163. https://doi.org/10/gd589v

Lovallo, D., Clarke, C., & Camerer, C. (2012). Robust analogizing and the outside view: Two empirical tests of case-based decision making. Strategic Management Journal, 33(5), 496–512. https://doi.org/10/dnkh8m

Lovallo, D., Koller, T., Uhlaner, R., & Kahneman, D. (2020). Your Company Is Too Risk-Averse. Harvard Business Review, 98(2), 104–111.

Lovallo, D., & Sibony, O. (2014). Is your budget process stuck on last year’s numbers? McKinsey Quarterly. https://www.mckinsey.com/business-functions/strategy-and-corporate-finance/our-insights/is-your-budget-process-stuck-on-last-years-numbers

MacEachren, A. M. (1992). Visualizing Uncertain Information. Cartographic Perspectives, 13, 10–19. https://doi.org/10/gjscq9

Markman, A. B., & Gentner, D. (1993). Structural Alignment during Similarity Comparisons. Cognitive Psychology, 25(4), 431–467. https://doi.org/10/cqtx7q

Padilla, L. M., Creem-Regehr, S. H., Hegarty, M., & Stefanucci, J. K. (2018). Decision making with visualizations: A cognitive framework across disciplines. Cognitive Research: Principles and Implications, 3(1), 29. https://doi.org/10/ggrtng

Pang, A. T., Wittenbrink, C. M., & Lodha, S. K. (1997). Approaches to uncertainty visualization. The Visual Computer, 13(8), 370–390. https://doi.org/10/fdnbmw

Potter, K., Kirby, R. M., Xiu, D., & Johnson, C. R. (2012). Interactive visualization of probability and cumulative density functions. International Journal for Uncertainty Quantification, 2(4), 397–412. https://doi.org/10/ghhdw2

Ristovski, G., Preusser, T., Hahn, H. K., & Linsen, L. (2014). Uncertainty in medical visualization: Towards a taxonomy. Computers & Graphics, 39, 60–73. https://doi.org/10/f5v59d

Sarama, J., & Clements, D. H. (2009). Building Blocks and Cognitive Building Blocks: Playing to Know the World Mathematically. American Journal of Play, 1(3), 313–337. https://eric.ed.gov/?id=EJ1069014

Sibony, O., Lovallo, D., & Powell, T. C. (2017). Behavioral Strategy and the Strategic Decision Architecture of the Firm. California Management Review, 59(3), 5–21. https://doi.org/10/gcp2w3

Simon, H. A. (1955). A Behavioral Model of Rational Choice. The Quarterly Journal of Economics, 69(1), 99. https://doi.org/10/dw3pfg

Smith, J. F., & Kida, T. (1991). Heuristics and biases: Expertise and task realism in auditing. Psychological Bulletin, 109(3), 472–489. https://doi.org/10/fwv6z6

Spiegelhalter, D., Pearson, M., & Short, I. (2011). Visualizing uncertainty about the future. Science, 333(6048), 1393–1400. https://doi.org/10.1126/science.1191181

Torsney-Weir, T., Sedlmair, M., & Möller, T. (2015, October). Decision making in uncertainty visualization. VDMU Workshop on Visualization for Decision Making under Uncertainty 2015. http://eprints.cs.univie.ac.at/4598/