A Retrospective on “Judgment in a Crisis”


In class today, we completed the Harvard Business simulation titled “Organizational Behavior Simulation: Judgment in a Crisis”. It put you in the role of product manager at Matterhorn Health, a fictional company that has just released the “GlucoGauge”, a self-administered device used to track glucose levels. The product launch has not gone well: while the target was a 10% tolerance in accuracy, data from the field suggests variance of up to 30%. The simulation is about your strategies for dealing with this issue, with particular focus on the way cognitive biases can affect our ‘logical’ decision-making faculties. The program was built to emphasize four types of bias: confirmation, sunk cost, anchoring, and framing.

First, confirmation: the simulation deliberately offered suggestions and data that pushed us toward a false conclusion born of faulty early data. In my run, I was given lots of information strongly suggesting that the microchip in the GlucoGauge was to blame, when in fact consumer misuse was the much more likely culprit, and the bias made it difficult for me to let go of the microchip hypothesis.

The simulation also touched on the sunk cost fallacy, though I found it less useful at demonstrating it. This was largely user error on my part: I wasn’t even aware that I was allowed not to fully invest my expiring budget each cycle; I thought the exercise was more about what we decided to emphasize in the distribution of the funds.

Next, anchoring: this isn’t a bias I had considered before, and I think the simulation demonstrated it really well. I was tasked with estimating the percentage of users who used the GlucoGauge app incorrectly, given only the circumstances and a single doctor’s very unscientific estimate of 10%. The team data shown afterward really displayed how that one doctor’s offhand estimate dramatically affected our estimates.
The program finally demonstrated framing, though for me it had a slightly different effect than intended. The email sent to me framed a potential job loss as a negative (rather than as a chance to save jobs), so I felt much more hesitant to gamble with people’s livelihoods and went for the non-probabilistic loss of four hundred jobs.

Just by knowing about these cognitive biases and how they can affect decision making subconsciously, I feel better prepared to combat them in the future. I will try to actively counter them and check in with myself to really question how I am reaching my decisions. I found the simulation’s demonstrations of framing and anchoring especially challenging. The question I deliberated on the longest (by a large margin) was which microchip strategy to proceed with: one resulting in exactly four hundred lost jobs, the other in a two-thirds chance of losing six hundred. These are mathematically equivalent (either way you would expect to lose four hundred jobs), but it simply felt wrong to gamble employees’ futures on the unpredictable luck of a fix, a sort of ‘devil you know’ scenario for me. The anchoring question was also really interesting: I really had to engage with the prompt to estimate the percentage, and I was weighed down (anchored, you could say) by the doctor’s 10% estimate. Without it I think I would have guessed much higher, and I will look out for anchoring bias especially in the future.
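A quick back-of-the-envelope check makes the equivalence concrete. This is just a sketch in Python using the job numbers from the simulation (and assuming the gamble’s remaining one-third chance loses no jobs, which the prompt implies):

```python
# Expected job losses for the two microchip strategies.

certain_loss = 400  # option A: exactly 400 jobs lost, no uncertainty

# Option B: a 2/3 chance of losing 600 jobs, 1/3 chance of losing none.
p_loss = 2 / 3
jobs_at_risk = 600
expected_loss = p_loss * jobs_at_risk + (1 - p_loss) * 0

print(certain_loss, expected_loss)  # both work out to 400
```

The two options differ only in variance, not in expected value, which is exactly what makes the framing of the question (certain loss vs. gamble) so influential.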

The built-in ambiguity of the simulation certainly made me feel less comfortable in my role and in my decisions. In a real-world company, the product manager would hopefully have years of experience and plenty of technical know-how about the product, something I obviously lacked. However, this ambiguity almost helped me a little: I engaged very actively with the content and the product and tried to pick up as much as I could. I think I was very deliberate about my choices precisely because I was so uncomfortable making them. This type of product crisis happens all the time, and not just in biotech; one simple example that comes to mind is Samsung’s lithium-ion battery failures and the subsequent recalls. Ultimately, these companies take an approach similar to the simulation’s: they try to be very communicative but avoid accepting blame for as long as possible, at least until they have confidently identified the problem. Overall, I really liked the design of the simulation, and I am excited to participate in more!