Reaching for the wrong conclusion

During World War II, the Allies lost tens of thousands of planes on bombing raids. The planes that made it back arrived with bullet holes in the wings, fuselage and tail, but not in the cockpit or engine block. The engineers concluded that they should add extra armour plating to the areas that were hit, to increase the likelihood of a plane returning.

However, they were reaching a conclusion without key information. They were only examining the planes that returned: the planes that had been shot down were not included in their analysis. The bullet holes in fact identified where the planes' protection was already good enough, since these aircraft were able to fly back when hit in those areas.

So adding extra armour would have cost time and money and reduced performance, without solving the problem of planes being shot down.

Fortunately a mathematician, Abraham Wald, pointed out the fallacy in the engineers' conclusion. He created a model which showed how much damage each part of the plane could take before it failed, and how likely each part was to be hit on a bombing run. Based on Wald's model, the most vulnerable areas were identified and given extra protection. Thousands of lives were saved.
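Wald's reasoning can be sketched as a simple scoring exercise: for each part of the plane, combine the impact of a hit there with the likelihood of being hit there, and protect the highest scores first. A minimal Python illustration, where every area name and number is entirely made up for the example:

```python
# Hypothetical Wald-style risk scoring. Impact = how badly a hit in
# this area damages the plane (1-5); likelihood = chance the area is
# hit on a raid (0-1). All figures are invented for illustration.
areas = {
    "engine":   (5, 0.30),
    "cockpit":  (5, 0.20),
    "fuselage": (2, 0.35),
    "wings":    (2, 0.40),
    "tail":     (3, 0.20),
}

def risk_score(impact, likelihood):
    """Higher score means higher priority for extra armour."""
    return impact * likelihood

# Rank areas by descending risk: armour the top of the list first.
ranked = sorted(areas, key=lambda a: risk_score(*areas[a]), reverse=True)
print(ranked)
```

With these made-up figures, the engine and cockpit come out on top of the list, even though returning planes showed few bullet holes there, which is exactly the point of correcting for the missing (shot-down) data.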

Does Wald's approach sound familiar? Impact and likelihood, the two axes used by Wald, are the same two axes we use at Acutest in our approach to risk assessment in planning, QA and testing. We have seen many programmes where conclusions are drawn from an incomplete set of data. And we have seen plenty of programmes where a layer of metaphorical extra armour has been added in the wrong places, with the inevitable increase in time and cost and poorer performance. If this sounds like the programme you are working on and you'd like to know more about how our approach could make a difference, please contact us and we'd be happy to help.

Contact Acutest