The Tuxedo Fallacy

Picture a researcher, dressed up in a tuxedo, stumbling through the undergrowth of your local jungle. As unfitting as this seems, if you fallaciously jumble up your risks and uncertainties, you might look just as awkward.

26 March 2017 · Stefan Gugler

Illustration by Josy Fischer https://josyfischer.artstation.com/

The goal of risk management is to assess and prioritize risks. As the colloquial use of the term "risk" suggests, it pertains to a negative event that could occur with some probability. For example, there is no risk in winning the lottery (we call this an opportunity), only in losing the lottery ticket. As early as 1921, Frank Knight distinguished between two kinds of probabilistic assessments: risk and uncertainty [1]. These two types are essential for understanding the Tuxedo Fallacy as introduced by Sven Ove Hansson [2]. Under risk, we have a specific number of clear-cut outcomes governed by a fixed probability distribution; the probabilities are known, or at least knowable. A bet on a fair coin toss is the usual, boring example. Uncertainty, on the other hand, is considerably different, as it involves an unknown probability distribution. "Are there aliens?", as modelled by Drake's Equation [3], would be a question of this sort. Furthermore, it is even possible that there are outcomes we didn't consider in the first place.
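The contrast can be made concrete in a few lines of code. This is a minimal sketch: the coin-toss side is fully determined, while the parameter ranges fed into Drake's Equation below are illustrative guesses, not established values.

```python
import math

# Risk: a fair coin toss has a fully known probability distribution.
p_heads = 0.5
stake = 10.0  # win 10 on heads, lose 10 on tails
expected_value = p_heads * stake + (1 - p_heads) * (-stake)
print(expected_value)  # 0.0 -- knowable before the bet is placed

# Uncertainty: Drake's Equation, N = R* * fp * ne * fl * fi * fc * L.
# The factors themselves are unknown; the two sets of values below are
# invented for illustration only.
def drake(factors):
    return math.prod(factors)

optimistic = drake([3.0, 1.0, 0.2, 1.0, 1.0, 0.2, 1e9])
pessimistic = drake([1.0, 0.2, 0.01, 0.1, 0.001, 0.01, 100.0])

# Modest changes in assumptions move the answer by many orders of
# magnitude -- there is no single distribution to bet against.
print(optimistic, pessimistic)
```

The point of the sketch is not the numbers themselves but that, unlike the coin toss, there is no principled way to pick one set of factors over the other.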

In the Tuxedo Fallacy, this dichotomy between risk and uncertainty gets blurred. There are notable tendencies in decision-supporting sciences to take only the risks into consideration without giving the uncertainties much attention. Hansson accuses probabilistic risk analysis of this behaviour. It is epistemically analogous to treating a jungle expedition (with uncertain consequences such as homicidal organisms, known or unknown) like a night at the casino (wearing a tuxedo, hence the name, with clear-cut risks at hand) [2]. Numerical textbook examples with perfect dice and spherical cows [4] are not comparable with the much more uncertain, jungle-y real-life situations. Moreover, the perceived probabilities of the stakeholders might diverge significantly.

An example of the uncovered Tuxedo Fallacy is the Threshold of Toxicological Concern (TTC). The idea behind the TTC is to assess the risk of untested chemicals. Their structures are compared on a molecular level and subsequently classified: if an untested molecule is structurally similar to a known one, it is assumed that they exhibit similar effects. Potentially dangerous effects are thereby inferred, even though the effect and the structures are not necessarily causally related [5]. The TTC looks at toxicity in a way that presupposes a structure-function relationship with classical inhibition mechanisms of proteins, i.e. it evaluates what it can evaluate. Possible novel or less studied mechanisms, such as non-monotonic dose-response relationships [6], are disregarded. Even though we call it risk assessment, what we actually try to assess is Knightian uncertainty, and we fail gloriously. We don't avoid the Tuxedo Fallacy, since we simply assume clear-cut probabilities where there are none.
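The reasoning pattern behind the TTC can be caricatured in code. This is a purely illustrative toy, not the actual TTC procedure: the fingerprints, feature names, hazard labels, and threshold are all invented for demonstration.

```python
def tanimoto(fp_a: set, fp_b: set) -> float:
    """Tanimoto (Jaccard) similarity of two structural fingerprints."""
    return len(fp_a & fp_b) / len(fp_a | fp_b)

# Hypothetical tested compounds: (structural features, hazard class).
tested = {
    "compound_A": ({"ring", "nitro", "amine"}, "high concern"),
    "compound_B": ({"ring", "hydroxyl"}, "low concern"),
}

def read_across(fp_new: set, threshold: float = 0.5) -> str:
    # We can only "see" what the fingerprint encodes: any mechanism not
    # captured by these features (e.g. a non-monotonic dose response)
    # is invisible to this assessment -- the Tuxedo Fallacy in code.
    fp_known, hazard = max(tested.values(),
                           key=lambda t: tanimoto(fp_new, t[0]))
    if tanimoto(fp_new, fp_known) >= threshold:
        return hazard
    return "no structurally similar compound: Knightian uncertainty"

print(read_across({"ring", "nitro"}))  # classified via compound_A
```

Note that the function returns a confident hazard class whenever the similarity clears the threshold; nothing in the procedure can signal that the relevant mechanism might lie outside the feature set entirely.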

A very similar observational bias is the streetlight effect, first illustrated by Abraham Kaplan in 1964 as "the principle of the drunkard's search" [7]. Metaphorically, the drunkard looks for his lost key under the streetlight, not because he might have dropped the key there, but because that is where the light is. This might be a plausible explanation for the Tuxedo Fallacy. Putting a number on a situation is easy. Assessing a situation qualitatively and using the precautionary approach is tedious, more difficult and less satisfying. Mathematics is a handy tool for numbers; that is why humans tend to numberfy (i.e. quantify) any uncertainty, thereby only seemingly bereaving it of its inherently uncertain nature.

However, what is unquantifiable is not necessarily irrelevant to the decision-making process. But hey, don't get me wrong. I love numbers, and I think the (nota bene "unreasonable" [8]) effectiveness of mathematics leads us to believe that almost anything physical is quantifiable. We just need to be wary of jumping to conclusions or to overly narrow confidence intervals. Only careful and bias-aware analysis by the researcher allows for sound approximations and the avoidance of, inter alia, the Tuxedo Fallacy.


References

[1] F. Knight, Risk, Uncertainty and Profit, Houghton Mifflin Company, 1921.

[2] S. Hansson, “From the casino to the jungle”, Synthese 2009, 168, 423–432.

[3] F. Drake, The Drake Equation Revisited, 2003.

[4] D. Kaiser, The Sacred, Spherical Cows of Physics, 2014.

[5] K. Bschir, “Risk, Uncertainty and Precaution in Science. The Case of the Threshold of Toxicological Concern Approach in Food Toxicology”, 2016.

[6] L. N. Vandenberg, T. Colborn, T. B. Hayes, J. J. Heindel, D. R. Jacobs Jr., D.-H. Lee, T. Shioda, A. M. Soto, F. S. vom Saal, W. V. Welshons, R. T. Zoeller, J. P. Myers, “Hormones and Endocrine-Disrupting Chemicals: Low-Dose Effects and Nonmonotonic Dose Responses”, Endocrine Reviews 2012, 33, 378–455. PMID: 22419778.

[7] A. Kaplan, The Conduct of Inquiry: Methodology for Behavioral Science, 1964, p. 11.

[8] E. P. Wigner, “The unreasonable effectiveness of mathematics in the natural sciences.”, Communications on Pure and Applied Mathematics 1960, 13, 1–14.

Authors

Stefan Gugler (24) studies Interdisciplinary Sciences at ETH Zürich with a Master's specialization in Computational and Physical Chemistry. At reatch, he is responsible for the idea incubator FakTisch.

Disclaimer

This blog post reflects the personal opinion of the authors and does not necessarily correspond to that of reatch or its members.