Why yours truly is not a Bayesian
21 Aug, 2025 at 12:58 | Posted in Statistics & Econometrics
Imagine you are a Bayesian turkey. You hold a nonzero prior belief in the hypothesis (H): People are nice vegetarians who would never eat a turkey. Every day you survive unharmed seems to be further confirmation of this hypothesis.
Each day you survive and are not eaten constitutes new evidence (e). You dutifully update your belief according to Bayes’ Rule:
P(H|e) = [P(e|H) * P(H)] / P(e)
Since your survival is guaranteed if your hypothesis is true, the likelihood P(e|H) = 1. As you survive day after day, this constant stream of confirming evidence causes your posterior belief P(H|e) to grow larger than your prior P(H). From a purely Bayesian perspective, this is perfectly rational: your increasing confidence is a mathematically correct interpretation of the evidence under your model.
However, this rationalist view contains a fatal flaw. The problem lies in the denominator, P(e) — the probability of surviving each day. This probability is high not only because of your benevolent hypothesis H but also because of a cruel alternative hypothesis (H’): The people are fattening you up for Christmas.
Under H’, the likelihood of survival P(e|H’) is also very high — but only until a specific, fatal date. The ever-increasing posterior probability you assign to H blinds you to the fact that every day that passes also brings you closer to the moment when H’ reveals its terrifying truth. You are meticulously updating your beliefs within a model that fails to account for the true, non-stationary nature of your environment.
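The turkey's predicament can be sketched numerically. All numbers below are illustrative assumptions, not anything derived from the parable itself: within the turkey's own model, the only rival to H is a vague "not nice" hypothesis under which daily survival is unlikely, so P(e) < 1 and the posterior for H climbs day by day. But against the true rival H' (fattening you up for Christmas), survival before the fatal date is equally likely under both hypotheses, and the evidence discriminates nothing.

```python
def update(priors, likelihoods):
    """One step of Bayes' Rule: posterior is proportional to likelihood * prior."""
    numerators = [lik * p for lik, p in zip(likelihoods, priors)]
    evidence = sum(numerators)              # the denominator P(e)
    return [n / evidence for n in numerators]

# The turkey's model: H vs a vague alternative with P(e|alt) = 0.5 (assumed).
p = [0.5, 0.5]
for day in range(30):
    p = update(p, [1.0, 0.5])
print(round(p[0], 6))   # 1.0 -- confidence in H soars

# The model it should have entertained: H vs H', both predicting
# survival perfectly until Christmas, so P(e|H) = P(e|H') = 1.
q = [0.5, 0.5]
for day in range(30):
    q = update(q, [1.0, 1.0])
print(q)                # [0.5, 0.5] -- thirty days of data settle nothing
```

The sketch makes the parable's point concrete: the updating machinery is flawless, but what the evidence can tell you depends entirely on which hypotheses you put into the model in the first place.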
As Bertrand Russell pointed out with this parable, the turkey’s inductive reasoning is flawless, but its premises are fatally incomplete. It demonstrates the profound difference between mathematical rationality — processing information correctly given a set of assumptions — and epistemic rationality — ensuring your set of hypotheses actually reflects the possible structures of reality. Sometimes, the most rational belief is to maintain a small but crucial doubt in your most cherished model, for the world outside your cage may operate on a calendar you have not considered.
Mainstream economics nowadays usually assumes that agents who have to make choices under conditions of uncertainty behave according to Bayesian rules — that is, they maximise expected utility with respect to some subjective probability measure that is continually updated according to Bayes’ theorem. Agents who do not are deemed irrational.

The nodal point here is — of course — that although Bayes’ Rule is mathematically unquestionable, that doesn’t qualify it as indisputably applicable to scientific questions. As one of my favourite statisticians — Andrew Gelman — puts it:
The fundamental objections to Bayesian methods are twofold: on one hand, Bayesian methods are presented as an automatic inference engine, and this raises suspicion in anyone with applied experience, who realizes that different methods work well in different settings … The second objection to Bayes comes from the opposite direction and addresses the subjective strand of Bayesian inference: the idea that prior and posterior distributions represent subjective states of knowledge …
Bayesian inference is a coherent mathematical theory but I don’t trust it in scientific applications. Subjective prior distributions don’t transfer well from person to person, and there’s no good objective principle for choosing a noninformative prior (even if that concept were mathematically defined, which it’s not). Where do prior distributions come from, anyway? I don’t trust them and I see no reason to recommend that other people do, just so that I can have the warm feeling of philosophical coherence.

