Ergodicity — an introduction
30 Nov, 2025 at 17:17 | Posted in Economics
Ergodicity often hides behind a veil of mathematical complexity, yet at its core, it offers a profoundly simple and insightful lens through which to view and understand probability and time. At its heart, it challenges us to distinguish between two distinct types of averages: the ensemble average and the time average.
To grasp this distinction, let us look at a simple, everyday example: to determine which is the most popular shop in a city. How would we proceed? We are faced with two distinct methods.
The first method is the ensemble average. We freeze time for a single moment, and we calculate a statistical average across a large population. We count the number of people in the bakery, the coffee shop, and the hardware store, and divide by the total population. We find that the ensemble average for being in the bakery is, say, 10%. This conclusion is drawn from a cross-section of the population at a single point in time. It is a photograph of the collective.
The second method is the time average. Here, we shift our focus from the breadth of the crowd to the depth of a single life. We choose one person, yours truly, and we follow him for a year. We record the total time he spends in each shop and divide it by the total time of the study. We discover that his time average for being in the bakery is only 0.5%, while his time average for his local grocery store is much higher. This conclusion is a longitudinal study of a single trajectory.
When we compare these two averages, we arrive at the critical juncture. The ensemble average tells us the bakery is a crowd-puller. The time average tells us the grocery store is the cornerstone of yours truly’s routine. The two calculations yield different results, which is typical of a non-ergodic system.
A system is ergodic if, and only if, the time average equals the ensemble average for a given observable. In such a world, the cross-sectional snapshot would perfectly mirror the long-term experience of any one individual within it. Yours truly’s annual time averages would align precisely with the city’s instantaneous ensemble averages. But the real world is not so uniform or predictable. It is full of variation, shaped by diverse preferences and random events. The two measures diverge.
The implications of this insight are profound. It forces us to scrutinise the statistics we use to understand our lives. When we hear about an ‘average’ outcome, we must ask: is this an ensemble average or a time average? In finance, for example, a risky investment might have a positive ensemble average return (the expected value across all possible investors looks good), but could lead to total ruin for a single investor who follows that path over time (yours truly has elaborated on this here), resulting in a negative time average of their wealth growth. The ensemble looks promising. The individual’s trajectory is catastrophic.
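This divergence between a promising ensemble and a catastrophic individual trajectory is easy to make concrete. The sketch below uses a hypothetical multiplicative gamble (the numbers are illustrative, not from this post): each round, wealth grows by 50% on heads or shrinks by 40% on tails. The ensemble average of the per-round factor is positive, yet the time-average growth of any single trajectory is negative.

```python
import random

# Hypothetical multiplicative gamble (illustrative numbers, not from the post):
# each round wealth is multiplied by 1.5 on heads or 0.6 on tails (p = 1/2).
UP, DOWN = 1.5, 0.6

# Ensemble average of the per-round factor: what 'the average investor' gets.
ensemble_factor = 0.5 * UP + 0.5 * DOWN   # 1.05, i.e. +5% expected return

# Time-average growth factor of a single trajectory: the geometric mean.
time_factor = (UP * DOWN) ** 0.5          # ~0.949, i.e. wealth shrinks over time

print(f"ensemble average factor: {ensemble_factor:.3f}")
print(f"time average factor:     {time_factor:.3f}")

# Follow one investor for many rounds: ruinous despite the positive
# expected value of each round.
random.seed(1)
wealth = 1.0
for _ in range(10_000):
    wealth *= UP if random.random() < 0.5 else DOWN
print(f"one investor after 10,000 rounds: {wealth:.3e}")
```

The arithmetic (ensemble) mean exceeds one while the geometric (time) mean falls below one, so the 'average investor' gains while almost every individual investor is eventually wiped out.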
Ultimately, understanding ergodicity is about understanding the critical difference between these two averages. It emphasises the ontological reality that an individual life constitutes a singular, continuous time average unfolding within a shifting landscape of ensemble averages. Recognising this distinction enables us to formulate more precise questions, critically interrogate aggregate data, and approach the probabilistic complexity of life with greater analytical acuity.
Over the years, some of us have tried to teach our economics students something of the importance of questioning the common ergodicity assumption. This assumption, often left unstated, underpins most mainstream economic models concerning preferences and expected utility.
One of the problems has been the lack of an accessible textbook on ergodicity economics. However, this has now been remedied!
Ole Peters and Alexander Adamou’s newly published An Introduction to Ergodicity Economics provides an excellent introduction to a rapidly expanding field of research within economics.
Paul Samuelson once famously claimed that the ‘ergodic hypothesis’ is essential for advancing economics from the realm of history to the realm of science. But is it really tenable to assume — as Samuelson and most other mainstream economists do — that ergodicity is essential to economics?
Sometimes ergodicity is mistaken for stationarity. But although ergodic processes (in this setting) are stationary, the two concepts are not equivalent: stationarity does not guarantee ergodicity. The long-run time average of a single realisation of a stationary process may fail to converge to the expectation of the corresponding variables — and so the long-run time average need not equal the probabilistic (expectational) average.
Say we have two coins, where coin A has a probability of 1/2 of coming up heads and coin B has a probability of 1/4 of coming up heads. We pick one of these coins with a probability of 1/2 and then toss the chosen coin over and over again. Now let H1, H2, … be one or zero as the coin comes up heads or tails. This process is obviously stationary, but the time average — [H1 + … + Hn]/n — converges to 1/2 if coin A is chosen and to 1/4 if coin B is chosen. Each of these time averages occurs with probability 1/2, so the expected (ensemble) average is 1/2 x 1/2 + 1/2 x 1/4 = 3/8, which obviously equals neither 1/2 nor 1/4. The time average depends on which coin you happen to choose, while the probabilistic (expectational) average is calculated over the whole “system” consisting of both coin A and coin B.
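The two-coin example is easy to check numerically. In the sketch below, each realisation picks one coin at the start and then sticks with it: every trajectory's time average settles near its own coin's bias, never near the ensemble figure of 3/8.

```python
import random

def coin_process(n, seed):
    """One realisation: pick coin A (heads prob. 1/2) or coin B (heads
    prob. 1/4) with equal probability, then toss that same coin n times."""
    rng = random.Random(seed)
    p = 0.5 if rng.random() < 0.5 else 0.25
    heads = sum(rng.random() < p for _ in range(n))
    return p, heads / n  # chosen coin's bias, time average of H1..Hn

# Each trajectory's time average converges to its own coin's bias ...
results = [coin_process(100_000, seed) for seed in range(4)]
for p, avg in results:
    print(f"coin bias {p}: time average {avg:.3f}")

# ... while the ensemble (expectational) average is fixed at
# 1/2 * 1/2 + 1/2 * 1/4 = 3/8, which no single trajectory converges to.
print("ensemble average:", 0.5 * 0.5 + 0.5 * 0.25)
```

The process is stationary yet non-ergodic: which long-run average you observe depends entirely on the coin you happened to draw at the start.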
Instead of arbitrarily assuming that people have a certain type of utility function — as in mainstream theory — time average considerations show that we can obtain a less arbitrary and more accurate picture of real people’s decisions and actions by basically assuming that time is irreversible. When our assets are gone, they are gone. The fact that they could conceivably have been replenished in some parallel universe is of little comfort to those who live in the one and only possible world that we call the real world.
Time average considerations show that because we cannot go back in time, we should not take excessive risks. High leverage increases the risk of bankruptcy. This should also be a warning for the financial world, where the constant quest for greater and greater leverage — and risks — creates extensive and recurrent systemic crises.
Suppose I want to play a game. Let’s say we are tossing a coin. If heads come up, I win a dollar, and if tails come up, I lose a dollar. Suppose further that I believe I know that the coin is asymmetrical and that the probability of getting heads (p) is greater than 50% – say 60% (0.6) – while the bookmaker assumes that the coin is totally symmetric. How much of my bankroll (T) should I optimally invest in this game?
A strict mainstream utility-maximising economist would suggest that my goal should be to maximise the expected value of my bankroll (wealth), and according to this view, I ought to bet my entire bankroll.
Does that sound rational? Most people would answer ‘no’. The risk of losing is so high that after just a few games — the expected number of tosses until the first loss is 1/(1−p), which in this case is 2.5 — one would, in all likelihood, have lost everything and gone bankrupt. The expected-value maximising economist does not seem to have a particularly compelling approach.
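The time-average answer to the betting question can be sketched numerically. Assuming the standard growth-optimal (Kelly) framing — which is one natural way to formalise the question, not necessarily the author's own calculation — the per-toss log growth rate of wealth when betting a fraction f of the bankroll is g(f) = p·ln(1+f) + (1−p)·ln(1−f), and for p = 0.6 it peaks at f = 2p − 1 = 0.2, far from the 'bet everything' prescription.

```python
import math

p = 0.6  # believed probability of heads; even-money payoff

def growth_rate(f):
    """Time-average (log) growth rate per toss when betting fraction f."""
    return p * math.log(1 + f) + (1 - p) * math.log(1 - f)

# The growth-optimal (Kelly) fraction maximises g(f): here f* = 2p - 1 = 0.2.
fractions = [i / 100 for i in range(100)]
best = max(fractions, key=growth_rate)
print(f"growth-optimal fraction: {best:.2f}")

# Betting (nearly) the whole bankroll drives the growth rate deeply
# negative: one lost toss and the wealth is essentially gone for good.
print(f"g(0.20) = {growth_rate(0.20):+.4f}")
print(f"g(0.99) = {growth_rate(0.99):+.4f}")
```

Betting 20% of the bankroll each round yields a positive time-average growth rate, while betting almost everything yields a strongly negative one, which is exactly why the expected-value maximiser's advice to stake the whole bankroll looks so unappealing from the perspective of a single life lived through time.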
While I share many of the critiques that Peters and Adamou level against mainstream expected utility theory, I diverge from their conclusion regarding human decision-making. In their book, Peters and Adamou write:
We will do decision theory by using mathematical models … The wealth process and the decision criterion may or may not remind you of the real world. We will not worry too much about the accuracy of these reminiscences. Instead we will ‘shut up and calculate’ — we will let the mathematical model create its world … Importantly, we need not resort to psychology to generate a host of behaviours, such as preference reversal as time passes or impatience of poorer individuals … Unlike utility functions, treated in mainstream decision theory as encoding psychological risk preferences, wealth dynamics are something about which we can reason mechanistically …
Contrary to their position, I contend that psychological factors are not merely incidental but are fundamental. Any framework that seeks to describe or predict human action must place them at its core.
When evaluating decisions, the way we measure ‘growth’ changes the story dramatically. Consider two very different processes. In the first (Gamble 1), an investor begins with $10,000 and passes through three rounds of wealth reduction, ending with just half a cent. In the second (Gamble 2), the investor faces a single gamble: a 99.9% chance of walking away with $10,000,000 and a 0.1% chance of ending with nothing.
In Gamble 1, the deterministic shrinking process is straightforward: each round reduces the investor’s wealth by a constant proportion. Mathematically, the wealth after three rounds is $0.005, which means the investor loses about 99% of wealth per round. The average per-round growth rate is about −99%. The result is a guaranteed catastrophe.
In Gamble 2, the investor risks everything on a single binary outcome — a 99.9% chance of $10,000,000, and a 0.1% chance of nothing. The expected value is huge — on average, the gamble turns $10,000 into $9,990,000. However, if the gamble were repeated many times with the entire bankroll at stake, ruin would be inevitable. Since there is always some probability of hitting zero, the long-run geometric growth rate is negative infinity. Once the investor reaches zero wealth, no recovery is possible.
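The arithmetic behind the two gambles can be verified in a few lines — a sketch of the calculations described above, not code from the book:

```python
import math

# Gamble 1: $10,000 shrinks deterministically to $0.005 over three rounds.
start, end, rounds = 10_000.0, 0.005, 3
per_round = (end / start) ** (1 / rounds)
print(f"Gamble 1 per-round factor: {per_round:.4f}")  # ~0.0079, i.e. ~-99.2%/round

# Gamble 2: 99.9% chance of $10,000,000, 0.1% chance of $0.
expected = 0.999 * 10_000_000 + 0.001 * 0
print(f"Gamble 2 expected value:   ${expected:,.0f}")

# Repeated with the whole bankroll at stake, Gamble 2's time-average
# (geometric) log growth rate is negative infinity: the log of a zero outcome.
log_growth = 0.999 * math.log(10_000_000 / 10_000) + 0.001 * (-math.inf)
print("Gamble 2 time-average log growth:", log_growth)
```

The arithmetic (ensemble) lens makes Gamble 2 look spectacular, while the geometric (time) lens assigns it a growth rate of minus infinity; Gamble 1 is catastrophic under either lens, just finitely so.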
Which gamble is regarded as superior depends on the objective. If the goal is maximising expected wealth from a one-off decision, Gamble 2 dominates, offering huge expected gains. But if the goal is preserving wealth over repeated plays, Gamble 2 is disastrous. Gamble 1 is equally unappealing — it guarantees destruction without the possibility of recovery.
The metric we use — arithmetic or geometric averages and growth rates — can give entirely different conclusions. From a geometric, time-average growth perspective, one would favour Gamble 1, since its per-round growth rate, though catastrophic, is at least finite, whereas Gamble 2’s is negative infinity. Yet I suspect very few investors would actually share that preference.
When it comes to human decision-making, psychological factors are paramount. This is especially true when confronting uncertainty. The models and examples presented often operate, either explicitly or implicitly, within the realm of quantifiable risk. On this point, it is wise to recall the crucial distinction made by Keynes a century ago: measurable risk is fundamentally different from unmeasurable uncertainty. In the latter domain, where probabilistic calculations break down, psychology inevitably plays a decisive role.
Consequently, while ‘optimal growth rates’ may serve as a useful decision criterion in specific, well-defined contexts, they cannot be considered the sole or universally best guide for human action. A comprehensive theory of decision-making must account for the full spectrum of human cognition and attitude, particularly when navigating the unquantifiable unknowns of the real world.

