On the morning of 16 September 1992, sterling was pinned to its lower bound inside the European Exchange Rate Mechanism. The framework had been clear for nearly two years. Sterling would hold within a band against the Deutsche Mark. The Bank of England would defend the band. Domestic monetary policy would be subordinated to it. In return, Britain would import the Bundesbank’s anti-inflationary credibility.
By that morning, every part of the framework was being tested simultaneously. The Bundesbank had raised rates to absorb the fiscal shock of German reunification. Britain was in recession, with rising unemployment and falling property prices. Domestic conditions called for lower rates. The framework called for keeping UK rates high enough to defend the parity. The market noticed the gap.
At around 11:00 the government announced an increase in the base rate from 10 to 12 per cent. Sterling continued to fall. An emergency announcement promised a further increase to 15 per cent the following morning. Sterling continued to fall. By 19:40, the Chancellor of the Exchequer announced that the United Kingdom was suspending its membership of the ERM. The 15 per cent rate would never take effect. Treasury papers declassified in 2005 put the estimated loss at around £3.3 billion, as estimated in February 1994.
The framework had been internally coherent. It bought one desirable property — anti-inflationary credibility — at the price of another, monetary autonomy. As long as those two were not in conflict, the price was unobservable. The day they came into conflict, it was paid in a single afternoon.
What kind of problem is this?
The ERM was not a bad framework that turned out to be wrong. It was a framework with a particular structure: a set of desirable properties that could not all be jointly held. Free capital movement, a fixed exchange rate, and independent monetary policy form what Robert Mundell formalised in 1963 as the impossible trinity. Any two are achievable; all three are not. Britain by 1992 had committed to the first two and was discovering that the third was not available.
When a system of desirable properties cannot all be held simultaneously, when committing to one set forces a choice about which to give up, when the choice itself is the cost of coherence — that is a specific shape of problem. And the discipline that has thought most rigorously about systems of properties that cannot all be satisfied is the foundations of quantum mechanics.
So let’s borrow.
This is the multi-model move: recognise the shape of a problem, find the discipline that has thought rigorously about that shape, and import its frameworks deliberately rather than reinventing them from scratch.
The foundations of quantum mechanics work with entangled particles and photon pairs, physical systems whose correlations can be measured in the laboratory. Markets are messier: there is no equivalent apparatus, participants adapt, and committing to a framework shapes the portfolio. But the structural logic transfers. When a set of desirable properties is mutually inconsistent, every coherent position must surrender one of them, and the choice of which to surrender is the price of taking a position.
Three Truths, At Most Two
In 1964, the Northern Irish physicist John Stewart Bell published “On the Einstein Podolsky Rosen Paradox”. It addressed whether quantum mechanics is a complete description of reality, or whether hidden variables would restore a more classical picture in which particles have definite properties at all times.
Bell took three assumptions that any sensible classical theory ought to satisfy. The first was locality: nothing influences anything else faster than light. The second was definite outcomes: measurements yield single, classical values, and those values reflect properties the system possessed prior to being measured. The third was measurement independence: experimenters can choose what to measure freely, without their choices being correlated with the hidden state of the system.
From these assumptions, Bell derived an inequality that set a ceiling on how strongly distant measurements of entangled particles could correlate. Quantum mechanics predicts that this ceiling can be exceeded. The two predictions disagree by an amount testable in the laboratory.
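The ceiling is easiest to state in the CHSH form of the inequality (due to Clauser, Horne, Shimony, and Holt, 1969), the variant most experiments actually test. With two measurement settings per side, $a, a'$ for one observer and $b, b'$ for the other, and outcomes of $\pm 1$, define the correlation sum

```latex
S \;=\; E(a,b) \;-\; E(a,b') \;+\; E(a',b) \;+\; E(a',b')
```

Bell's three assumptions together force $|S| \le 2$. Quantum mechanics, applied to a maximally entangled pair with suitably chosen settings, predicts values up to $|S| = 2\sqrt{2} \approx 2.83$, the Tsirelson bound. That gap between 2 and 2.83 is the laboratory-testable disagreement.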
In 1982, Alain Aspect’s group in Paris ran a landmark version of the experiment. Quantum mechanics won. In 2015, three independent groups closed the main experimental loopholes. Quantum mechanics won again. In 2022, the Nobel Prize in Physics was awarded to Aspect, John Clauser, and Anton Zeilinger for this work.
The implication is uncomfortable. Those three assumptions cannot all be true together. Locality, definite outcomes, or measurement independence: one must go. The empirical result establishes that. It does not tell us which.
Four Ways To Pay
The leading responses to Bell’s theorem either reinterpret the standard quantum formalism, as Many Worlds, Bohmian mechanics, and Copenhagen-type views do, or modify its assumptions, as superdeterminism does. What separates them is which of Bell’s assumptions each is prepared to surrender.
Many Worlds, originated by Hugh Everett in 1957 and developed by Bryce DeWitt and David Deutsch, keeps dynamical locality, measurement independence, and the realism of the underlying wavefunction. The price is the single-outcome component of definite outcomes: there is no unique result, but a branching structure in which every quantum-allowed outcome is realised in some branch.
The pilot-wave or Bohmian interpretation, developed by Louis de Broglie and David Bohm, keeps definite outcomes — particles have positions at all times — and measurement independence. The price is locality at the hidden-variable level: the guiding wavefunction ties distant particles together, though not in a way that permits faster-than-light signalling. Lee Smolin notes a further asymmetry: the wave influences the particle but the particle does not influence the wave. That asymmetry is part of the price.
The Copenhagen interpretation and its modern descendants keep operational locality and measurement independence. The price is the other component of definite outcomes: classical values cannot be assigned to observables outside a measurement context. The wavefunction describes potentialities; values come into being with the act of measurement. The result is a strange relationship between observer and observed that has bothered physicists for a century.
Superdeterminism, advocated by Sabine Hossenfelder and others, keeps locality and definite outcomes by giving up measurement independence. Experimenters’ choices are correlated with the hidden state of the system being measured. Critics argue that this dissolves the basis of experimental reasoning; proponents argue that the price, properly understood, is bearable.
Each of these is a coherent response to Bell’s constraint. The first three are usually presented as empirically equivalent interpretations of the standard quantum formalism; superdeterminism is more radical, modifying assumptions rather than reinterpreting them. Bell’s theorem proves that something must be surrendered. The position ruled out is the one that tries to keep all the incompatible commitments at once.
What Every Framework Surrenders
Investing under uncertainty has an analogous structural shape, even though the mathematics is different. The desirable properties that frameworks for capital allocation try to hold are abundant: high expected return, low volatility, resilience to drawdown, liquidity, regime independence, low cost, robustness to behavioural error. No portfolio holds all of them at once. The framework is a choice about which to surrender.
Naïve mean-variance optimisation gives up robustness to input error and regime change. The framework assumes returns are stationary and covariances estimable from the past. When the distribution that generated yesterday’s data is not the distribution generating tomorrow’s, the optimisation computes the wrong answer with great precision.
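The fragility is easy to demonstrate. The sketch below, with hypothetical and purely illustrative return and covariance estimates, computes unconstrained mean-variance weights and then nudges one expected return by half a percentage point, well inside any realistic estimation error:

```python
import numpy as np

def mv_weights(mu, cov):
    """Unconstrained mean-variance optimal weights, proportional to
    inverse(cov) @ mu, normalised to sum to one."""
    raw = np.linalg.solve(cov, mu)
    return raw / raw.sum()

# Hypothetical annualised estimates for three correlated assets.
mu = np.array([0.06, 0.055, 0.05])
cov = np.array([
    [0.040, 0.030, 0.025],
    [0.030, 0.035, 0.024],
    [0.025, 0.024, 0.030],
])

w_base = mv_weights(mu, cov)

# Bump one expected return by 0.5 percentage points: a tiny change
# to the inputs, comfortably within estimation noise.
mu_bumped = mu + np.array([0.005, 0.0, 0.0])
w_bumped = mv_weights(mu_bumped, cov)

print(np.round(w_base, 3))
print(np.round(w_bumped, 3))
```

With these numbers the weight on the first asset jumps from roughly a third of the portfolio to well over half, and the second asset's allocation is roughly halved. The optimiser is doing exactly what it was told; the precision is real, the inputs are not.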
Risk parity gives up resilience to a world in which its diversifiers stop diversifying. The framework equalises risk contributions across asset classes by levering the lower-volatility ones, premised on diversifying behaviour that held through the 2010s. In 2022, when bonds and equities fell in concert, the leverage that had been the framework’s elegance became its liability.
Trend-following gives up the turning point. The framework profits when prices continue in their current direction and loses when they reverse. The price is paid at every reversal, when the strategy is committed to a direction the market has just abandoned.
Concentrated value gives up the resilience that comes from diversification. The framework trusts that conviction in a small number of positions outweighs the protection of being wrong about any one. When the theses fail together, the concentration carries the correlation it appeared to avoid.
Volatility selling gives up convexity. The framework collects steady option premium against the assumption that realised volatility will fall short of implied. When the assumption holds, the income is reliable; when it fails, the losses can arrive in concentrated bursts that outweigh years of accumulated premium.
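The asymmetry is arithmetic. A stylised equity curve, with purely illustrative numbers not calibrated to any real strategy, makes the point: steady small gains while realised volatility stays below implied, then a single large loss when it does not:

```python
# Stylised short-volatility equity curve: +1% a month while the
# assumption holds, then one -40% month when realised vol jumps past
# implied. All numbers are illustrative.
monthly_gain = 0.01
crash = -0.40

wealth = 1.0
for month in range(60):
    wealth *= (1 + crash) if month == 59 else (1 + monthly_gain)

print(round(wealth, 3))  # barely above where it started
```

Fifty-nine months of compounded premium, one crash month, and the five-year result is close to flat. The income was real; so was the convexity that had been sold against it.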
Carry strategies give up resilience in risk-off regimes. The framework borrows cheaply in one currency or maturity and lends at higher yields in another, harvesting the spread. The unwind, when it arrives, can compress months of accumulated carry into days of forced selling.
Every framework names what it surrenders, even if its adherents sometimes find it convenient not to look. The most expensive framework in any room is the one whose proponents claim it costs nothing. Frameworks whose costs are honestly identified can be hedged, complemented, or held with humility. Frameworks whose costs are denied compound until the world demands payment, as the ERM compounded from October 1990 until September 1992, and as LTCM’s framework compounded for four years until a New York Fed boardroom filled, in September 1998, with representatives of the institutions called upon to settle the cost.
Knowing What You Owe
When physicists weigh these interpretations against one another, experiment cannot adjudicate among them. The debate turns on what each interpretation keeps, what it surrenders, and whether the price is acceptable. They are not free to escape Bell’s price. They are free only to choose how to pay it.
The same discipline is available to those who allocate capital. No framework escapes the price. The unknowns are too dense, the data too sparse, the regimes too non-stationary for any single framework to be both complete and free. The framework is its trade-off, not its content.
This lesson extends beyond finance. In engineering, every design surrenders something: weight reduction can cost fatigue resistance, redundancy costs efficiency. In clinical medicine, every treatment surrenders something: aggressive intervention costs tolerability, sensitive screening costs specificity. The professional who claims a design or regimen with no downside has not escaped the trade-off; they have hidden it.
What follows is not relativism. Some frameworks are better than others for a given problem. Bell’s theorem itself rules out positions that try to keep everything. But within the space of internally coherent positions, the choice is which truth you find least painful to give up. Pretending no truth is being given up is the only position provably wrong.
The discipline is in asking: what is the price my framework is paying, and am I paying it knowingly?
References & Further Reading
John Stewart Bell, Speakable and Unspeakable in Quantum Mechanics
Lee Smolin, Einstein’s Unfinished Revolution
George Greenstein and Arthur Zajonc, The Quantum Challenge: Modern Research on the Foundations of Quantum Mechanics
Robert Mundell, International Economics
Charles P. Kindleberger and Robert Z. Aliber, Manias, Panics, and Crashes: A History of Financial Crises