One classic (liability-driven) portfolio strategy, known for obvious reasons as the “barbell,” combines a lot of very defensive, low-beta assets on one side with a lot of aggressive, high-beta assets on the other. Practitioners follow the advice of Mr. Bing Crosby: they don’t “mess with Mr. In-between.”
For the most part, this is a practitioners’ strategy, not a theorists’ strategy. There’s been little reason in theory to think that it should work. After all, if you want diversification, why not include some assets in between those extremes? And if you don’t care for diversification, why not go whole hog with a “bullet” strategy?
Despite its lack of conceptual foundations, practitioners continue to use it.
Theory in Pursuit of Practice
Donald Geman, a Fellow of the Institute of Mathematical Statistics and a professor of Applied Mathematics at Johns Hopkins with expertise in machine learning, has joined two other scholars in writing a paper, now a preprint at arXiv, that seeks to put a foundation under this practice.
The other authors are Hélyette Geman of the University of London and… Nassim Nicholas Taleb, of black swans and anti-fragility renown.
The gist of the paper, expressed non-mathematically, is that managers work with the facts they know. What they know is that they have to constrain the tails of their portfolio-return bell curve to satisfy various regulatory or institutional demands: Value at Risk (VaR), Conditional Value at Risk (CVaR), and stress testing.
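For concreteness, here is a minimal sketch, not taken from the paper, of what two of those tail constraints measure. The function name, the 95% confidence level, and the toy return parameters are all illustrative assumptions of mine.

```python
import numpy as np

def var_cvar(returns, alpha=0.95):
    """Historical VaR and CVaR at confidence level alpha.

    VaR is the loss exceeded with probability 1 - alpha; CVaR is the
    average loss conditional on ending up beyond that threshold.
    """
    losses = -np.asarray(returns)        # flip sign: work with losses
    var = np.quantile(losses, alpha)     # e.g. the 95th-percentile loss
    cvar = losses[losses >= var].mean()  # mean of the tail beyond VaR
    return var, cvar

# Toy portfolio: 10,000 simulated daily returns.
rng = np.random.default_rng(0)
daily_returns = rng.normal(0.0005, 0.01, 10_000)
var95, cvar95 = var_cvar(daily_returns)
print(f"95% VaR:  {var95:.4f}")   # the loss exceeded on ~5% of days
print(f"95% CVaR: {cvar95:.4f}")  # the average loss on those worst days
```

Constraints like these bind only the left tail of the distribution, which helps explain why, on the authors’ account, protecting the downside and chasing the upside get treated as two separate jobs.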
The “operators,” as the authors call the decision makers in portfolio management, aren’t “concerned with portfolio variations” except insofar as they have “a vague notion of association and hedges.” On one end of the scale, they set out to limit the maximum drawdown with conservative investments, in response to the sort of pressures and mandates just listed; on the other end, they seek the upside benefits of the very market uncertainties against which they’ve just protected themselves.
In the course of making these points, the authors get in the by-now customary jabs at Modern Portfolio Theory. One footnote, for example, explains that MPT’s aim of lowering variance, and thus its habit of treating the left-hand tail and the right-hand tail as equally undesirable, is rational only if there is certainty about the future mean return, or if “the investor can only invest in variables having a symmetric probability distribution.” The authors consider neither premise plausible; the latter they find especially “farfetched.”
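The footnote’s point about symmetry can be made concrete with a toy example of my own (nothing here comes from the paper): take a return stream with rare large gains and mirror it into one with rare large losses, and a variance criterion cannot tell them apart.

```python
import numpy as np
from scipy.stats import skew

rng = np.random.default_rng(1)

# A right-skewed return stream (rare big gains) and its mirror image
# (rare big losses): same mean, same variance, opposite tails.
base = rng.exponential(scale=0.01, size=100_000) - 0.01
upside, downside = base, -base

print(np.var(upside), np.var(downside))  # identical: negation preserves variance
print(skew(upside), skew(downside))      # roughly +2 and -2: opposite tails
# A variance-minimizing criterion ranks these two streams identically,
# though an investor plainly prefers the rare-big-wins version.
```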
From MDH to Entropy
To get a bit more technical, their discussion elaborates on an existing literature on the “mixture” of two or more normals, the “mixture of distributions hypothesis” (MDH). It has been part of the finance literature for at least twenty years, since Matthew Richardson and Thomas Smith wrote a paper on the “daily flow of information” for the Journal of Financial and Quantitative Analysis in 1994.
The underlying idea of the MDH is that information moves into markets at uneven rates, and that this unevenness renders asymmetric distribution curves inevitable.
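A quick simulation, my own illustration rather than anything from the MDH literature, shows how mixing two regimes of information flow bends a bell curve out of shape; the regime probability and all the parameters below are arbitrary assumptions.

```python
import numpy as np
from scipy.stats import kurtosis, skew

rng = np.random.default_rng(2)
n = 100_000

# Uneven information flow as a two-state mixture: most days are quiet,
# but roughly one day in ten a burst of news arrives.
is_news_day = rng.random(n) < 0.10
quiet = rng.normal(0.0005, 0.005, n)   # calm regime: low volatility
news = rng.normal(-0.002, 0.025, n)    # news regime: shifted, turbulent
returns = np.where(is_news_day, news, quiet)

print(f"excess kurtosis: {kurtosis(returns):.2f}")  # well above 0: fat tails
print(f"skewness:        {skew(returns):.2f}")      # negative: asymmetric
```

Each regime on its own is a plain normal; it is the mixing that produces the fat tails and the asymmetry.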
In 2002, Damiano Brigo and Fabio Mercurio used MDH to calibrate the skew in equity options.
What Geman et al. add is a model that makes “estimates and predictions under the most unpredictable circumstances consistent with the constraints.”
They also, somewhat confusingly, call this a “maximum entropy” model. Entropy of course is a concept taken from the physical sciences, and the maximum entropic state for any system is one in which all useful energy has been converted into heat. Not a good thing. The idea has long been adopted into information theory, re-conceiving useful energy as signal and heat as noise. Thus, unsurprisingly, early efforts to introduce entropy into finance have seen entropy as something to be minimized.
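There is a standard way to see why maximizing entropy is not perverse here, and it connects to the authors’ phrase about “the most unpredictable circumstances consistent with the constraints”: among all distributions with a given mean and variance, the normal has the largest differential entropy, meaning it assumes the least beyond what the constraints force. The sketch below, my own check rather than anything in the paper, compares three distributions calibrated to the same variance.

```python
import numpy as np
from scipy.stats import norm, laplace, uniform

# Three distributions, each calibrated to mean 0 and variance 1.
dists = {
    "normal": norm(scale=1.0),                                  # variance = 1
    "laplace": laplace(scale=1 / np.sqrt(2)),                   # 2 * scale^2 = 1
    "uniform": uniform(loc=-np.sqrt(3), scale=2 * np.sqrt(3)),  # range^2 / 12 = 1
}
for name, d in dists.items():
    print(f"{name:8s} var={float(d.var()):.3f} entropy={float(d.entropy()):.4f}")
# The normal wins (about 1.42 nats vs. 1.35 and 1.24): pin down only the
# mean and variance, maximize entropy, and you recover the Gaussian,
# the least-committal distribution consistent with those constraints.
```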
The Question Is Unanswered
Indeed, Geman et al. are aware that their invocation of “maximum entropy” will seem an odd innovation to many of their readers. Most papers invoking entropy “in the mathematical finance literature have used minimization … as an optimization criterion,” they say.
Their use of a “maximum entropy” model (not as a “utility criterion,” of course, but as a way of recognizing “the uncertainty of asset distributions”) is itself not entirely novel, though. They seem to have imported it from the world of development economics. In 2002 Channing Arndt, of the UN’s World Institute for Development Economics Research, with two associates, published an article announcing a “maximum entropy approach” to modeling general equilibrium in developing economies, illustrating it with specific reference to Mozambique.
Geman et al. deserve some credit for their syncretism, their willingness to look in a variety of different places for a solution to the puzzle they’ve set themselves. Still, it seems to this layperson, expert on none of it, that the resulting construction is a ramshackle hut rather than a model. The simple question of why barbells work remains (so far as I can tell) unanswered.