Monthly Archives: June 2015

Financial Acceleration and the Paradox of Flexibility

Macroeconomic models are rigorous, but simple. While their simplicity may lend easier interpretation to otherwise opaque economic phenomena, they often fail to capture key microfoundational realities of the economy that make its behavior diverge from what the models would predict. And for some time, one of the features most crucially lacking from these models was any incorporation of the real-world frictions posed by financial intermediation.

Toward this end, a “financial accelerator” mechanism was adapted into the canon of modern macroeconomics, most notably by economists Ben Bernanke, Mark Gertler, and Simon Gilchrist in 1996. The financial accelerator is meant to make sense out of the recurrent observation that relatively large shocks to the economy seem to originate from proportionally smaller perturbations in the financial markets—hence the need for some sort of acceleration process which magnifies financial fluctuations into broader macroeconomic volatility.

The proposed financial accelerator model works like this: First, there is a change in asset values. This change affects borrowers’ cost of external finance because, in a world of asymmetric information, lenders risk losing their money to default whenever loans are not fully collateralized, so a risk premium must be charged to the extent that collateral is not available. A decline in the value of assets available for collateral therefore raises the cost of external financing, which depresses spending in the economy, which further depresses asset values, and so on in an adverse spiral.
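To make the feedback loop concrete, here is a minimal toy simulation of the spiral in Python. Every coefficient in it (the premium's sensitivity to lost collateral, spending's sensitivity to financing costs, and asset prices' sensitivity to spending) is a made-up illustrative value, not a parameter from the Bernanke-Gertler-Gilchrist model.

```python
# Toy illustration of the financial accelerator feedback loop.
# All coefficients are hypothetical; this is not the BGG model itself.

def simulate_accelerator(initial_shock=-0.05, periods=10,
                         premium_sensitivity=0.6,   # premium response to a fall in collateral
                         spending_sensitivity=0.5,  # spending response to a costlier premium
                         asset_sensitivity=0.4):    # asset-price response to weaker spending
    asset_index = 1.0 + initial_shock   # asset values, 1.0 = pre-shock level
    change = initial_shock              # this period's change in asset values
    path = [asset_index]
    for _ in range(periods):
        premium_change = -premium_sensitivity * change             # less collateral, higher premium
        spending_change = -spending_sensitivity * premium_change   # costlier finance, less spending
        change = asset_sensitivity * spending_change               # weaker spending, lower asset values
        asset_index += change
        path.append(asset_index)
    return path

for t, v in enumerate(simulate_accelerator()):
    print(f"period {t}: asset index = {v:.4f}")
```

With these placeholder numbers the spiral converges: each round of feedback is 0.12 times the last, so the initial 5 percent shock ends up amplified to roughly a 5.7 percent total decline. If the product of the three sensitivities were one or greater, the spiral would never dampen.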

This concept, although not formalized until fairly recently, was not entirely unfathomed by earlier thinkers. In 1933, economist Irving Fisher articulated the related concept of “debt deflation” in order to explain the severity of the Great Depression. His idea was that, in a state of over-indebtedness, distress selling leads to falling asset prices, which, set against nominally-fixed debt obligations, cause declining net worth. Spending then contracts as borrowing is constrained and households attempt to save in order to repair their balance sheets, and profits fall as price deflation presses against debt service costs. The result is a precipitous decline in output, a vicious cycle that only makes itself worse until enough debtors default that the economy can start to recover.

To be clear, the argument is not that any increased (decreased) propensity to save (borrow) is inherently recessionary or prone to trigger such a spiral. Interest rates are the key price mechanism that can adjust up or down to equilibrate saving and borrowing; it is when interest rates are not allowed to fall enough, or they are at the zero lower bound (ZLB) and can go no lower, that an indebted, deflationary spiral can produce a recession in output. And it is this ZLB—or otherwise binding floor on interest rates—situation which has inspired a more recent assertion about the nature of aggregate demand in such a predicament.

Recall that in a New Keynesian economic framework, the aggregate demand (AD) curve represents the quantity of real output demanded at each price level. Likewise, the aggregate supply (AS) curve represents the quantity of real output supplied at each price level.

[Figure: conventional aggregate demand and aggregate supply curves, via Paul Krugman’s blog]

Unlike micro-level demand curves, which slope downward mainly due to micro-level substitution effects (but also income effects), the AD curve is conventionally downward sloping for three reasons: (1) the real balances effect: the price level determines the real value of money holdings, and people spend more when their cash is worth more (and vice versa); (2) the interest rate effect: a higher price level reduces the real value of the money supply, which raises interest rates and depresses interest-sensitive spending (less borrowing, more saving), or the inverse; (3) the exchange rate effect: a higher price level makes domestic goods more expensive relative to foreign goods, reducing exports and raising imports, which lowers GDP (or the inverse).

In reality, it is the second effect—the interest rate effect—which is the most quantitatively significant for an economy like the US. And it is when interest rates get stuck at zero that New Keynesian economists Paul Krugman and Gauti Eggertsson argue the AD curve becomes upward sloping: nominal interest rates cannot, as a practical matter, be lowered any further, while price deflation raises the burden of nominally-fixed debt. This is most likely the case, Krugman argues, given that the real balances effect on spending (from the increased value of cash under deflation) is small compared to the effect of real interest rates under deflation at the ZLB.

[Figure: AD curve turning upward sloping at the zero lower bound, via Paul Krugman’s blog]

Note that this upward sloping AD curve would not apply if recovery inflation were expected to compensate for the below-trend deflation, for that would lower expected real interest rates. But when lower inflation or deflation manifests as a permanent reduction in the price level, interest rates cannot go low enough to circumvent the debt deflation problem. Since high real interest rates (i.e. the expectation of lower prices in the future) encourage creditors to save the payments they receive, the result is a spiral of self-fulfilling deflationary hoarding, leading to a deep and prolonged recessionary slump.
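A minimal numerical sketch of the mechanism, using the Fisher relation (real rate equals nominal rate minus expected inflation) with the nominal rate floored at zero; the figures plugged in are illustrative assumptions, not estimates.

```python
# Why deflation is contractionary at the zero lower bound (ZLB):
# the real interest rate is the nominal rate minus expected inflation,
# and the nominal rate cannot be pushed below zero.

def real_rate(nominal_rate, expected_inflation):
    i = max(nominal_rate, 0.0)  # zero lower bound on the nominal rate
    return i - expected_inflation

# Normal times: the central bank can cut the nominal rate to lower the real rate.
print(real_rate(nominal_rate=0.04, expected_inflation=0.02))   # 0.02
print(real_rate(nominal_rate=0.01, expected_inflation=0.02))   # -0.01

# At the ZLB with expected deflation, a falling price level *raises* the real
# rate, discouraging borrowing and spending; this is the sense in which the
# AD curve can turn upward sloping.
print(real_rate(nominal_rate=0.0, expected_inflation=-0.03))   # 0.03
```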

This is the thrust of Krugman and Eggertsson’s 2010 paper titled “Debt, Deleveraging, and the Liquidity Trap.” The putatively backward-bending AD curve in the floor-bound interest rate environment leads to some counterintuitive conclusions about supply-side phenomena. This includes the “paradox of flexibility,” which suggests that removing downward nominal rigidity would actually make the recession worse because it would raise real interest rates and the real value of debt. (You thought Keynesians were all about sticky wages? Not so fast!) It also includes the “paradox of toil,” which suggests that an increased propensity to work (represented by an outward shift of AS) would actually lead to less employment for basically the same reason.

Broadly interpreted, the unconventional, upward sloping AD curve implies that positive supply shocks reduce output and negative supply shocks raise output. Both of these paradoxes are supplements to the already well-known “paradox of thrift,” which suggests that attempts to save in a ZLB environment (or alternatively, a “liquidity trap”) depress income and actually lead to less saving in the aggregate.

Krugman and Eggertsson’s model has led them to suggest that wage cuts, oil shocks, and the like may not have the effect they are usually believed to have under liquidity trap conditions. This is a modern formulation—one that Keynes did not apparently believe when he insisted that FDR’s fixing of prices and wages under NIRA “probably impedes Recovery.” (However, Keynes had also argued that aggregate wage cuts would not help.) The backward AD curve would imply that adverse restrictions on supply—downward nominal wage rigidity, higher commodity prices, etc.—are, unusually, supportive of output under these special economic circumstances.

So, are Krugman and Eggertsson right? Is aggregate demand upward sloping under debt deflation and floor-bound interest rates, with all its implications for the supply side? I am doubtful. This may be one of those cases, as in the 1970s with inflation expectations, in which a Keynesian model comes up short for an inadequate account of microfoundations. Several factors come to mind that contravene their conclusion:

1) Wage cuts could serve as an alternative to unemployment. Obviously this could do more to preserve output on the micro level, but what about the macro level? Assuming firms’ labor demand schedules are sufficiently elastic, wage cuts could also help preserve output in the aggregate insofar as they raise total labor income by reducing unemployment, which would support consumer spending.

2) Debt is burdensome in proportion to income, not prices. And if wage flexibility promotes income, it would effectively ease the aggregate debt burden. Sure, there may be fewer defaults to discharge debt with shared cuts instead of acute unemployment, but to the extent that households are risk-averse, they would feel less pressure to deleverage, and lenders would have less need to assign a higher risk premium associated with potential default if wage cuts mitigate unemployment.

(In addition to high oil prices and interest rates, an unemployment shock could help explain that “Minsky moment” which effectively lowered the ceiling on acceptable leverage.)

3) Krugman and Eggertsson do not effectively model investment or its returns. Wage flexibility could raise the marginal efficiency of capital, to the extent that a fuller payroll keeps capital producing output and flexible wages improve employers’ (i.e. capital owners’) profit margins on that output. Because asset prices are derived from discounted returns to capital, this could mitigate asset price deflation and actually dampen the financial accelerator mechanism.

Of course, it’s possible that the effect of higher real interest rates from deflation amidst debt overhang could be greater than the effects I suggest. It could be the case that firm-level labor demand is not elastic enough in partial equilibrium, that household balance sheet behavior is not that risk-averse, that profit margins are not much improved by flexibility, and that higher real interest rates are the bigger factor. But the a priori reasoning here only suggests ambiguity, and the positive economic reality can only be ascertained through empirical evidence. We do not know that AD is upward sloping just because interest rates are stuck, real balance effects are small, and debt burdens abound.

What would answer the question is evidence on the impact of supply-side phenomena during periods when interest rates are floor-bound (i.e. supply shocks in a liquidity trap). To that end, here is the historical data showing inverted real wages and output during the Great Depression:

[Figure: real wages (inverted) plotted against output during the Great Depression]

As Scott Sumner explains, a huge part of the variation in output during the Great Depression is explained by real wages. When wages were hiked, economic growth stalled—especially in 1937-38, after the Wagner Act had granted unions significant power to coerce wage hikes. Assuming the Great Depression qualifies as having liquidity trap conditions that could alter the slope of AD (which some may dispute, but Krugman would think so), this is powerful evidence against the view that negative supply shocks are expansionary, and so maybe AD does not readily become upward sloping after all.

So until Krugman, Eggertsson, or anyone can provide a survey of historical evidence to suggest that their upward sloping AD model comes true in practice, it should be safe to assume it does not. And in the meantime, positive supply shocks and wage/price flexibility should be welcomed. It’s true that downward rigidity is, to a great extent, an inexorable reality of the market; moreover, price stickiness in moderation may have some secular benefits. Notwithstanding that, price flexibility is a lubricant necessary to make markets function well, and public policy should not adversely encourage more rigidity through suboptimal safety net and minimum wage policies, and the like.

But one thing is made clearer by this framework: While debt deflation may not invert the AD curve, it plausibly steepens it, and thus flexibility in wages is somewhat less helpful than it otherwise would be. (It follows from Bayesian reasoning.) This raises the relative importance of having a demand-side price level reversion rule in monetary policy. I prefer nominal GDP level targeting, for the way it responds to supply shocks compared to pure price level targeting. (Unlike Krugman, I think it’s ultimately about income rather than inflation, as this post should have made clear.)

Indeed, nominal reversion after a recessionary shock may be the single most important reform that could prevent severe demand-side recessions from ever taking place again. Regardless, we should still seek to enable more flexibility in these situations, because history suggests it would serve to attenuate economic shocks in the most unparadoxical way.

How Astrobiology Refutes Intelligent Design: A Bayesian Approach

Perhaps the most common objection made to naturalistic evolution is that some of the steps in the development of life as we know it seem far too implausible to have occurred as the product of natural forces alone. Proponents of this view advocate something called “intelligent design,” which is, in their words, “[t]he theory…that certain features of the universe and of living things are best explained by an intelligent cause, not an undirected process such as natural selection.”

Note that intelligent design is not the same as young Earth creationism. While young Earth creationism implies intelligent design, the reverse is not necessarily true. It is possible for believers in intelligent design to accept established scientific facts about the age of the Earth and the billions of years over which life incrementally developed on it. What this ID perspective claims about evolution is specifically that some of its developments are too difficult to have arisen naturally, so it posits a role for divine intervention in ordaining such “irreducibly complex” biological structures.

Whereas young Earth creationism does offer directly falsifiable (and false) claims about Earth’s geological and biological history, old Earth ID advocates make the not-so-falsifiable claim that such seemingly implausible steps in life’s evolution only came to be because they were facilitated at the behest of an intelligent designer.

That leaves us with the following two theories: (1) evolution aided by divine intervention to get beyond certain difficult junctures; (2) evolution as wholly the product of undirected natural selection, even where ostensibly implausible. So which is it? Since this question is presently unamenable to direct experimentation, I propose making use of some Bayesian inference.

For Bayesian inference, we need to take prior probabilities and revise them one way or another based on additional evidence. As far as evolution is concerned, the key to understanding how an undirected process of natural selection could spawn complex and varied life forms is appreciating the brute enormity of the time afforded—the Earth has existed for over four billion years, with life developing upon it for most of that span. That would appear to be plenty of time for the perceivably improbable to occur naturally (in incremental steps) sooner or later.

So it is not difficult to understand how natural selection is the scientifically favored and comprehensive theory for what has driven the process of evolution. (If science has already demonstrated it to explain so many facets, why not the rest? Occam’s razor.) And so it does deserve a high prior probability on this reasoning alone.

But that point being made, it is clear that some in our society are still not convinced. So what I would like to show is that bringing to bear evidence from modern astrobiology—typically unincorporated into the evolution pedagogy—makes the case for undirected, naturalistic evolution of life even stronger than it already is.

First, consider Fermi’s Paradox. In 1950, the Italian-born physicist Enrico Fermi pondered why, amidst the very many stars in the nearby universe which could potentially allow advanced civilization to develop, we had not seen evidence of extraterrestrial civilizations (and still have not, as of 2015), nor had we been visited by extraterrestrials (dubious UFO reports notwithstanding). Hence Fermi’s famous question, “Where is everybody?”

Efforts to substantiate an answer (for which gathering empirical evidence is difficult) have been formalized into models like the Drake Equation and, more recently, the Great Filter hypothesis. The idea is that one or more of the conditions for, or successive steps in, the progressive development of life—past and future—are unlikely enough to reduce the incidence of advanced civilizations from the vast number of observable stars down to some figure orders of magnitude closer to zero. Considering that this galaxy could have been colonized many times over by now, and that SETI has yet to detect any ET radio signals (the Great Silence), this suggests there are (barring decidedly inconspicuous aliens) significant filters standing athwart the development of life to a sufficiently advanced stage.
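For reference, here is the Drake Equation itself in a few lines of Python; the parameter values plugged in below are placeholders purely for illustration, not estimates of anything.

```python
# Drake Equation: N = R* * f_p * n_e * f_l * f_i * f_c * L,
# the expected number of detectable civilizations in the galaxy.

def drake(R_star, f_p, n_e, f_l, f_i, f_c, L):
    return R_star * f_p * n_e * f_l * f_i * f_c * L

# Placeholder values. If "hard steps" in evolution make f_i (the fraction of
# life-bearing planets that ever produce intelligence) tiny, N collapses
# toward zero: one Great Filter reading of the Great Silence.
print(drake(R_star=1.5, f_p=0.9, n_e=0.5, f_l=0.5, f_i=1e-6, f_c=0.2, L=10_000))
```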

So what could be filtering against the proliferation of advanced life? To be sure, there are many candidates, and certainly multiple contributing factors. But one prominent explanation is precisely that certain innovative steps in the evolution of life are prohibitively improbable—due to their natural, undirected origins! Even most planets with the basic requisite conditions may never be lucky enough to have their molecular structures collide in just the right way to obtain a novel innovation that significantly enhances the development of life. But some Earths out there would—if only by an aberration of random processes on a microscopic level.

In short, the seeming unlikelihood that certain developments in life would form naturally and the Great Silence are complements to one another, and together they corroborate purely naturalistic evolution against the need for intelligent design arguments. Why, if there were intelligent fine-tuning for life in this universe, would we not expect to see more signs of it elsewhere? Would it not be more commonplace for divine intervention to have cleared the otherwise multitudinous challenges to it? Indeed, this lack of evidence of extraterrestrials pulls our Bayesian probabilities further in the direction of natural evolution from where they were before.

(In case anyone mistakenly thinks it cuts the other way, let me reiterate: The intuition is that the chance of naturally evolving complex biology is quite low. The fact that complex biology did arise here seems out of proportion to that implausibility, which is what persuades some to seek a special explanation like ID. However, the absence of ET in a surveying of the sky suggests that advanced life’s propensity to occur really is close to the low probability associated with natural, non-ID evolution. So Bayesian reasoning uprates natural evolution after observing the lack of ET.)
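To make the direction of that update explicit, here is a toy Bayes calculation. Every number in it is a hypothetical placeholder chosen only to illustrate the logic, not a measured quantity.

```python
# Toy Bayesian update on "undirected natural evolution" vs. "intelligent design",
# given the observation "no extraterrestrial civilizations detected".
# All probabilities are hypothetical placeholders for illustration.

prior_natural = 0.7            # assumed prior P(natural, undirected evolution)
prior_id = 1 - prior_natural   # assumed prior P(intelligent design)

# Likelihood of the Great Silence under each hypothesis: if hard steps are
# rarely surmounted naturally, silence is expected; if a designer routinely
# clears those steps, silence is more surprising.
p_silence_given_natural = 0.95
p_silence_given_id = 0.40

posterior_natural = (p_silence_given_natural * prior_natural) / (
    p_silence_given_natural * prior_natural + p_silence_given_id * prior_id
)
print(f"P(natural | silence) = {posterior_natural:.3f}")  # ~0.85, up from the 0.7 prior
```

Whatever placeholder numbers one prefers, so long as the silence is more probable under undirected evolution than under design, the posterior moves in the direction described above.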

Oh, but there’s more! Not only does Fermi’s Paradox afford the possibility that the undirected, and thus unlikely, quality of evolutionary advancements accounts for the rarity of civilization, but statistical inference from the history of life’s evolution on Earth actually hints toward this explanation. The fact that key biological developments (e.g. the formation of the eukaryotic cell from prokaryotes), and ultimately civilization itself, did not occur until well into Earth’s habitable timeframe suggests they may very well be low-probability events rather than inevitabilities, even on a habitable planet like Earth.

Understanding the novelty of this point requires some knowledge of statistical probability distributions. In the abstract, we can model the occurrence of the “first event” of a critical life advancement by an exponential distribution. To the uninitiated, exponential distributions look like this:

[Figure: the probability density function of an exponential distribution]

At any given instant in time, the event in question has the same constant probability of occurring. Our concern, however, is with the first occurrence; any subsequent occurrence is irrelevant. Because the probability that the event has not yet happened shrinks as time passes, the distribution of the first occurrence exhibits a frontloaded, decaying functional form.

Although an exponential probability distribution extends over an infinite time horizon, the planet’s habitable timeframe does not, so a truncated exponential distribution is what we would need to consider. For events that are more likely, this truncated distribution would be more frontloaded; for events that are less likely, it would be much flatter, approaching the shape of a uniform distribution.
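A quick numerical check of that flattening, with an assumed habitable window and two assumed mean waiting times (all figures hypothetical): an easy step's truncated density is hugely frontloaded, while a hard step's is nearly flat.

```python
import math

# Truncated exponential density on [0, T]:
# f(t) = rate * exp(-rate * t) / (1 - exp(-rate * T))
def truncated_exp_pdf(t, mean, T):
    rate = 1.0 / mean
    return rate * math.exp(-rate * t) / (1.0 - math.exp(-rate * T))

T = 4.7  # assumed habitable window, billions of years
for mean in (0.5, 50.0):  # an "easy" step vs. a "hard" step (assumed mean waits, Gyr)
    ratio = truncated_exp_pdf(0.0, mean, T) / truncated_exp_pdf(T, mean, T)
    print(f"mean wait {mean} Gyr: density at the start is {ratio:,.2f}x the density at the end")
```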

Moreover, if we are considering that more than one such unlikely, probabilistic life development is requisite for civilization, we would want to model two or three or more iterations of these exponential distributions, each successive one within the timeframe remaining after the fruition of the preceding development. And when we combine them (forming a gamma distribution of sorts, truncated to the habitable window), we get a final probability distribution (for when civilization arises) that is backloaded near the end of the timeframe insofar as some component distributions are sufficiently near-uniform, i.e. represent difficult biological innovations that could take, on average, upwards of tens of billions of years to randomly occur on the planet.
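Here is a minimal Monte Carlo sketch of that combined model. The number of steps, their mean waiting times, and the length of the habitable window are all assumptions chosen for illustration; the point is only that, conditional on every step finishing inside the window, hard steps push the completion time toward the end of it.

```python
import random

# Sequential "hard steps" with exponential waiting times in a finite habitable
# window, keeping only runs in which every step completes before the window
# closes (mirroring observer selection). All figures below are assumptions.

WINDOW = 4.7                 # assumed habitable window, billions of years
MEANS = [20.0, 20.0, 20.0]   # assumed mean waiting times of three hard steps, Gyr
TRIALS = 1_000_000

completion_times = []
for _ in range(TRIALS):
    t = 0.0
    for mean in MEANS:
        t += random.expovariate(1.0 / mean)
        if t > WINDOW:
            break
    else:
        completion_times.append(t)   # every step finished inside the window

share = len(completion_times) / TRIALS
avg_position = sum(completion_times) / len(completion_times) / WINDOW
print(f"share of runs completing all steps in time: {share:.5f}")
print(f"average completion point (0 = start of window, 1 = end): {avg_position:.2f}")
```

With three genuinely hard steps, the average completion point lands around three quarters of the way through the window rather than near its middle, which is the backloading described above.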

And this is more or less what our own civilization’s timing in the Earth’s habitable timeframe suggests. Earth’s habitability is believed to have begun around 3.9 billion years ago, give or take; it will become unable to sustain complex life as soon as 0.8 billion years in the future, as the Sun increases in luminosity, heating up the Earth and throttling the carbon cycle. Standardized on a scale of 0 to 1, our existence as a civilization falls at roughly 3.9 / (3.9 + 0.8) ≈ 0.83 of the window, which lends credence to a backloaded distribution, seemingly indicative of there having been one or more low-probability steps in the development of advanced life, completed in great measure out of sheer luck.

Note that what I outline here is a basic model for conceptual elucidation. In a more thorough scientific analysis, I would advise additional considerations: (1) the effects of time-correlated variables like solar luminosity, oxygen level, and temperature in biasing the fecundity of life across the period; (2) climate variation, rather than a purely biological random walk, may account for some of the stochastic nature of life’s progression; (3) “easy steps” (not unlikely, but possibly prolonged as a continuum of imminent steps) should be factored into a more robust model, with the stochastic hard steps then mapped onto the time remaining between them for statistical inference.

As any good statistician would acknowledge, the limitations of this sample are such that we cannot know what the large-sample parameters actually are from this information alone. But it would be careless to think our observation is not useful at all! In fact, it is very useful for updating our priors; here, the distribution of apparent “hard steps” and the fruition of civilization late in Earth’s habitable history persuade us to further update our Bayesian probabilities in favor of advanced life being a low-probability, chance development, even on a habitable planet—i.e. the result of a purely natural evolution.

And this obviates the need for any “God of the gaps” intelligent design theory to explain why sophisticated life arose through intuitively difficult steps, given that it actually did arise with a timing in history and a rarity in the stars suggestive of a function of low-probability, random variables. It is thus not without irony that ID proponents would point to the null evidence of extraterrestrial civilization, when this rarity of civilization actually corroborates evolution as an undirected process that beats the odds to reach advanced life only in places few and far between throughout the universe.

Those who have a firm intellectual grasp on natural selection have long been able to appreciate the sheer magnitude of evolutionary timescales which would allow intuitively unlikely biological innovations to eventually occur. However, observer selection bias amidst time-limited planetary habitability means that even our evolutionary timescale is more likely than not to be an underestimate of how long it would truly take, on average, for such developments to naturally arise on an indefinitely habitable Earth-like planet. Many advocates of natural evolution, including astronomer Carl Sagan, have seemed to regard the evolution of life on Earth as an inevitability of sorts. They have failed to realize the clues to just how unlikely it probably was.

Not only can difficult steps in evolution help us make sense out of the Great Silence and the spread-out timing of life’s key developments across the habitable window, but they also help to explain why so many people still find natural evolution to be implausible in certain aspects. Intuitively, the unguided, natural formation of complex biological structures seems hard, but that’s only because it is hard—whether that means 5 billion or 500 billion expected years for intelligent life to evolve. Yet it is only because we beat the low odds that we are here at all to observe our fortuitous history. The sheer vastness, inhospitability, and superfluity of space and time relative to our civilization’s locale loudly testify that we are a random aberration of complexity amidst an entropic universe.

There are many things more inevitable than advanced life, and one is this—that advanced life in this universe will want to ascribe to a higher intelligence the deliberate creation of such complex biological structures, given the extraordinary unlikelihood of their natural, chance arrival as intuitively grasped by intelligent beings still limited in their perception of the truly immense depth of space and time and the rare possibilities it affords on sparse occasion throughout.

Indeed, intelligent design explanations of life’s origins would be more persuasive in a world so very limited in space and time. But if we have arisen by sheer luck against daunting odds, an old, vast, and ostensibly uncivilized universe is exactly what we would expect to observe in our surroundings. Given the abundance of space and time, and the null evidence of any other advanced life within the current limits of our detection, we are left with no good reason to conclude anything else.