Financial Acceleration and the Paradox of Flexibility

Macroeconomic models are rigorous, but simple. While their simplicity may make otherwise opaque economic phenomena easier to interpret, they often fail to capture key microfoundational realities that make the economy behave differently from what the models would predict. And for some time, one of the features most conspicuously missing from these models was any incorporation of the real-world frictions posed by financial intermediation.

Toward this end, a “financial accelerator” mechanism was adapted into the canon of modern macroeconomics, most notably by economists Ben Bernanke, Mark Gertler, and Simon Gilchrist in 1996. The financial accelerator is meant to make sense out of the recurrent observation that relatively large shocks to the economy seem to originate from proportionally smaller perturbations in the financial markets—hence the need for some sort of acceleration process which magnifies financial fluctuations into broader macroeconomic volatility.

The proposed financial accelerator mechanism works like this: First, there is a change in asset values. This change affects borrowers’ cost of external finance because, in a world of asymmetric information, lenders risk losing their money to default when loans are not fully collateralized, so a risk premium must be charged to the extent that collateral is not available. A decline in the value of assets available as collateral therefore raises the cost of external financing, which depresses spending in the economy, which drives asset values down further, and so on in an adverse spiral.
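
To see the amplification concretely, here is a minimal numerical sketch of that feedback loop. It is not the Bernanke-Gertler-Gilchrist model itself; the linear forms and the parameter values are assumptions chosen only to illustrate how a modest initial shock gets magnified as the loop iterates.

```python
# A toy feedback loop, not the Bernanke-Gertler-Gilchrist model itself.
# The linear forms and parameter values below are assumptions for illustration.

def accelerator(asset_shock=-0.05, rounds=20,
                k_premium=0.8,   # finance-premium rise per unit fall in collateral value
                k_spend=0.6,     # spending fall per unit rise in the premium
                k_asset=0.8):    # asset-price fall per unit fall in spending
    asset_change = asset_shock
    spending_path = []
    for _ in range(rounds):
        premium_change = -k_premium * asset_change     # weaker collateral -> costlier external finance
        spending_change = -k_spend * premium_change    # costlier finance -> less spending
        spending_path.append(spending_change)
        asset_change = k_asset * spending_change       # less spending -> weaker asset prices, repeat
    print(f"first-round spending decline:      {spending_path[0]:.4f}")
    print(f"cumulative decline after feedback: {sum(spending_path):.4f}")
    # With loop gain k_premium * k_spend * k_asset < 1 the spiral converges, and the
    # cumulative decline is the first-round decline scaled up by 1 / (1 - gain).

accelerator()
```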

This concept, although not formalized until fairly recently, was not entirely unfathomed by earlier thinkers. In 1933, economist Irving Fisher articulated the related concept of “debt deflation” to explain the severity of the Great Depression. His idea was that in a state of over-indebtedness, distress selling leads to falling asset prices, which, set against nominally fixed debt obligations, cause declining net worth. Spending then contracts as borrowing is constrained and households and firms attempt to save their way back to healthier balance sheets, while profits fall as price deflation presses against debt service costs. The result is a precipitous decline in output, a vicious cycle that only makes itself worse until enough debtors default that the economy can start to recover.

To be clear, the argument is not that any increased (decreased) propensity to save (borrow) is inherently recessionary or prone to trigger such a spiral. Interest rates are the key price mechanism that can adjust up or down to equilibrate saving and borrowing; it is when interest rates are not allowed to fall enough, or when they are at the zero lower bound (ZLB) and can go no lower, that an indebted, deflationary spiral can produce a recession in output. And it is this situation of a ZLB, or an otherwise binding floor on interest rates, which has inspired a more recent assertion about the nature of aggregate demand in such a predicament.

Recall that in a New Keynesian economic framework, the aggregate demand (AD) curve represents the quantity of real output demanded at each price level. Likewise, the aggregate supply (AS) curve represents the quantity of real output supplied at each price level.

[Figure: aggregate demand and aggregate supply curves, via Paul Krugman]

Unlike micro-level demand curves, which slope downward mainly due to micro-level substitution effects (but also income effects), the AD curve is conventionally downward sloping for three reasons: (1) the real balances effect, which is the wealth effect in cash—i.e. people spend more when they have more, as well as the inverse, and the price level determines the real value of this cash; (2) the interest rate effect, which is that a higher price level reduces the real value of savings, which raises interest rates and depresses spending (less borrowing, more saving), or the inverse; (3) the exchange rate effect, which is how a higher price level makes domestic goods more expensive and foreign goods cheaper, thus reducing exports and raising imports to decrease GDP (or the inverse).

In reality, it is the second effect—the interest rate effect—which is the most quantitatively significant for an economy like the US. And it is when interest rates get stuck at zero that New Keynesian economists Paul Krugman and Gauti Eggertsson argue that the AD curve becomes upward sloping, owing to the practical inability of nominal interest rates to be lowered any further and the increasing burden of nominally fixed debt as price deflation ensues. This is most likely the case, Krugman argues, given that the real balances effect on spending (from the increased value of cash under deflation) is small compared to the effect of deflation on real interest rates at the ZLB.

[Figure: aggregate demand curve turning upward sloping at the zero lower bound, via Paul Krugman]

Note that this upward sloping AD curve would not apply when recovery inflation is expected to compensate for the below-trend deflation, for that would have the effect of lowering expected real interest rates. But when lower inflation or deflation manifests as a permanent reduction in the price level, interest rates cannot go low enough to circumvent the debt deflation problem. Since high real interest rates (the counterpart of expecting lower prices in the future) encourage creditors to save the payments they receive, the result is a spiral of self-fulfilling deflationary hoarding, leading to a deep and prolonged recessionary slump.
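
A stripped-down numerical sketch may help fix intuition here. This is not the Eggertsson-Krugman model itself; the functional forms and parameter values are my own illustrative assumptions. Demand is depressed by the real interest rate and by debtors’ real debt burden D/P, and the central bank cuts its nominal rate as the price level falls but cannot go below zero.

```python
# A toy AD schedule, not the Eggertsson-Krugman model. Assumed forms: output demanded
# falls with the real interest rate and with debtors' real debt D/P; the policy rate
# tracks the price level but is floored at zero; expected inflation is set to zero
# (the "permanently lower price level" case described above).

D = 100.0                      # nominal debt owed by constrained borrowers
c_rate, c_debt = 400.0, 0.5    # sensitivities to the real rate and to D/P

def quantity_demanded(P):
    policy_rate = max(0.0, 0.02 + 0.20 * (P - 1.0))  # cut as P falls, floored at zero
    real_rate = policy_rate - 0.0                     # no expected recovery inflation
    return 120.0 - c_rate * real_rate - c_debt * (D / P)

for P in (1.10, 1.00, 0.90, 0.80, 0.70):
    print(f"P = {P:.2f}   output demanded = {quantity_demanded(P):.1f}")

# Above the floor (P >= 0.90 here), a lower price level brings a lower real rate and
# more output demanded: the usual downward slope. Once the rate is pinned at zero, a
# lower price level only inflates D/P, and output demanded falls: the curve bends upward.
```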

This is the thrust of Krugman and Eggertsson’s 2010 paper titled “Debt, Deleveraging, and the Liquidity Trap.” The putatively backward-bending AD curve in the floor-bound interest rate environment leads to some counterintuitive conclusions about supply-side phenomena. This includes the “paradox of flexibility,” which suggests that removing downward nominal rigidity would actually make the recession worse because it would raise real interest rates and the real value of debt. (You thought Keynesians were all about sticky wages? Not so fast!) It also includes the “paradox of toil,” which suggests that an increased propensity to work (represented by an outward shift of AS) would actually lead to less employment for basically the same reason.

Broadly interpreted, the unconventional, upward sloping AD curve implies that positive supply shocks reduce output and negative supply shocks raise output. Both of these paradoxes are supplements to the already well-known “paradox of thrift,” which suggests that attempts to save in a ZLB environment (or alternatively, a “liquidity trap“) depress income and actually lead to less saving in the aggregate.

Krugman and Eggertsson’s model has led them to suggest that wage cuts, oil shocks, and the like may not have the effect that they are usually believed to have in liquidity trap conditions. This is a modern formulation, and one that Keynes apparently did not hold when he insisted that FDR’s fixing of prices and wages under NIRA “probably impedes Recovery.” (However, Keynes had also argued that aggregate wage cuts would not help.) The backward-bending AD curve would imply that adverse restrictions on supply, such as downward nominal wage rigidity and higher commodity prices, are unusually supportive of output under these special economic circumstances.

So, are Krugman and Eggertsson right? Is aggregate demand upward sloping under debt deflation and floor-bound interest rates, with all its implications for the supply side? I am doubtful. This may be one of those cases, as in the 1970s with inflation expectations, in which a Keynesian model comes up short for an inadequate account of microfoundations. Several factors come to mind that contravene their conclusion:

1) Wage cuts could serve as an alternative to unemployment. Obviously this could do more to preserve output on the micro level, but what about the macro level? Assuming firms’ labor demand schedules are sufficiently elastic, wage cuts could help preserve output there as well, insofar as less unemployment improves total labor income, which would promote consumer spending.

2) Debt is burdensome in proportion to income, not prices. And if wage flexibility promotes income, it would effectively ease the aggregate debt burden. Sure, there may be fewer defaults to discharge debt with shared cuts instead of acute unemployment, but to the extent that households are risk-averse, they would feel less pressure to deleverage, and lenders would have less need to assign a higher risk premium associated with potential default if wage cuts mitigate unemployment.

(In addition to high oil prices and interest rates, an unemployment shock could help explain that “Minsky moment” which effectively lowered the ceiling on acceptable leverage.)

3) Krugman and Eggertsson do not effectively model investment or its returns. Wage flexibility could raise the marginal efficiency of capital, to the extent that a fuller payroll keeps capital employed in producing output and improves employers’ (i.e., capital owners’) profit margins on that output. Because asset prices are derived from discounted returns to capital, this could mitigate asset price deflation and actually dampen the financial accelerator mechanism.

Of course, it’s possible that the effect of higher real interest rates from deflation amidst debt overhang could be greater than the effects I suggest. It could be the case that firm-level labor demand is not elastic enough in partial equilibrium, that household balance sheet behavior is not that risk-averse, that profit margins are not much improved by flexibility, and that higher real interest rates are the bigger factor. But the a priori reasoning here only suggests ambiguity, and the positive economic reality can only be ascertained through empirical evidence. We do not know that AD is upward sloping just because interest rates are stuck, real balance effects are small, and debt burdens abound.

What would answer the question is evidence on the impact of supply-side phenomena during periods when interest rates are floor-bound (i.e. supply shocks in a liquidity trap). To that end, here are historical data showing real wages (inverted) against output during the Great Depression:

[Figure: real wages (inverted) and output during the Great Depression]

As Scott Sumner explains, a huge part of the variation in output during the Great Depression is explained by real wages. When wages were hiked, economic growth stalled—especially in 1937-38, after the Wagner Act had granted unions significant power to coerce wage hikes. Assuming the Great Depression qualifies as having liquidity trap conditions that could alter the slope of AD (which some may dispute, but Krugman would think so), this is powerful evidence against the view that negative supply shocks are expansionary, and so maybe AD does not readily become upward sloping after all.

So until Krugman, Eggertsson, or anyone else can provide a survey of historical evidence suggesting that their upward sloping AD model comes true in practice, it should be safe to assume it does not. And in the meantime, positive supply shocks and wage/price flexibility should be welcomed. It’s true that downward rigidity is, to a great extent, an inexorable reality of the market; moreover, price stickiness in moderation may have some secular benefits. Notwithstanding that, price flexibility is a lubricant necessary to make markets function well, and public policy should not encourage still more rigidity through suboptimal safety net and minimum wage policies, and the like.

But one thing is made clearer by this framework: While debt deflation may not invert the AD curve, it plausibly steepens it, and thus flexibility in wages is somewhat less helpful than it otherwise would be. (It follows from Bayesian reasoning.) This raises the relative importance of having a demand-side price level reversion rule in monetary policy. I prefer nominal GDP level targeting, for the way it responds to supply shocks compared to pure price level targeting. (Unlike Krugman, I think it’s ultimately about income rather than inflation, as this post should have made clear.)

Indeed, nominal reversion after a recessionary shock may be the single most important reform that could prevent severe demand-side recessions from ever taking place again. Regardless, we should still seek to enable more flexibility in these situations, because history suggests it would serve to attenuate economic shocks in the most unparadoxical way.

How Astrobiology Refutes Intelligent Design: A Bayesian Approach

Perhaps the most common objection made to naturalistic evolution is that some of the steps in the development of life as we know it seem far too implausible to have occurred as the product of natural forces alone. Proponents of this view advocate something called “intelligent design,” which is, in their words, “[t]he theory…that certain features of the universe and of living things are best explained by an intelligent cause, not an undirected process such as natural selection.”

Note that intelligent design is not the same as young Earth creationism. While young Earth creationism implies intelligent design, the reverse is not necessarily true. It is possible for believers in intelligent design to accept established scientific facts about the age of the Earth and the billions of years over which life incrementally developed on it. What this ID perspective claims about evolution is specifically that some of its developments are too difficult to have arisen naturally, and so it assigns a role to divine intervention in ordaining such “irreducibly complex” biological structures.

Whereas young Earth creationism does offer directly falsifiable (and false) claims about Earth’s geological and biological history, old Earth ID advocates make the not-so-falsifiable claim that such seemingly implausible steps in life’s evolution only came to be because they were facilitated at the behest of an intelligent designer.

That leaves us with the following two theories: (1) evolution aided by divine intervention to get beyond certain difficult junctures; (2) evolution as wholly the product of undirected natural selection, even where ostensibly implausible. So which is it? Since this question is presently unamenable to direct experimentation, I propose making use of some Bayesian inference.

For Bayesian inference, we need to take prior probabilities and revise them one way or another based on additional evidence. As far as evolution is concerned, the key to understanding how an undirected process of natural selection could spawn such complex and varied life forms is appreciating the brute enormity of the time afforded—the Earth has existed for over four billion years for life to develop upon it. That would appear to be plenty of time for the perceivably improbable to occur naturally (in incremental steps) sooner or later.

So it is not difficult to understand how natural selection is the scientifically favored and comprehensive theory for what has driven the process of evolution. (If science has already demonstrated it to explain so many facets, why not the rest? Occam’s razor.) And so it does deserve a high prior probability on this reasoning alone.

But that point being made, it is clear that some in our society are still not convinced. So what I would like to show is that bringing to bear evidence from modern astrobiology—typically unincorporated into the evolution pedagogy—makes the case for undirected, naturalistic evolution of life even stronger than it already is.

First, consider Fermi’s Paradox. In 1950, Italian physicist Enrico Fermi pondered why, amidst the very many stars in the nearby universe which could potentially allow advanced civilization to develop, we had not seen evidence of extraterrestrial civilizations (and still have not, by 2015), nor had we been visited by extraterrestrials (dubious UFO reports notwithstanding). Hence Fermi’s famous question, “Where is everybody?”

Efforts to substantiate the answer (for which gathering empirical evidence is difficult) have been formalized into models like the Drake Equation and, more recently, the Great Filter hypothesis. The idea is that one or more conditions for, or successive steps in, the progressive development of life—past and future—are unlikely enough to reduce the incidence of advanced civilizations from the vast number of observable stars down to some figure that is orders of magnitude closer to zero. Considering that this galaxy could have been colonized many times over by now, and that SETI has yet to detect any ET radio signals (the Great Silence), there appear to be (barring decidedly inconspicuous aliens) significant filters standing athwart the development of life to a sufficiently advanced stage.

So what could be filtering against the proliferation of advanced life? To be sure, there are many candidates, and certainly multiple contributing factors. But one prominent explanation is precisely that certain innovative steps in the evolution of life are prohibitively unlikely—due to their natural, undirected origins! Even most planets with the basic requisite conditions may not be lucky enough to have a few of their molecular structures collide in just the right way to obtain a novel innovation that will significantly enhance the development of life upon them. But some Earths out there would—if only by an aberration of random processes on a microscopic level.

In short, the seeming unlikelihood of the natural formation of certain life developments and the Great Silence are complements to one another, and together they corroborate purely naturalistic evolution against the need for intelligent design arguments. Why, if there were intelligent fine-tuning for life in this universe, would we not expect to see more signs of it elsewhere? Would it not be more commonplace for divine intervention to have cleared the otherwise multitudinous challenges to it? Indeed, this lacking evidence of extraterrestrials pulls our Bayesian probabilities further in the direction of natural evolution from where they were before.

(In case anyone mistakenly thinks it cuts the other way, let me reiterate: The intuition is that the chance of naturally evolving complex biology is quite low. The fact that complex biology did arise suggests its occurrence is disproportionate to this implausibility, which is what persuades some to seek a special explanation like ID. However, the absence of ET in our surveys of the sky suggests that advanced life’s propensity to occur is closer to the low probability associated with natural, non-ID evolution. So Bayesian reasoning uprates natural evolution after observing the lack of ET.)
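
To make the direction of that update concrete, here is a minimal worked Bayes calculation. The likelihood numbers are purely illustrative assumptions on my part; the only point is that if the Great Silence is more probable under undirected evolution than under routine intelligent design, then observing silence must shift probability toward the former.

```python
# Illustrative numbers only: the likelihoods below are assumptions chosen to show
# the direction of the update, not estimates of the actual probabilities.

p_silence_given_natural = 0.9   # hard, unaided steps -> civilizations rare -> silence expected
p_silence_given_design  = 0.3   # routine intervention -> civilizations common -> silence surprising

def posterior_natural(prior_natural):
    prior_design = 1.0 - prior_natural
    evidence = (p_silence_given_natural * prior_natural
                + p_silence_given_design * prior_design)
    return p_silence_given_natural * prior_natural / evidence

for prior in (0.5, 0.7, 0.9):
    print(f"prior P(natural) = {prior:.2f} -> posterior given the Great Silence = {posterior_natural(prior):.2f}")
# Whatever the prior, the silence multiplies the odds on the natural hypothesis
# by the likelihood ratio 0.9 / 0.3 = 3.
```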

Oh, but there’s more! Not only does Fermi’s Paradox afford the possibility that the undirected, and thus unlikely, nature of evolutionary advancements accounts for the rarity of civilization, but statistical inference from the history of life’s evolution on Earth actually hints toward this explanation. The fact that key biological developments (e.g. the formation of a eukaryotic cell from prokaryotes), and ultimately civilization itself, did not occur until well into Earth’s habitable timeframe suggests they may very well be low-probability events rather than inevitabilities, even on a habitable planet like Earth.

Understanding the novelty of this point requires some knowledge of statistical probability distributions. In the abstract, we can model the occurrence of the “first event” of a critical life advancement by an exponential distribution. To the uninitiated, exponential distributions look like this:

[Figure: probability density function of an exponential distribution]

At any given instant in time, the event in question has the same constant probability of occurring (a constant hazard rate). However, since our concern is with the “first” occurrence, any subsequent occurrence is irrelevant, and so the probability density of that first occurrence takes a frontloaded, decaying functional form.

Although an exponential distribution extends indefinitely in time, the planet’s habitable timeframe does not. So a truncated exponential distribution is what we need to consider. For events that are more likely, this truncated distribution is heavily frontloaded; for events that are less likely, it is much flatter, approaching the shape of a uniform distribution.
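
Here is a quick numerical check of that claim. The window length and the two mean waiting times are illustrative assumptions; the point is just how the conditional timing spreads out as the step gets harder.

```python
import numpy as np

T = 4.7e9   # assumed habitable window in years, for illustration

def conditional_cdf(t, rate):
    """CDF of the first occurrence, conditional on it happening within the window T."""
    return (1 - np.exp(-rate * t)) / (1 - np.exp(-rate * T))

for label, mean_wait in [("easy step (mean wait 0.5 Gyr)", 0.5e9),
                         ("hard step (mean wait 50 Gyr)", 50.0e9)]:
    rate = 1.0 / mean_wait
    thirds = [conditional_cdf(T * k / 3, rate) - conditional_cdf(T * (k - 1) / 3, rate)
              for k in (1, 2, 3)]
    print(f"{label}: mass in each third of the window = "
          + ", ".join(f"{m:.2f}" for m in thirds))

# The easy step is heavily frontloaded (~0.96, 0.04, 0.00); the hard step, conditional
# on happening at all, is spread almost uniformly (~0.34, 0.33, 0.32).
```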

Moreover, if we are considering that there is more than one such unlikely, probabilistic life development requisite for civilization, we would want to model two or three or more iterations of these exponential distributions, each successive one within the timeframe remaining after the fruition of the preceding development. And when we combine them (the sum of independent exponential waiting times follows a gamma distribution), we get a final probability distribution for when civilization arises that, conditional on all the steps completing within the window, is backloaded toward the end of the timeframe insofar as some component distributions are sufficiently near-uniform, i.e. represent difficult biological innovations that could take, on average, upwards of tens of billions of years to randomly occur on the planet.

And this is more or less what our own civilization’s timing in the Earth’s habitable timeframe suggests. Earth’s habitability is believed to have begun around 3.9 billion years ago, give or take; it will become unable to sustain complex life as soon as 0.8 billion years in the future as the Sun increases in luminosity, heating up the Earth and throttling the carbon cycle. Standardized on a scale of 0 to 1, our existence as a civilization is around 0.83 in the window, which lends credence to a backloaded distribution, seemingly indicative of there having been one or more low-probability steps in the development of advanced life, which were completed in great measure out of sheer luck.
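
Here is a small Monte Carlo sketch of that combined model. Everything in it is an assumption chosen for illustration (five hard steps, each with a mean waiting time of twice the habitable window); it simply shows that the rare planets where all the steps happen to fit inside the window tend to finish them late in that window.

```python
import numpy as np

# Monte Carlo sketch of the hard-steps timing argument. The habitable window is
# normalized to 1; the five steps with mean waits of 2.0 windows each are
# illustrative assumptions, not estimates.

rng = np.random.default_rng(0)
n_steps, mean_wait, n_runs = 5, 2.0, 2_000_000

waits = rng.exponential(mean_wait, size=(n_runs, n_steps))
finish_times = waits.sum(axis=1)
lucky = finish_times[finish_times <= 1.0]   # planets where every step completed in time

print(f"fraction of planets completing all {n_steps} steps: {len(lucky) / n_runs:.1e}")
print(f"mean completion time among those lucky planets: {lucky.mean():.2f} of the window")
# Success is rare, and when it happens it clusters late in the window, close to
# n/(n+1) of the habitable span (5/6 here), roughly where we find ourselves (~0.83).
```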

Note that what I outline here is a basic model for conceptual elucidation. In a more thorough scientific analysis, I would advise additional considerations: (1) the effects of time-correlated variables like solar luminosity, oxygen level, and temperature in biasing the fecundity of life across the period; (2) climate variation, rather than a purely biological random walk, may account for some of the stochastic nature of life’s progression; (3) “easy steps” (not unlikely, but possibly prolonged as a continuum of steps each bound to occur) should be factored into a more robust model, with the stochastic hard steps then mapped onto the remaining, intervening time for statistical inference.

As any good statistician would acknowledge, a sample this limited cannot by itself tell us what the true large-sample parameters actually are. But it would be careless to think our observation is not useful at all! In fact, it is very useful for updating our priors one way or the other; here, the distribution of apparent “hard steps” and the timing of civilization’s fruition relative to Earth’s habitable window persuade us to further update our Bayesian probabilities in favor of advanced life being a low-probability, chance development, even on a habitable planet—i.e. the result of a purely natural evolution.

And this obviates the need for any “God of the gaps” intelligent design theory to explain why sophisticated life arose through intuitively difficult steps, given that it actually did arise with a timing in history and a rarity in the stars suggestive of being a function of low-probability, random variables. It is thus not without irony that ID proponents would point to the null evidence of extraterrestrial civilization, when actually this rarity of civilization corroborates evolution as an undirected process which can only surpass the odds to reach advanced life in places few and far between throughout the universe.

Those who have a firm intellectual grasp on natural selection have long been able to appreciate the sheer magnitude of evolutionary timescales, which would allow intuitively unlikely biological innovations to eventually occur. However, observer selection bias amidst time-limited planetary habitability means that even our evolutionary timescale is more likely than not an underestimate of how long it would truly take, on average, for such developments to naturally arise on an indefinitely habitable Earth-like planet. Many advocates of natural evolution, including astronomer Carl Sagan, have seemed to regard the evolution of life on Earth as an inevitability of sorts. They have failed to notice the clues to just how unlikely it probably was.

Not only do difficult steps in evolution help us make sense out of the Great Silence and the spaced-out timing of life’s key developments across the habitable window, but they also help to explain why so many people still find natural evolution to be implausible in certain aspects. Intuitively, the unguided, natural formation of complex biological structures seems hard, but that’s only because it is hard—whether that means 5 billion or 500 billion expected years for intelligent life to evolve. Yet it is only because we beat the low odds that we are here at all to observe our fortuitous history. The sheer vastness, inhospitability, and superfluity of space and time with respect to our civilization’s locale loudly testify that we are a random aberration of complexity amidst an entropic universe.

There are many things more inevitable than advanced life, and one is this—that advanced life in this universe will want to ascribe to higher intelligence the deliberate creation of such complex, biological structures, given the extraordinary unlikelihood of its natural, chance arrival as intuitively grasped by intelligent beings still limited in their perceptions of the truly immense depth of space and time and the rare possibilities it affords on sparse occasion throughout.

Indeed, intelligent design explanations of life’s origins would be more persuasive in a world so very limited in space and time. But if we have arisen by sheer luck against daunting odds, an old, vast, and ostensibly uncivilized universe is exactly what we would expect to observe in our surroundings. Given the abundance of space and time, and the null evidence of any other advanced life within the current limits of our detection, we are left with no good reason to conclude anything else.

 

Epistemology of the Iraq War

An interesting development in the Republican presidential field over the past week or so has been the hasty emergence of a consensus, following Jeb Bush’s interview fumble with Megyn Kelly, that the invasion of Iraq was retrospectively a mistake, having been predicated on faulty intelligence.

For Iraq War cynics with an interest in the Republican Party, this is a positive development. But it is also a rather superficial one, in that it does not go so far as to reassess the decision rules which are used to implement foreign policy in a world of limited information.

Consider the faulty intelligence. With respect to the 2003 invasion, it is not enough to note that the intelligence failed regarding WMDs, for the possibility of intelligence failure is something which must be considered a priori. On the one hand, intelligence which suggests the existence of an active WMD program in Iraq could be a false positive; alternatively, intelligence suggesting there is no ongoing WMD effort may be a false negative. This potential for intelligence failure attenuates the correspondence of intelligence conclusions with real-world truth, and so any concomitant foreign policy decision needs to be humbled by this.
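
A toy Bayes calculation can make the point about a priori error rates concrete. The prior and the error rates below are invented for illustration only; they simply show how far even a “positive” intelligence finding can sit from certainty, which is the uncertainty a sound decision rule has to price in.

```python
# Illustrative numbers only: the prior and the error rates are assumptions,
# not claims about the actual quality of the 2002 intelligence.

prior_wmd = 0.3        # prior probability of an active WMD program
true_positive = 0.8    # P(intelligence says "active" | program exists)
false_positive = 0.2   # P(intelligence says "active" | no program)

posterior = (true_positive * prior_wmd) / (
    true_positive * prior_wmd + false_positive * (1 - prior_wmd))
print(f"P(program | positive intelligence) = {posterior:.2f}")   # about 0.63
# Even a fairly reliable positive finding leaves a sizeable chance of error, and the
# expected costs of acting on a false positive belong in the decision rule up front.
```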

Similar considerations follow from other factors. For one, the Iraq War has testified to the reality that nation-building is laden with pitfalls, especially in such a volatile and fractious region of the world. Another is the potential for the incoherence of future foreign policy decisions to diminish the benefits of the invasion. Examples arguably include (1) the Bush Administration’s 2003 refusal of negotiations with Iran, (2) NATO’s 2011 air campaign against the Libyan regime that had relinquished its WMD program months after the Iraq invasion, and (3) the Obama Administration’s failure to extend a Status of Forces Agreement, contrary to what the Bush Administration had anticipated.

Considering all the ways in which the hypothesized net benefits of an invasion and occupation of this sort are easily depreciated, the real question that ought to be asked is not what Megyn Kelly asked Jeb Bush about “what we know now.” It’s about the decision rules that, in hindsight, ought to have been applied given what was believed then, because in a world of limited information, we cannot know in advance everything that we will later come to know and wish we had known beforehand.

Instead of justifying the invasion because “the intelligence says active WMD program,” the possibility of erroneous intelligence deserved stronger consideration then.

Instead of justifying the invasion because “it will promote non-proliferation of WMDs,” the possibility of this benefit being stymied by future foreign policy decisions deserved stronger consideration then.

Instead of justifying the invasion because “we can maintain a troop presence for long enough,” the possibility of troops being withdrawn out of political fatigue deserved stronger consideration then.

Now, it’s possible that these objections were considered at the time, and they ultimately did not weigh enough against the factors which motivated the Bush Administration to wage war in Iraq. And indeed, it is in part the experience of the past twelve years that has persuaded me to conclude with as much confidence as I have that these merited more consideration then. (Of course, let’s not forget there were those who had enough foresight to have disapproved of the war before it unfolded.)

The crucial distinction here is between particulars and priors. It may seem subtle, but it’s important. You see, it is not enough to say that in reflection, the particular prevailing beliefs in 2002 were in error. Those who steer foreign policy need to go beyond that to updating their priors, such that the same particulars being confronted today would be processed through new criteria to meet an updated decision rule.

Ultimately, it is the experience of the Iraq War which suggests that the decision made in 2002 was quite arguably inappropriate, not just based on today’s knowledge, but based on the information promulgated back in 2002. It’s not that the intelligence just so happened to be wrong; it’s that the intelligence had the potential for error, nation-building in the sectarian Middle East is an inherently arduous exercise, and so on. Learning from this experience compels reasonable people to conclude that, through the lens of our updated prior beliefs of the world and revised decision rules for making war, a different decision today based solely on information resembling what was believed then could very well be preferable.

And it is this consequential point which most Republican candidates are failing to address.

[Image: Donald Rumsfeld quote]

The Consequential Saving-Investment Gap

Now that he’s retired from his eight-year stint as chairman of the Federal Reserve, Ben Bernanke is working as a “Distinguished Fellow” at the Brookings Institution, and has recently started his own blog on their website. Regardless of what you may think about monetary policy under his tenure as Fed chairman (I know I have my reservations), his recent post, “Why are interest rates so low?” does clear up some common misconceptions about interest rates and the interpretation thereof relevant to the stance of monetary policy. Here’s Bernanke:

If you asked the person in the street, “Why are interest rates so low?”, he or she would likely answer that the Fed is keeping them low. That’s true only in a very narrow sense. The Fed does…set the benchmark nominal short-term interest rate. The Fed’s policies are also the primary determinant of inflation…and inflation trends affect interest rates…The Fed’s ability to affect real [inflation-adjusted] rates of return, especially longer-term real rates, is transitory and limited. Except in the short run, real interest rates are determined by a wide range of economic factors, including prospects for economic growth—not by the Fed.

To understand why this is so, it helps to introduce the concept of the equilibrium [Wicksellian] real interest rate…the real interest rate consistent with full employment of labor and capital resources, perhaps after some period of adjustment. Many factors affect the equilibrium rate, which can and does change over time. In a rapidly growing, dynamic economy, we would expect the equilibrium interest rate to be high, all else equal, reflecting the high prospective return on capital investments. In a slowly growing or recessionary economy, the equilibrium real rate is likely to be low, since investment opportunities are limited and relatively unprofitable. Government spending and taxation policies also affect the equilibrium real rate…because government borrowing diverts savings away from private investment.

If the Fed wants to see full employment of capital and labor resources…its task amounts to…push[ing] those rates toward levels consistent with…its best estimate of the equilibrium rate…If the Fed were to try to keep market rates persistently too high, relative to the equilibrium rate, the economy would slow (perhaps falling into recession), because capital investments (and other long-lived purchases, like consumer durables) are unattractive when the cost of borrowing set by the Fed exceeds the potential return on those investments. Similarly, if the Fed were to push market rates too low…the economy would eventually overheat, leading to inflation…The bottom line is that the state of the economy…ultimately determines the real rate of return attainable by savers and investors. The Fed influences market rates but not in an unconstrained way…

This sounds very textbook-y, but failure to understand this point has led to some confused critiques of Fed policy.

Exactly right. When we casually examine where the Fed has set interest rates over time, it’s important to recognize that the Fed does not set these rates in a vacuum. The optimal interest rate is not one that necessarily remains steady over time, but can and does fluctuate based on economic conditions. In a weaker economy, lower investment demand and an elevated propensity to save can naturally be expected to lower the equilibrium interest rate. But because the Fed targets a key interest rate—the federal funds rate, an overnight interbank lending rate—to mediate its provision of the monetary base, it must make deliberate adjustments to its own interest rate target. Given that the Fed has control over the monetary base, it must influence short-term interest rates one way or another. It does not make much conceptual sense for an extant Fed to “do nothing” when, by its very existence, it is doing something to the monetary base (and, by proximity, interest rates) even if that means not changing it at all. (It wouldn’t make sense for a state-run oil monopoly with market power to “do nothing” with respect to oil prices in its market, would it?)

This means that as the Fed targets a lower and lower interest rate during a recession, it is not necessarily the case that the Fed is pushing interest rates too low. Interest rates could be too low, too high, or just right, but we should expect them to fall in that situation. (And vice versa for raising interest rates in a recovering economy.) Now, the fact that the Fed has to make certain adjustments may suggest that it has previously erred with respect to its interest rate decisions and is playing catch-up (for instance, by leaving interest rates too low and then hiking them swiftly to belatedly fight off inflation). But the bottom line is that just because the nominal interest rate is historically low, or even zero, does not necessarily mean that the real interest rate is too low relative to the equilibrium/natural/Wicksellian interest rate.
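
A schematic example of the point (all numbers invented for illustration): the stance of policy is better judged by the real policy rate relative to some estimate of the natural rate than by the nominal rate’s distance from its own history.

```python
# Illustrative scenarios; the figures are assumptions, not estimates of any actual year.

def policy_stance(nominal_rate, inflation, natural_real_rate):
    real_rate = nominal_rate - inflation
    gap = real_rate - natural_real_rate
    verdict = "too tight" if gap > 0 else "too easy" if gap < 0 else "about right"
    return real_rate, verdict

scenarios = [
    ("boom, 4% nominal rate",       0.04, 0.03,  0.02),
    ("deep slump, 0% nominal rate", 0.00, 0.01, -0.02),
]
for name, i, inf, r_star in scenarios:
    real_rate, verdict = policy_stance(i, inf, r_star)
    print(f"{name}: real rate {real_rate:+.1%} vs. natural rate {r_star:+.1%} -> {verdict}")

# A relatively high nominal rate can still be easy money, and a zero nominal rate can
# still be tight money; what matters is the real rate against the (estimated) natural rate.
```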

In fact, estimates of this rate have suggested that in recent years, interest rates should have gone even lower, despite being at zero. But how could the equilibrium rate be so low? In one interpretation, the natural rate of interest is the rate that equilibrates real saving, or underconsumption, with real investment. And consider that as a result of the recession, net private saving surged while net (domestic) investment fell, producing the considerable saving-investment gap shown below:

[Figure: net private saving and net domestic investment]

This notion, loosely illustrated above, has important implications for Fed policy. The tendency of people (in reality, mostly firms) to increase money balances by sequestering more income as uninvested savings will, left unchecked, contract spending (and, by identity, income), which will in turn contract real output given the nominal rigidities which are undoubtedly present in the economy. And given the elevated propensity to build money reserves amidst the uncertainty in the wake of the financial crisis and a struggling economy, the Fed should accommodate such balance sheet deleveraging by expanding the monetary base.

And it has, dramatically. But at the zero lower bound, there is difficulty in making these efforts fully efficacious because nominal rates cannot go lower to deter excessive saving and encourage investment. That is why some argue that price inflation should have gone higher during the recovery to push real interest rates lower and support this process–of course, until the stronger economy raises the equilibrium rate back up again. (If that sounds too artificial, just interpret it as compensating for the lower inflation that was experienced due to the recession.)

Because ultimately, it is this saving-investment relation which lies at the heart of most economic fluctuations. Just consider its close correspondence to the gap between realized and potential GDP:

[Figure: the saving-investment gap alongside the gap between realized and potential GDP]

Now, contingent on how exactly the CBO estimates potential GDP, I should caution that its calculation could possibly produce some dependency by construction (I’m not familiar enough to be sure). Notwithstanding that, if there was ever any doubt that business cycles are largely a monetary phenomenon, this should clear it up.

In my view, Bernanke’s interpretation of current low interest rates likely gives too much weight to a long-term trend of excess global savings, as his follow-up post suggests. His chart of interest rates on 10-year Treasury bonds juxtaposed with CPI inflation omits the decline in the “term premium,” the compensation for inflation risk and other risks that rise with duration (more on that here). (Update: Bernanke added a new post on the term premium.) But with respect to the recent situation (which is improving: the economy made solid gains in 2014), the point saliently stands. Improving our understanding of just how important the saving-investment relationship is could surely serve, by informing the opinions of those who conduct monetary policy, as a contributor to greater economic stability.

Some Critical Synopsizing of Piketty

Here’s a new YouTube video presentation I put together, mostly taking a critical view of Thomas Piketty‘s Capital in the 21st Century. I include a list of helpful reviews of that book in the description of the video. HT to Larry Summers, Matt Rognlie, and Alan Reynolds for providing what I found to be, taken together, the most panoramic and consequential triad of commentary on the subject.

Gender Is Not Just a Social Construct

Anyone who has ever grazed the subject of sociology is surely familiar with the claim that “gender is a social construct.” That is, those contrasts we contemporarily observe between the social situations and behaviors of men and women are not rooted in innate differences in biological hardwiring, but are instead relics of longstanding social norms and institutions that are not intrinsic to our nature (or so it is claimed). While there is invariably a role for nurture to play in all social circumstances, including gender differences, it seems to me that the strong view of “gender as [almost entirely] a social construct” is naïve to the rudimentary insights of evolutionary biology.

Consider Bateman’s principle. This is the well-established biological observation that sexually reproducing species exhibit different variances in reproductive rates within each sex, resulting from the necessity of prenatal development of offspring in one sex and not the other. In most species, reproductive success (RS) among males varies more widely than it does among females, as a consequence of the female’s role in carrying pregnancies. Whereas a female’s reproduction is bounded by physiological constraints on the number of her gestations, a male’s reproduction is bounded in no such way, and is limited only by the number of females he can impregnate.

Naturally, this leads to more competition among the males of a species for the limited reproductive capacities of their female counterparts, seeing as a successful male can potentially father offspring with many more females than a female can with different males. These differences in RS variance between the sexes lead evolution to naturally select on them in dimorphic ways. For instance, while females are comparatively rewarded for caring for their limited number of offspring, males are comparatively rewarded for aggressive and promiscuous behavior which gets them reproductive access to a larger number of females.

What these dimorphic pressures entail is that the males of a species can be predicted to surpass females in the expression of those attributes which are more intensively selected upon by higher male RS variance. Conversely, females can be expected to outperform males on those traits for which females are comparatively more selected upon. And in reality, this is what we observe. Consider:

  • Women perform cognitively better in measures related to social intelligence
  • Women are frequently targeted by microfinance initiatives in developing countries because they are more reliable in spending resources on their children than men

Many of these characteristics epitomize gender stereotypes with which we are all too familiar. While such differences do not by themselves prove a natural origin, the fact that they are so consistent with what could be predicted from dimorphic selection pressures, resulting from sex differences in RS variance, is unlikely to be a mere coincidence.

In fact, a relevant implication could be inferred from this sexual dimorphism vis-à-vis social hierarchy. For one, women are inclined to seek quality men who are better situated to provide for their offspring; what’s more, men are inclined to situate themselves where they can more readily acquire reproductive access to women. These both suggest that men may have a comparatively greater incentive to pursue positions of high social rank than women do. In other words, Bateman’s principle could suggest that males will disproportionately occupy the upper echelons of social hierarchies.

To be sure, this is not always the case. There is plenty of nuance to be had in how Bateman’s principle is observed in practice. Nonetheless, the principle itself of divergent RS variances remains as a very consequential observation in biology. While we should not rule out that male preponderance in leadership roles could have some antecedents that are socially constructed, it is of interest to recognize that such male preponderance is not inconsistent with a natural explanation deduced from basic evolutionary biology. Actually, it should be stressed that natural versus social constructs ought not be dealt with as if they are bifurcated processes acting independently of one another. Natural preconditions undoubtedly play an important role in creating complementary social norms and institutions! More subtly, these social constructs may even exert selective pressures on the population over time.

So what should our takeaway be from all of this? When it comes to drawing political or cultural implications from what we conclude about the originators of gender differences, we must distinguish between the positive (what is) and the normative (what ought to be). If certain gender differences have a significant natural component, and if this has precipitated a preponderance of men in positions of power, does this mean that men should exclusively dominate in these roles? Of course not!

Nature is nothing to idolize. Evolution has shaped us not with concern for our happiness or suffering, but for our fitness for survival and reproduction. What is remarkable about our situation as an intelligent species is that we can reject our nature, using social constructs to realign our own behaviors and environment to fit our utilitarian interests instead of our evolutionary directives.

The problem with the incessant faulting of social constructs for most social problems is that it sometimes presumes that humanity’s natural state, absent these constructs, is a purer form of good. I am more inclined to believe something of the opposite sort: that a human in the state of nature is perhaps not evil, but interested in oneself and one’s kin to the point where others’ interests may be violently disregarded when expedient.

Thank goodness for social constructs which have been working to eradicate this cruel disrespect for the interests of others. Thank goodness for social constructs which have been working to curtail the incidence of such evils as violence and rape. Thank goodness that agrarian economies are finally going by the wayside, so that male economic dominion from superior physical strength can give way to more gender egalitarianism (among other blessings of industrialization). And thank goodness for the social construct of capitalism, which has rerouted self-interests to become mutual, and in doing so has liberated billions of human beings from their natural state of poverty.

Above all, thank goodness for social constructs which suppress the worse parts of our nature which are at odds with an enlightened understanding of what is morally decent and utilitarian. When it comes to something like gender disparities and the lament thereof, let us not perceive them merely as malicious social constructs which must be banished in the interest of an otherwise sensible and egalitarian nature; rather, let us recognize them more as the manifestations of our natural predispositions, the ill parts of which only well-devised social constructs can properly redress.

Murder Is Not Inherently Irrational

In the wake of the latest multiple victim homicidal rampage, the media are abuzz with social discourse analyzing the Isla Vista killer’s problems and pontificating about solutions for eradicating such violent tragedies as the one his unfortunate life culminated in.

Among the usual suspects for scapegoating is “mental illness,” with the phrase nearly serving as a stand-in for any twisted bundle of personal problems or motivations which are not readily understood by us mentally healthy people. What follows here is not to discount that some forms of mental illness may play a role in fomenting instances of multiple victim homicides. Sure, it is easy to recognize in his autobiographical “manifesto” that he had developed a number of psychological issues, and these problems may or may not qualify as “mental illness.” Rather, my concern is that too many people are faulting “mental illness” in a manner that serves to obscure, rather than elucidate, the real factors which underlie the violent outbursts of famed perpetrators of mass murder.

Identifying the problem of mass killings merely as one of “mental illness” is problematic because it can reinforce faulty characterizations of mass murderers and their motivating personal predicaments. An example of one such faulty characterization comes from internet writer and satirist Maddox:

100% of gun massacres occur by people with mental illness. If you disagree with that statement, be prepared to make the case that there are some rational, cool-headed people who, after thinking clearly and weighing the pros and cons, decide to commit mass killings. There aren’t…Trying to rationalize an irrational act is futile. Rational people don’t go on shooting rampages.

This is wrong. Rational murder is not an oxymoron. And no, I am not making an argument about homicides justified in self-defense. Morally just is not synonymous with rational, nor is rational synonymous with empathetically sensible. Murder implies immoral killing, but it does not entail irrational killing. Murder is not inherently irrational.

Rational behavior refers to action which is consistent with reasonable expectations of desired outcomes. For example, if an actor has goal A, and sound reasoning suggests that action X is the best means of pursuing goal A, then taking action X implies that the actor is behaving rationally. If, however, the actor takes action Y (for which there is not good reason to believe that it will result in goal A, at least not nearly as well as action X), then the actor can be said to behave irrationally.

Whether an action counts as rational or irrational depends on how well it accords with desired goals, given reasonable expectations about what results the action and its alternatives will produce. Therefore, it cannot be argued that murder is by definition irrational, for it may very well be the case that the outcome of the murder(s) can be reasonably expected to satisfy the desired ends of the murderer. The rationality of the act is irrespective of our judgment of its moral sensibility.

The hasty mischaracterization of all multiple victim public homicides as having been perpetrated irrationally is related to the common perception that the perpetrators suffer from mental illness which clouds their better judgment. We, the mentally healthy ones, would like to believe so. We find comfort in the notion that “rational, cool-headed people” cannot possibly rationalize the slaughter of other innocent human beings.

But we would be wrong. For one, various governments throughout recent history have murdered many millions of human beings. Can they be said to have behaved irrationally? Quite the contrary—it was all too often the result of rationally devised actions taken to promote such (immoral) objectives as ethnic cleansing and exterminating political dissidents. And despite our popular caricature of them as wailing fanatics who have lost their sense of reason, terrorists are known to rationally target civilians with particular sociopolitical objectives in mind, which they reason their acts of terror to be the optimal means for achieving.

Those who commit multiple victim public homicides are not so different. It is not that they have lost their sense of rationality, so much as it is that they have lost their sense of empathy. Innate respect for human life does not originate from rationality itself, but rather from empathetic concern for the feelings and status of other human beings (from which moral rights and responsibilities may be rationally deduced). One can find rational reason to commit violence against others if he lacks this empathy, and perhaps harbors a disregard or disdain for human life. In this recent case, it was his personal offense at being shunned by attractive women which his killing spree was intended to avenge.

Some murders may be irrationally committed. Occasionally, it may be that a person loses his temper and impulsively kills another, in spite of the fact that more deliberate contemplation would have instructed him not to do so. But it is difficult to postulate such irrationality for many of these high-profile spree killers, including this recent one, who reasoned and deliberated their murders thoroughly and well in advance, ostensibly in accordance with their own desired ends.

This is not to say that the killers were always rational. For example, this recent one sought the high financial rewards of the lottery as a plausible salvation from his womanless existence. Unless he found gambling inherently valuable, or had a fetish for minutely probable but high-reward risk taking, his lottery purchases were, by any reasonable assessment, irrational efforts toward his broader pursuit of females (he unsoundly believed that he had a good chance of winning). However, this does little to diminish the apparent rationality of his “retribution,” which seemed very much in keeping with his own professed goal of revenge.

Mental illness is troublesome, but not in the way that most discussions surrounding mass shootings would portend. Research shows that not only are most of the mentally ill nonviolent, but the vast majority of those who commit violence do not suffer from mental illness. And for what it’s worth, those who are mentally ill are more likely to be the victims of violence than the perpetrators. The inaccurate portrayal of the mentally ill in our media as predominantly violent has fueled great misconceptions about the nature of mental illness and violence.

The problem, then, is not that mental illness itself is violent, for it largely is not. Indeed, it is telling that both the Sandy Hook and Isla Vista killers were (purportedly) afflicted with Asperger’s Syndrome. Insofar as this is not merely a coincidence, the problem is not that those with Asperger’s are prone to violence, but rather that they suffer disproportionately from social disabilities, which in rare cases can lead to extreme instances of violence.

In other words, the problem with the killer in Isla Vista is not that his mental condition prompted his loss of rationality, but that his narcissistic pursuits were impeded by his social disabilities, which in turn prompted his anger and resentment of others. And as we should know, anger is essentially the primary instigator of interpersonal violence, mental illness or not.

Conventional presumption of mental illness in the wake of multiple victim public homicides does more to obfuscate than to enlighten us about the nature of the perpetrators and their motivations. There is a distinction to be drawn between rationality and empathy, and assumptions of mental illness have too often fed the impression that the former is what gets corrupted, or that the two are inseparable when they are not. In reality, the problem of most mass murderers is not one of irrationality, but rather the corruption of empathy and of the moral rights and responsibilities that follow from it. However much we, as empathetic beings, rightfully regard murder as vile, detestable, and utterly immoral, we should make no mistake that it can very well be a rational act.