
Financial Acceleration and the Paradox of Flexibility

Macroeconomic models are rigorous, but simple. While their simplicity may lend easier interpretation to otherwise opaque economic phenomena, they often fail to capture key microfoundational realities that make the actual economy behave differently than the models would predict. And for some time, one of the features most crucially lacking from these models was any incorporation of the real-world frictions posed by financial intermediation.

Toward this end, a “financial accelerator” mechanism was adapted into the canon of modern macroeconomics, most notably by economists Ben Bernanke, Mark Gertler, and Simon Gilchrist in 1996. The financial accelerator is meant to make sense out of the recurrent observation that relatively large shocks to the economy seem to originate from proportionally smaller perturbations in the financial markets—hence the need for some sort of acceleration process which magnifies financial fluctuations into broader macroeconomic volatility.

The proposed financial accelerator model works like this: First, there is a change in asset values. This change affects borrowers’ cost of external finance because, in a world of asymmetric information, lenders risk losing their money to default when loans are not fully collateralized, so a risk premium must be charged to the extent that collateral is unavailable. So when the value of assets available as collateral declines, the cost of external financing rises, which depresses spending in the economy, which imparts a further decline in asset values, and so on in an adverse spiral.
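The feedback loop just described can be sketched in a few lines of Python. To be clear, this is a toy illustration, not the formal Bernanke-Gertler-Gilchrist model; every parameter and functional form below is an invented assumption chosen only to make the amplification visible.

```python
# Toy sketch of the financial accelerator feedback loop.
# Parameters are illustrative assumptions, not estimates.

def accelerator(asset_value, rounds=5, premium_sensitivity=0.5,
                spending_sensitivity=0.3, feedback=0.4):
    """Trace how an initial fall in collateral values propagates."""
    path = [asset_value]
    for _ in range(rounds):
        # Less collateral (relative to a normalized benchmark of 1.0)
        # means a higher external finance premium...
        collateral_shortfall = 1.0 - path[-1]
        premium = premium_sensitivity * collateral_shortfall
        # ...which depresses spending, which pushes asset values
        # down further, closing the loop.
        spending_drop = spending_sensitivity * premium
        path.append(path[-1] - feedback * spending_drop)
    return path

# A 10% initial decline in asset values keeps feeding on itself:
# with these parameters, each round's shortfall from the benchmark
# grows by the loop gain (here 6% per round).
path = accelerator(0.90)
```

The point of the exercise is just that a modest initial perturbation, run through the collateral channel repeatedly, produces a cumulative decline larger than the original shock.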

This concept, although not formalized until fairly recently, was not entirely unfathomed by earlier thinkers. In 1933, economist Irving Fisher articulated the related concept of “debt deflation” to explain the severity of the Great Depression. His idea was that in a state of over-indebtedness, distress selling leads to falling asset prices. Set against nominally-fixed debt obligations, those falling prices erode net worth, and spending contracts as borrowing is constrained and households save to repair their balance sheets; meanwhile, profits fall as price deflation presses against debt service costs. The result is a precipitous decline in output, a vicious cycle that worsens until enough debtors default that the economy can start to recover.
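At bottom, Fisher's mechanism rests on simple arithmetic: debts are fixed in nominal terms, so a falling price level raises their real burden. A quick illustration (all numbers are hypothetical):

```python
# Fisher's point in arithmetic: nominal debts are fixed by contract,
# so deflation raises the real burden of debt. Numbers are invented.

nominal_debt = 100_000          # fixed in dollar terms
price_level_start = 1.00
price_level_after = 0.80        # a 20% deflation

real_debt_start = nominal_debt / price_level_start   # 100,000
real_debt_after = nominal_debt / price_level_after   # 125,000

# The debtor owes the same number of dollars, but each dollar now
# commands 25% more real resources: the squeeze Fisher described.
increase = real_debt_after / real_debt_start - 1      # 0.25
```

A 20% fall in the price level thus raises the real debt burden by a full quarter, even before any distress selling or defaults enter the picture.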

To be clear, the argument is not that any increased (decreased) propensity to save (borrow) is inherently recessionary or prone to trigger such a spiral. Interest rates are the key price mechanism that can adjust up or down to equilibrate saving and borrowing; it is when interest rates are not allowed to fall enough, or they are at the zero lower bound (ZLB) and can go no lower, that an indebted, deflationary spiral can produce a recession in output. And it is this ZLB—or otherwise binding floor on interest rates—situation which has inspired a more recent assertion about the nature of aggregate demand in such a predicament.

Recall that in a New Keynesian economic framework, the aggregate demand (AD) curve represents the quantity of real output demanded at each price level. Likewise, the aggregate supply (AS) curve represents the quantity of real output supplied at each price level.


Unlike micro-level demand curves, which slope downward mainly due to micro-level substitution effects (but also income effects), the AD curve is conventionally downward sloping for three reasons: (1) the real balances effect, which is the wealth effect in cash—i.e. people spend more when they have more, as well as the inverse, and the price level determines the real value of this cash; (2) the interest rate effect, which is that a higher price level reduces the real value of savings, which raises interest rates and depresses spending (less borrowing, more saving), or the inverse; (3) the exchange rate effect, which is how a higher price level makes domestic goods more expensive and foreign goods cheaper, thus reducing exports and raising imports to decrease GDP (or the inverse).

In reality, it is the second effect—the interest rate effect—which is the most quantitatively significant for an economy like the US. And it is when interest rates get stuck at zero that New Keynesian economists Paul Krugman and Gauti Eggertsson argue that the AD curve becomes upward sloping due to the practical inability of nominal interest rates to be lowered when they are at zero, and the increasing burden of nominally-fixed debt as price deflation ensues. This is most likely the case, Krugman argues, given that the real balances effect on spending (from increased value of cash via deflation) is not very big compared to the effect of real interest rates from deflation at the ZLB.
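The mechanics here come down to the Fisher relation, real rate = nominal rate minus expected inflation. A quick sketch of why the zero bound bites (the rates used are illustrative):

```python
def real_rate(nominal_rate, expected_inflation):
    """Fisher relation (approximation): r = i - expected inflation."""
    return nominal_rate - expected_inflation

# Normal times: i = 4%, expected inflation = 2% -> real rate of 2%.
normal = real_rate(0.04, 0.02)

# At the ZLB, i is stuck at zero. If expected deflation of 2% sets in,
# the real rate is again +2%: no lower than in normal times, even
# though the depressed economy needs it far lower. Deeper deflation
# only pushes the real rate higher still.
zlb_deflation = real_rate(0.00, -0.02)
```

This is the sense in which deflation at the zero bound is perverse: the price mechanism that should be stimulating spending instead tightens it.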


Note that this upward sloping AD curve would not apply when recovery inflation is expected to compensate for the below-trend deflation, for that would have the effect of lowering expected real interest rates. But when lower inflation or deflation manifests as a permanent reduction in the price level, interest rates cannot go low enough to circumvent the debt deflation problem. Since the high real interest rates (i.e. lower prices in the future) encourage creditors to save the payments they receive, a spiral of self-fulfilling deflationary hoarding is the result, leading to a deep and prolonged recessionary slump.

This is the thrust of Krugman and Eggertsson’s 2010 paper titled “Debt, Deleveraging, and the Liquidity Trap.” The putatively backward-bending AD curve in the floor-bound interest rate environment leads to some counterintuitive conclusions about supply-side phenomena. This includes the “paradox of flexibility,” which suggests that removing downward nominal rigidity would actually make the recession worse because it would raise real interest rates and the real value of debt. (You thought Keynesians were all about sticky wages? Not so fast!) It also includes the “paradox of toil,” which suggests that an increased propensity to work (represented by an outward shift of AS) would actually lead to less employment for basically the same reason.

Broadly interpreted, the unconventional, upward sloping AD curve implies that positive supply shocks reduce output and negative supply shocks raise output. Both of these paradoxes are supplements to the already well-known “paradox of thrift,” which suggests that attempts to save in a ZLB environment (or alternatively, a “liquidity trap“) depress income and actually lead to less saving in the aggregate.
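The paradox of thrift can be seen in a bare-bones Keynesian cross, with consumption C = a + cY and investment held fixed (all parameters below are invented for illustration, not taken from Krugman and Eggertsson's model):

```python
# Minimal Keynesian-cross sketch of the paradox of thrift.
# C = a + c*Y, investment I fixed; equilibrium where Y = C + I.

def equilibrium_income(autonomous_c, mpc, investment):
    """Solve Y = a + mpc*Y + I for Y."""
    return (autonomous_c + investment) / (1 - mpc)

def realized_saving(autonomous_c, mpc, investment):
    y = equilibrium_income(autonomous_c, mpc, investment)
    return y - (autonomous_c + mpc * y)

I = 100
before = realized_saving(autonomous_c=50, mpc=0.8, investment=I)
# Households try to save 20 more (autonomous consumption falls to 30):
after = realized_saving(autonomous_c=30, mpc=0.8, investment=I)
# Income falls by 100, yet aggregate saving is unchanged, still pinned
# to investment. The attempt to save more only shrank income.
```

With investment fixed, realized saving is pinned at I = 100 before and after; the only thing the extra thrift accomplishes is a fall in equilibrium income from 750 to 650.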

Krugman and Eggertsson’s model has led them to suggest that wage cuts, oil shocks, and the like may not have the effect in liquidity trap conditions that they are usually believed to have. This is a modern formulation—one that Keynes did not apparently believe when he insisted that FDR’s fixing of prices and wages under NIRA “probably impedes Recovery.” (However, Keynes had also argued that aggregate wage cuts would not help.) The backward-bending AD curve would imply that adverse restrictions on supply—downward nominal wage rigidity, higher commodity prices, etc.—are unusually expansionary under these special economic circumstances.

So, are Krugman and Eggertsson right? Is aggregate demand upward sloping under debt deflation and floor-bound interest rates, with all its implications for the supply side? I am doubtful. This may be one of those cases, as in the 1970s with inflation expectations, in which a Keynesian model comes up short for an inadequate account of microfoundations. Several factors come to mind that contravene their conclusion:

1) Wage cuts could serve as an alternative to unemployment. Obviously this could do more to preserve output on the micro level, but on the macro level? Assuming firms’ labor demand schedules are sufficiently elastic, it could also help preserve output on the macro level insofar as it improves total labor income through less unemployment, which would promote consumer spending.

2) Debt is burdensome in proportion to income, not prices. And if wage flexibility promotes income, it would effectively ease the aggregate debt burden. Sure, there may be fewer defaults to discharge debt with shared cuts instead of acute unemployment, but to the extent that households are risk-averse, they would feel less pressure to deleverage, and lenders would have less need to assign a higher risk premium associated with potential default if wage cuts mitigate unemployment.

(In addition to high oil prices and interest rates, an unemployment shock could help explain that “Minsky moment” which effectively lowered the ceiling on acceptable leverage.)

3) Krugman and Eggertsson do not effectively model investment or its returns. Wage flexibility could raise the marginal efficiency of capital, to the extent that a higher payroll keeps capital producing output, and wage flexibility improves employers’ (~capital owners’) profit margins on output. Because asset prices are derived from discounted returns to capital, this could mitigate asset price deflation and actually dampen the financial accelerator mechanism.
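Point (1) above hinges on the elasticity of labor demand, and the condition is easy to make precise. With a constant-elasticity demand curve (an illustrative functional form of my choosing, not one from the paper), total labor income w·L(w) rises when wages are cut exactly when demand is elastic:

```python
# With labor demand L = A * w**(-eps), total labor income is
# w * L = A * w**(1 - eps): a wage cut raises total pay iff eps > 1.
# The functional form and parameters are illustrative assumptions.

def labor_income(wage, eps, A=1000.0):
    labor_demanded = A * wage ** (-eps)
    return wage * labor_demanded

# Elastic demand (eps = 1.5): a 10% wage cut raises total labor income.
elastic = labor_income(0.9, eps=1.5) / labor_income(1.0, eps=1.5)

# Inelastic demand (eps = 0.5): the same cut lowers it.
inelastic = labor_income(0.9, eps=0.5) / labor_income(1.0, eps=0.5)
```

So whether wage flexibility supports aggregate spending through this channel is an empirical question about the elasticity, which is exactly the ambiguity acknowledged below.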

Of course, it’s possible that the effect of higher real interest rates from deflation amidst debt overhang could be greater than the effects I suggest. It could be the case that firm-level labor demand is not elastic enough in partial equilibrium, that household balance sheet behavior is not that risk-averse, that profit margins are not much improved by flexibility, and that higher real interest rates are the bigger factor. But the a priori reasoning here only suggests ambiguity, and the positive economic reality can only be ascertained through empirical evidence. We do not know that AD is upward sloping just because interest rates are stuck, real balance effects are small, and debt burdens abound.

What would answer the question is evidence on the impact of supply-side phenomena during periods when interest rates are floor-bound (i.e. supply shocks in a liquidity trap). To that end, here is the historical data showing inverted real wages and output during the Great Depression:


As Scott Sumner explains, a huge part of the variation in output during the Great Depression is explained by real wages. When wages were hiked, economic growth stalled—especially in 1937-38 after the Wagner Act had granted significant power for unions to coerce wage hikes. Assuming the Great Depression qualifies as having liquidity trap conditions that could alter the slope of AD (which some may dispute, but Krugman would think so), this is powerful evidence against the view that negative supply shocks are expansionary, and so maybe AD does not readily become upward sloping after all.

So until Krugman, Eggertsson, or anyone else can provide a survey of historical evidence suggesting that their upward sloping AD model comes true in practice, it should be safe to assume it does not. And in the meantime, positive supply shocks and wage/price flexibility should be welcomed. It’s true that downward rigidity is, to a great extent, an inexorable reality of the market; moreover, price stickiness in moderation may have some secular benefits. Notwithstanding that, price flexibility is a lubricant necessary to make markets function well, and public policy should not encourage more rigidity through suboptimal safety net and minimum wage policies, and the like.

But one thing is made clearer by this framework: While debt deflation may not invert the AD curve, it plausibly steepens it, and thus flexibility in wages is somewhat less helpful than it otherwise would be. (It follows from Bayesian reasoning.) This raises the relative importance of having a demand-side price level reversion rule in monetary policy. I prefer nominal GDP level targeting, for the way it responds to supply shocks compared to pure price level targeting. (Unlike Krugman, I think it’s ultimately about income rather than inflation, as this post should have made clear.)

Indeed, nominal reversion after a recessionary shock may be the single most important reform that could prevent severe demand-side recessions from ever taking place again. Regardless, we should still seek to enable more flexibility in these situations, because history suggests it would serve to attenuate economic shocks in the most unparadoxical way.

Epistemology of the Iraq War

An interesting development in the Republican presidential field over the past week or so has been the hasty emergence of a consensus, following Jeb Bush’s interview fumble with Megyn Kelly, that the invasion of Iraq was retrospectively a mistake, having been predicated on faulty intelligence.

For Iraq War cynics with an interest in the Republican Party, this is a positive development. But it is also a rather superficial one, in that it does not go so far as to reassess the decision rules which are used to implement foreign policy in a world of limited information.

Consider the faulty intelligence. With respect to the 2003 invasion, it is not enough to note that the intelligence failed regarding WMDs, for the possibility of intelligence failure is something which must be considered a priori. On the one hand, intelligence which suggests the existence of an active WMD program in Iraq could be a false positive; alternatively, intelligence suggesting there is no ongoing WMD effort may be a false negative. This potential for intelligence failure attenuates the correspondence of intelligence conclusions with real-world truth, and so any concomitant foreign policy decision needs to be humbled by this.
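This a priori accounting for intelligence failure is, in effect, an exercise in Bayesian updating: even fairly accurate intelligence leaves substantial room for error, so a decision should be weighted by the posterior probability, not the raw report. A sketch of the arithmetic (every probability here is invented purely for illustration):

```python
# Bayes' rule applied to an intelligence report. All probabilities
# are hypothetical, chosen only to illustrate the reasoning.

def posterior(prior, true_positive_rate, false_positive_rate):
    """P(active WMD program | intelligence reports one)."""
    evidence = (true_positive_rate * prior
                + false_positive_rate * (1 - prior))
    return true_positive_rate * prior / evidence

# Suppose a 30% prior, and intelligence that catches 80% of real
# programs but also false-alarms 20% of the time:
p = posterior(prior=0.30, true_positive_rate=0.80,
              false_positive_rate=0.20)
# The posterior comes to roughly 0.63: far from certainty.
```

The lesson is not any particular number, but that the gap between "the intelligence says so" and "it is so" is quantifiable, and decision rules ought to be built on the latter.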

Similar considerations follow from other factors. For one, the Iraq War has testified to the reality that nation-building is laden with pitfalls, especially in such a volatile and fractious region of the world. Another is the potential for the incoherence of future foreign policy decisions to diminish the benefits of the invasion. Examples arguably include (1) the Bush Administration’s 2003 refusal of negotiations with Iran, (2) NATO’s 2011 air campaign against the Libyan regime which had relinquished its WMD program months after the Iraq invasion, and (3) the Obama Administration’s failure to extend a Status of Forces Agreement, unlike what the Bush Administration had anticipated.

Considering all the ways in which the hypothesized net benefits of an invasion and occupation of this sort are easily depreciated, the real question that ought to be asked is not what Megyn Kelly asked Jeb Bush about “what we know now.” It’s about the decision rules that, in hindsight, ought to have been applied given what was believed then, because in a world of limited information, we cannot know in advance everything we will later come to know and wish we had known.

Instead of justifying the invasion because “the intelligence says active WMD program,” the possibility of erroneous intelligence deserved stronger consideration then.

Instead of justifying the invasion because “it will promote non-proliferation of WMDs,” the possibility of this benefit being stymied by future foreign policy decisions deserved stronger consideration then.

Instead of justifying the invasion because “we can maintain a troop presence for long enough,” the possibility of troops being withdrawn out of political fatigue deserved stronger consideration then.

Now, it’s possible that these objections were considered at the time, and they ultimately did not weigh enough against the factors which motivated the Bush Administration to wage war in Iraq. And indeed, it is in part the experience of the past twelve years that has persuaded me to conclude with as much confidence as I have that these merited more consideration then. (Of course, let’s not forget there were those who had enough foresight to have disapproved of the war before it unfolded.)

The crucial distinction here is between particulars and priors. It may seem subtle, but it’s important. You see, it is not enough to say that in reflection, the particular prevailing beliefs in 2002 were in error. Those who steer foreign policy need to go beyond that to updating their priors, such that the same particulars being confronted today would be processed through new criteria to meet an updated decision rule.

Ultimately, it is the experience of the Iraq War which suggests that the decision made in 2002 was quite arguably inappropriate, not just based on today’s knowledge, but based on the information promulgated back in 2002. It’s not that the intelligence just so happened to be wrong; it’s that the intelligence had the potential for error, nation-building in the sectarian Middle East is an inherently arduous exercise, and so on. Learning from this experience compels reasonable people to conclude that, through the lens of our updated prior beliefs of the world and revised decision rules for making war, a different decision today based solely on information resembling what was believed then could very well be preferable.

And it is this consequential point which most Republican candidates are failing to address.


The Consequential Saving-Investment Gap

Now that he’s retired from his eight-year stint as chairman of the Federal Reserve, Ben Bernanke is working as a “Distinguished Fellow” at the Brookings Institution, and has recently started his own blog on their website. Regardless of what you may think about monetary policy under his tenure as Fed chairman (I know I have my reservations), his recent post, “Why are interest rates so low?” does clear up some common misconceptions about interest rates and the interpretation thereof relevant to the stance of monetary policy. Here’s Bernanke:

If you asked the person in the street, “Why are interest rates so low?”, he or she would likely answer that the Fed is keeping them low. That’s true only in a very narrow sense. The Fed does…set the benchmark nominal short-term interest rate. The Fed’s policies are also the primary determinant of inflation…and inflation trends affect interest rates…The Fed’s ability to affect real [inflation-adjusted] rates of return, especially longer-term real rates, is transitory and limited. Except in the short run, real interest rates are determined by a wide range of economic factors, including prospects for economic growth—not by the Fed.

To understand why this is so, it helps to introduce the concept of the equilibrium [Wicksellian] real interest rate…the real interest rate consistent with full employment of labor and capital resources, perhaps after some period of adjustment. Many factors affect the equilibrium rate, which can and does change over time. In a rapidly growing, dynamic economy, we would expect the equilibrium interest rate to be high, all else equal, reflecting the high prospective return on capital investments. In a slowly growing or recessionary economy, the equilibrium real rate is likely to be low, since investment opportunities are limited and relatively unprofitable. Government spending and taxation policies also affect the equilibrium real rate…because government borrowing diverts savings away from private investment.

If the Fed wants to see full employment of capital and labor resources…its task amounts to…push[ing] those rates toward levels consistent with…its best estimate of the equilibrium rate…If the Fed were to try to keep market rates persistently too high, relative to the equilibrium rate, the economy would slow (perhaps falling into recession), because capital investments (and other long-lived purchases, like consumer durables) are unattractive when the cost of borrowing set by the Fed exceeds the potential return on those investments. Similarly, if the Fed were to push market rates too low…the economy would eventually overheat, leading to inflation…The bottom line is that the state of the economy…ultimately determines the real rate of return attainable by savers and investors. The Fed influences market rates but not in an unconstrained way…

This sounds very textbook-y, but failure to understand this point has led to some confused critiques of Fed policy.

Exactly right. When we casually examine where the Fed has set interest rates over time, it’s important to recognize that the Fed does not set these rates in a vacuum. The optimal interest rate is not one that necessarily remains steady over time, but can and does fluctuate based on economic conditions. In a weaker economy, lower investment demand and an elevated propensity to save can naturally be expected to lower the equilibrium interest rate. But because the Fed targets a key interest rate—the federal funds rate, an overnight interbank lending rate—to mediate its provision of the monetary base, it must make deliberate adjustments to its own interest rate target. Given that the Fed has control over the monetary base, it must influence short-term interest rates one way or another. It does not make much conceptual sense for an extant Fed to “do nothing” when, by its very existence, it is doing something to the monetary base (and, by proximity, interest rates) even if that means not changing it at all. (It wouldn’t make sense for a state-run oil monopoly with market power to “do nothing” with respect to oil prices in its market, would it?)

This means that as the Fed targets a lower and lower interest rate during a recession, it is not necessarily the case that the Fed is pushing interest rates too low. Interest rates could be too low, too high, or just right, but we should expect them to fall in that situation. (And vice versa for raising interest rates in a recovering economy.) Now, the fact that the Fed has to make certain adjustments may suggest that it has previously erred with respect to its interest rate decisions and is playing catch-up (for instance, by leaving interest rates too low and then hiking them swiftly to belatedly fight off inflation). But the bottom line is that just because the nominal interest rate is historically low, or even zero, does not necessarily mean that the real interest rate is too low relative to the equilibrium/natural/Wicksellian interest rate.

In fact, estimations of this rate have suggested that during recent years, interest rates should have gone even lower, despite being at zero. But how could it be so low? In one interpretation, the natural rate of interest is that which equilibrates real savings, or underconsumption, with real investment. And consider that as a result of the recession, net private savings surged while net (domestic) investment fell, prompting the considerable saving-investment gap seen on the right:
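With linear (and purely illustrative) saving and investment schedules, it is easy to see how a saving surge combined with an investment collapse can push the market-clearing rate below zero:

```python
# Linear sketch of the Wicksellian story: the natural rate r* clears
# desired saving S(r) = s0 + s1*r against desired investment
# I(r) = i0 - i1*r. Every parameter is an illustrative assumption.

def natural_rate(s0, i0, s1=10.0, i1=10.0):
    """Solve s0 + s1*r = i0 - i1*r for r (in percent)."""
    return (i0 - s0) / (s1 + i1)

# Normal times: modest desired saving, healthy investment demand.
r_normal = natural_rate(s0=20, i0=80)

# Post-crisis: saving surges while investment demand collapses, so
# the market-clearing rate turns negative, below the zero floor.
r_crisis = natural_rate(s0=60, i0=40)
```

When the solution for r* falls below zero while the nominal rate cannot follow, the gap between desired saving and investment goes unresolved by the price mechanism, which is precisely the predicament described here.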


This notion, loosely illustrated above, has important implications for Fed policy. The tendency of people (in reality, mostly firms) to increase money balances by sequestering more income as uninvested savings will, left unchecked, contract spending (and, by identity, income), which will in turn contract real output given the nominal rigidities which are undoubtedly present in the economy. And given the elevated propensity to build money reserves amidst the uncertainty in the wake of the financial crisis and a struggling economy, the Fed should accommodate such balance sheet deleveraging by expanding the monetary base.

And it has, dramatically. But at the zero lower bound, there is difficulty in making these efforts fully efficacious because nominal rates cannot go lower to deter excessive saving and encourage investment. That is why some argue that price inflation should have gone higher during the recovery to push real interest rates lower and support this process—of course, until the stronger economy raises the equilibrium rate back up again. (If that sounds too artificial, just interpret it as compensating for the lower inflation that was experienced due to the recession.)

Because ultimately, it is this saving-investment relation which is the harbinger of most economic fluctuations. Just consider its close correspondence to the gap between realized and potential GDP:


Now, contingent on how exactly the CBO estimates potential GDP, I should caution that its calculation could possibly produce some dependency by construction (I’m not familiar enough to be sure). Notwithstanding that, if there was ever any doubt that business cycles are largely a monetary phenomenon, this should clear it up.

In my view, Bernanke’s interpretation of current low interest rates likely gives too much weight to a long-term trend of excess global savings, as his follow-up post suggests. His chart of interest rates on 10-year Treasury bonds juxtaposed with CPI inflation omits the decline in the “term premium”—the compensation for inflation risk and other risks that rise with duration (more on that here). (Update: Bernanke added a new post on the term premium.) But with respect to the recent situation (which is improving—the economy made solid gains in 2014), the point saliently stands. Improving our understanding of just how important the saving-investment relationship is could surely serve, by informing the opinions of those who circumscribe monetary policy, to promote greater economic stability.

Some Critical Synopsizing of Piketty

Here’s a new YouTube video presentation I put together, mostly taking a critical view of Thomas Piketty‘s Capital in the 21st Century. I include a list of helpful reviews of that book in the description of the video. HT to Larry Summers, Matt Rognlie, and Alan Reynolds for providing what I found to be, taken together, the most panoramic and consequential triad of commentary on the subject.

Gender Is Not Just a Social Construct

Anyone who has ever grazed the subject of sociology is surely familiar with the claim that “gender is a social construct.” That is, those contrasts we contemporarily observe between the social situations and behaviors of men and women are not predicated by innate differences in biological hardwiring, but are instead relics of longstanding social norms and institutions that are not intrinsic to our nature (or so it is claimed). While there is invariably a role for nurture to play in all social circumstances, including gender differences, it seems to me that the strong view of “gender as [almost entirely] a social construct” is naïve to the rudimentary insights of evolutionary biology.

Consider Bateman’s principle. This is the well-established biological observation that sexually reproductive species exhibit different variances in reproduction rates within each sex, resulting from the necessity of prenatal development of offspring in one sex and not the other. In most species, reproductive success (RS) among males varies more widely than it does among females, as a consequence of the female’s role in carrying pregnancies. Whereas a female’s reproduction is bounded by physiological constraints on the number of her gestations, a male’s reproduction is bounded in no such way, and is limited only by how many females he can impregnate.

Naturally, this leads to more competition among the males of a species for the limited reproductive capacities of their female counterparts, seeing as a successful male can potentially father offspring with many more females than a female can with different males. Given these differences in RS variance between the sexes, evolution naturally selects on males and females in dimorphic ways. For instance, while females are comparatively rewarded for caring for their limited number of offspring, males are comparatively rewarded for aggressive and promiscuous behavior which gets them reproductive access to a larger number of females.
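The logic of Bateman's principle can be sketched in a quick Monte Carlo: every offspring has exactly one mother and one father, so mean RS is identical across the sexes by construction, but if paternity is skewed toward a few successful males, male RS variance dwarfs female RS variance. The distributional assumptions below are invented purely for illustration:

```python
# Monte Carlo sketch of Bateman's principle. Every offspring has one
# mother and one father, so mean reproductive success (RS) is equal
# across sexes; skewed paternity makes male RS variance much larger.
# The caps and skew weights are illustrative assumptions.

import random
import statistics

random.seed(0)
N = 500  # males and females each

# Each female bears 1-4 offspring (a physiological cap)...
offspring_counts_f = [random.randint(1, 4) for _ in range(N)]
total_offspring = sum(offspring_counts_f)

# ...while fathers are drawn with heavily skewed weights, so a few
# successful males sire a disproportionate share of offspring.
weights = [1.0 / (rank + 1) for rank in range(N)]
fathers = random.choices(range(N), weights=weights, k=total_offspring)
offspring_counts_m = [0] * N
for f in fathers:
    offspring_counts_m[f] += 1

var_female = statistics.pvariance(offspring_counts_f)
var_male = statistics.pvariance(offspring_counts_m)
```

Running this, male RS variance comes out far above female RS variance even though the totals (and hence the means) match exactly, which is the asymmetry the selection argument turns on.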

What these dimorphic pressures entail is that the males of a species can be predicted to surpass females in the expression of those attributes which are more intensively selected upon by higher male RS variance. Conversely, females can be expected to outperform males on those traits for which females are comparatively more selected upon. And in reality, this is what we observe. Consider:


  • Women perform cognitively better in measures related to social intelligence
  • Women are frequently targeted by microfinance initiatives in developing countries because they are more reliable in spending resources on their children than men

Many of these characteristics epitomize gender stereotypes with which we are all too familiar. While such differences are not self-proving to be of a natural origin, the fact that these are so consistent with what could be predicted from dimorphic selection pressures resultant of sex differences in RS variance is unlikely to be a mere coincidence.

In fact, a relevant implication could be inferred from this sexual dimorphism vis-à-vis social hierarchy. For one, women are inclined to seek quality men who are better situated to provide for their offspring; what’s more, men are inclined to situate themselves where they can more readily acquire reproductive access to women. These both suggest that men may have a comparatively greater incentive to pursue positions of high social rank than women do. In other words, Bateman’s principle could suggest that males will disproportionately occupy the upper echelons of social hierarchies.

To be sure, this is not always the case. There is plenty of nuance to be had in how Bateman’s principle is observed in practice. Nonetheless, the principle itself of divergent RS variances remains as a very consequential observation in biology. While we should not rule out that male preponderance in leadership roles could have some antecedents that are socially constructed, it is of interest to recognize that such male preponderance is not inconsistent with a natural explanation deduced from basic evolutionary biology. Actually, it should be stressed that natural versus social constructs ought not be dealt with as if they are bifurcated processes acting independently of one another. Natural preconditions undoubtedly play an important role in creating complementary social norms and institutions! More subtly, these social constructs may even exert selective pressures on the population over time.

So what should our takeaway be from all of this? When it comes to drawing political or cultural implications from what we conclude about the originators of gender differences, we must distinguish between the positive (what is) and the normative (what ought to be). If certain gender differences have a significant natural component, and if this has precipitated a preponderance of men in positions of power, does this mean that men should exclusively dominate in these roles? Of course not!

Nature is nothing to idolize. Evolution has shaped us not with concern for our happiness or suffering, but for our fitness for survival and reproduction. What is remarkable about our situation as an intelligent species is that we can reject our nature, using social constructs to realign our own behaviors and environment to fit our utilitarian interests instead of our evolutionary directives.

The problem with the incessant faulting of social constructs for most social problems is that it sometimes purports that humanity’s natural state absent these constructs is a purer form of good. I am more inclined to believe something of the opposite sort: that a human in the state of nature is perhaps not evil, but interested in oneself and those of kin to the point where others’ interests may be violently disposed of in expediency.

Thank goodness for social constructs which have been working to eradicate this cruel disrespect for the interests of others. Thank goodness for social constructs which have been working to curtail the incidence of such evils as violence and rape. Thank goodness that agrarian economies are finally going by the wayside, so that male economic dominion from superior physical strength can give way to more gender egalitarianism (among other blessings of industrialization). And thank goodness for the social construct of capitalism, which has rerouted self-interests to become mutual, and in doing so has liberated billions of human beings from their natural state of poverty.

Above all, thank goodness for social constructs which suppress the worse parts of our nature which are at odds with an enlightened understanding of what is morally decent and utilitarian. When it comes to something like gender disparities and the lament thereof, let us not perceive them merely as malicious social constructs which must be banished in the interest of an otherwise sensible and egalitarian nature; rather, let us recognize them more as the manifestations of our natural predispositions, the ill parts of which only well-devised social constructs can properly redress.

Progressives Beware: Don’t Make These Two Arguments Simultaneously

Recently making the rounds in the minimum wage debate is a working paper released late last December by UMass Amherst economist Arin Dube. With his new research, he argues that the minimum wage really does reduce poverty, despite most previous studies having found no significant poverty-reducing effect of minimum wage increases. According to Dube’s preferred methodology, every 10 percent increase in the minimum wage reduces overall poverty by 2.4 percent (that is, 2.4 percent of the total number of people in poverty, not 2.4 percentage points off the poverty rate).
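The parenthetical distinction matters more than it may look. A quick arithmetic sketch, using an assumed baseline poverty rate of 15 percent purely for illustration, shows how different the two readings of “2.4 percent” are:

```python
# Illustrative arithmetic only: the baseline rate is an assumption,
# not a figure from Dube's paper.
poverty_rate = 0.15        # assumed official poverty rate
reduction_share = 0.024    # Dube: a 10% minimum wage hike cuts the number of poor by 2.4%

new_rate = poverty_rate * (1 - reduction_share)
drop_in_points = poverty_rate - new_rate

print(f"new rate: {new_rate:.2%}")           # 14.64%, not 12.60%
print(f"drop: {drop_in_points:.2%} points")  # 0.36 percentage points, not 2.4
```

In other words, under these assumed numbers the estimated effect trims the poverty rate by about a third of a percentage point per 10 percent wage hike, an order of magnitude smaller than a naive “percentage points” reading would suggest.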

In my opinion, Dube’s two studies on the employment effects of the minimum wage have research design flaws, as Neumark, Salas, and Wascher pointed out in their review of those studies. Given this, I can’t help but be initially skeptical of Dube’s most recent (unpublished) paper on the antipoverty effects of the minimum wage.

As I’ve explained before, the minimum wage does not have the significant antipoverty effect that most minimum wage supporters usually assume it to have. This is due to several offsetting factors (mildly higher food prices, negative employment effects), as well as reasons that make the minimum wage poorly targeted in general (most impoverished adults don’t work; most minimum wage workers are not in poverty).

But given these factors, the net effect of a minimum wage increase on poverty need not be zero. Theoretically, the poverty effects are ambiguous. So Dube could be right that on the margin (and in the short run), the antipoverty effects of minimum wage increases tend to outweigh the poverty-increasing effects.

However, there’s an important caveat. As Dube acknowledges in his concluding remarks:

There are a number of outstanding issues that I did not address in this paper. The first set of issues concerns the definition of family income used in this analysis. Following official poverty calculations, my family income definition includes both pre-tax earnings and cash transfers…the estimates here do not capture the impact of minimum wages on non-cash transfers such as food stamps or housing, or on the receipt of tax credits such as EITC.

As I pointed out in my recent video, one consideration with a minimum wage hike intended to reduce poverty is that a good portion of the wage increases will be offset by losses in government benefits. Low-wage workers in poverty are often enrolled in one or more government welfare programs, as James Sherk detailed in his congressional testimony on the minimum wage. Because of this, they face high implicit marginal tax rates, which offset a significant portion of their wage gains. As Unbiased America illustrates in this post, the effect is by no means negligible.
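To make the offset concrete, here is a minimal sketch of how an implicit marginal tax rate works. The dollar amounts and the 50 percent rate are assumptions chosen for easy arithmetic, not figures from Sherk’s testimony or the Unbiased America post:

```python
# Hypothetical numbers: a worker's raise is partly clawed back as
# benefits phase out with rising earnings.
raw_wage_gain = 2000.0   # assumed annual earnings increase from a wage hike
implicit_mtr = 0.50      # assumed share lost to benefit phase-outs

net_gain = raw_wage_gain * (1 - implicit_mtr)
print(net_gain)  # only half of the nominal raise reaches the worker
```

Under these assumed numbers, a $2,000 raise improves the worker’s actual position by only $1,000, which is exactly the kind of offset a study limited to pre-tax cash income will not capture.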

Why is this important? Well, it has to do with the fact that progressives like to make another argument with regard to the federal War on Poverty. As Joe Biden’s former chief economist Jared Bernstein writes:

The official [poverty] measure stands at 15 percent, but it is widely regarded to [be] woefully inadequate, as it depends on outdated income thresholds and omits both much of the impact of policies intended to fight poverty and income sources of low-income households.

Like many other progressives as of late, Bernstein argues that despite the stagnation in the official poverty rate, the federal War on Poverty has actually been successful when looking at the supplemental poverty rate, which takes into account various government transfers and in-kind benefits, most of which are part and parcel of the War on Poverty itself.

Those of you reading closely will recognize a contradiction between these two positions. On the one hand, some progressives argue, citing Dube’s paper, that the minimum wage reduces poverty. But many progressives also argue that the War on Poverty really has had some success, because post-tax-and-transfer poverty has declined.

Yet by his own admission, Dube’s widely cited minimum wage research only accounts for the official poverty rate, not the supplemental poverty rate. To be fair, this is the case with other minimum wage poverty studies as well. Nevertheless, Dube’s paper suffers from the very shortcoming that many progressives go out of their way to avoid with respect to the War on Poverty.

Because of this, one thing ought to be clear. Progressives can argue that the minimum wage reduces poverty, but that the War on Poverty has failed to reduce it. Alternatively, progressives can argue that the War on Poverty did indeed reduce poverty, but that the minimum wage has not been shown to do the same. To be sure, one could find reason to doubt either of these claims. But if anything is certain, it’s that progressives cannot make them both simultaneously.

End term limits? How about a one-term limit?

Writing in the Washington Post, Jonathan Zimmerman endorses the idea of ending presidential term limits. Great idea? Hardly.

Zimmerman’s argument is odd on a few counts. For instance, he suggests that a re-electable president is less likely to be criticized from within his own party, something Zimmerman oddly seems to view as a good thing. Because of course, we should encourage more criticism to be muffled along partisan lines. What a novel idea!

But the crux of Zimmerman’s argument to abolish presidential term limits must be answered in this way:

Does America really need a chief executive with lots of experience in garnering power for himself? Should the executive branch ever be entrusted year after year to the same eminent personality for whom Americans have developed a convergent affection? No; it should be just the opposite. A free republican government needs an executive who is less experienced, less relevant, less loved, and less ingrained in the establishment. The president’s reign of power needs to be short, so that he personally stands to benefit less from usurping authority into the executive branch of government.

But my argument goes beyond the defensive. Zimmerman argues that the president should be allowed to continually face the possibility of re-election, so that he always has to answer to the wishes of “the people.” As a libertarian who is cynical of democracy, I would prefer to see more of the contrary. In fact, I would argue that we should ideally have a six-year, one-term presidency. Consider this:

  • We would likely get more honesty from presidents, and less pandering to voter blocs and special interests while in office.
  • Sitting presidents would never have to waste their time campaigning when they should be making governing decisions.
  • There would be no awkward divergence in the credibility of a first-term president vs. a lame duck president.
  • We would be less likely to get expansionary monetary/fiscal policy for the sake of presidential re-election.

(The last point arguably took place under Clinton and Bush II, and its absence may have contributed to the defeat of Bush I. Update: Here is evidence confirming that Nixon did exactly this in 1972.)

This idea is nothing new. In fact, it was proposed and rejected at the Constitutional Convention in 1787. Many fear that a one-term presidency would make any incumbent president totally unaccountable to the people, leaving the president more able to impose a number of undesirable policies. But with the irrational voters that we have, and two-term presidents who have gotten away with so many bad policies in their first terms (e.g. the PATRIOT Act, indefinite detention), that ship appears to have already sailed.

If anything, I believe that a one-term presidency might pressure voters to give more scrutiny to presidents before they are elected. Furthermore, fewer presidential elections and no presidential re-elections might lead voters to pay comparatively more attention to Congress, and possibly make them more likely to pressure Congress to use impeachment. Insofar as a president being unanswerable to re-election is problematic, it can hardly be worse than what we already get in every two-term president’s second term.

In a 1986 New York Times op-ed against the six-year, one-term presidency, historian Arthur Schlesinger Jr. wrote, “it is profoundly anti-democratic in its implications…It assumes that the democratic process is the obstacle to wise decisions.”

Yes, it is anti-democratic, and that’s sort of the point. Democracy is a mechanism for empowering low-information swing voters and the political agents that are most effective at pandering to their rash misconceptions about public policy and social science. That is not to say that no democratic voting processes should exist in a government; but at the very least, the excesses should be curbed by indirect methods of appointment and constitutional limitations.

We really shouldn’t trust “the people” (whoever they are) with the power to re-elect presidents, especially not for more than a second term. Many Americans’ adoration of political personalities on the basis of brand-name familiarity (see: Hillary Clinton) suggests that they shouldn’t be trusted with the power to re-elect ad infinitum; moreover, their scrutiny of presidents and their actions will be better when they aren’t habitually confining themselves to the same familiar name for terms on end, as they have for many members of Congress.

As Friedrich Hayek once wrote in The Constitution of Liberty (1960):

“Perhaps the fact that we have seen millions voting themselves into complete dependence on a tyrant has made our generation understand that to choose one’s government is not necessarily to secure freedom.”

P.S. We need term limits for Congress, too.
