The case for a higher inflation target gets stronger

For several years, economists and policymakers have been debating the wisdom of raising the inflation target. Today, roughly two-thirds of global GDP is produced in countries that are either de jure or de facto inflation targeters (see our earlier post). In most advanced economies, the target is (close to) 2 percent. Is 2 percent enough?

Advocates of raising the target believe that central banks need greater headroom to use conventional interest rate policy in battling business cycle downturns. More specifically, the case for a higher target is based on a desire to reduce the frequency and duration of zero-policy-rate episodes, avoiding the now well-known problems with unconventional policies (including balance sheet expansions that may prove difficult to reverse) and the limited scope for reducing policy rates below zero.

We have been reluctant to endorse a higher inflation target. In our view, the most important counterargument is the enormous investment that central banks have made in making the 2-percent inflation target credible. Over the past quarter century, the Federal Reserve has been extraordinarily successful at securing price stability, even in the face of unprecedented economic and financial shocks. Raising the target would likely prove both difficult and costly: it would temporarily destabilize the long-term inflation expectations that anchor many day-to-day decisions, and it would require durable political support.

Yet, several lines of empirical research recently have combined to boost the case for raising the target, say to 3 percent. Four developments are particularly noteworthy: (1) the estimated decline in the long-run equilibrium, or natural, real rate of interest (r*); (2) sharply higher estimates of the likely frequency and duration of zero-policy rate episodes; (3) evidence countering a theoretical link between the level of inflation and the extent of inefficient price dispersion; and (4) calculations suggesting that conventional price measures overstate increases in the true cost of living by more than previously thought.

To see how these results make the case for raising the inflation target, we will review each of them in turn.

Starting with the natural rate of interest (r*), there is now a clear consensus that the level has declined substantially. According to the FOMC’s Summary of Economic Projections, the median estimate of r* (calculated by subtracting the 2 percent inflation target from the published “longer run” projections for the federal funds rate) has dropped by 1.25 percentage points over the past five years to just 1 percent. Some estimates put r* even lower (the end-2016 measure of Holston, Laubach and Williams estimates it at 0.36 percent). While these point estimates are quite imprecise, the evidence of a sustained decline is considerable (see also here).

Using an r* of 1 percent in a Taylor rule, rather than the original 2 percent assumption, sharply increases the frequency with which the U.S. policy rate would hit zero. With a 2 percent inflation target, a 1 percent r* implies that the longer-run nominal federal funds rate should be 3 percent, not 4 percent. The immediate implication is that interest rates can fall only about 3 percentage points, rather than 4, before policymakers hit a policy rate of zero (close to the effective lower bound). To see how much difference this makes, consider a modified Taylor rule in which the policy rate responds one-for-one to the unemployment gap (consistent with the original Taylor rule weight of ½ on the output gap, given an Okun’s law coefficient of 2). With inflation starting at the 2 percent target, the policy rate in such a setup hits zero whenever the actual unemployment rate exceeds the natural rate of unemployment by more than 3 percentage points.
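For concreteness, the arithmetic can be sketched in a few lines of Python. This is a minimal illustration of the modified Taylor rule described above (coefficient ½ on the inflation gap, 1 on the unemployment gap), not an estimate of actual Fed behavior:

```python
def policy_rate(r_star, inflation, target, u_gap):
    """Modified Taylor rule: weight 0.5 on the inflation gap and,
    via Okun's law, 1.0 on the unemployment gap; floored at zero."""
    rate = r_star + inflation + 0.5 * (inflation - target) - 1.0 * u_gap
    return max(rate, 0.0)

# With inflation at the 2 percent target, the neutral nominal rate is r* + 2.
print(policy_rate(2.0, 2.0, 2.0, 0.0))  # 4.0: the original r* assumption
print(policy_rate(1.0, 2.0, 2.0, 0.0))  # 3.0: with r* at 1 percent
print(policy_rate(1.0, 2.0, 2.0, 3.0))  # 0.0: a 3-point unemployment gap exhausts the headroom
```

With r* at 1 percent, the rule runs out of room exactly when the unemployment gap reaches 3 percentage points, which is the threshold used in the chart below.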

Unemployment gap, 1960-2016

Source: FRED.

The red line in the chart above plots the unemployment gap in the United States. In the period since 1990, when inflation has averaged close to the Fed’s 2 percent target, this gap (measured using the Congressional Budget Office’s estimate of the natural rate) exceeded the 3-percentage-point threshold 13 percent of the time, precisely double the frequency with which it exceeded a 4-percentage-point threshold. (Had we assumed, as in Chair Yellen’s balanced-approach rule, that policymakers were twice as sensitive to economic slack as in the original Taylor rule—with the interest rate responding by two percentage points for each one-percentage-point change in the unemployment gap—the threshold would be only 1½ percentage points, which the gap exceeded roughly one-quarter of the time since 1990.)
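The frequency calculation behind these figures is simple to replicate. The sketch below uses a short, made-up gap series purely for illustration (the actual computation uses the CBO-based quarterly series plotted above):

```python
def zero_rate_frequency(u_gaps, headroom):
    """Fraction of observations in which the unemployment gap (in
    percentage points) exceeds the available policy headroom, pushing
    the rule's prescribed rate to zero."""
    hits = sum(1 for g in u_gaps if g > headroom)
    return hits / len(u_gaps)

# Illustrative gap series (NOT the CBO data): quarterly gaps in percentage points.
gaps = [0.5, 1.0, 3.5, 4.2, 2.0, -0.5, 3.1, 0.0]
print(zero_rate_frequency(gaps, 3.0))  # 0.375: three of eight quarters breach the 3-point threshold
print(zero_rate_frequency(gaps, 4.0))  # 0.125: only one quarter breaches the 4-point threshold
```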

When we first did this computation two years ago, we did not view this evidence as a compelling reason to advocate raising the inflation target. But a brand-new Brookings paper from long-time Federal Reserve Board economists Michael T. Kiley and John M. Roberts argues that lowering r* to 1 percent would raise the frequency and duration of zero-policy-rate episodes far more than our simple calculations suggest. The table below shows Kiley and Roberts’ simulations using the Fed’s FRB/US model for different values of r*, imposing a standard Taylor policy rule with an inflation target of 2 percent and excluding unconventional policies (such as forward interest rate guidance or quantitative easing). Stunningly, in 500 simulations of 40-year periods, when r* is set equal to 1 percent, the policy rate hits zero 38.3 percent of the time! Not only that, but zero-rate episodes endure for an average of 9.8 quarters (with some bouts much longer). Even worse, because the simulated economy lacks sufficient monetary accommodation in these episodes (the policy rate cannot fall below zero and there are no unconventional policy tools), output averages 1.1 percent below its normal level over the full period, while mean inflation is only 1.2 percent, well shy of the 2 percent target.

Simulated U.S. Economic Performance for Various Levels of r*

Note: Excerpted from Table 3 of Kiley and Roberts, Monetary Policy in a Low Interest Rate World, Brookings Papers on Economic Activity Conference, March 2017. The output gap is the percentage shortfall of actual output from normal (potential) output.

The authors suggest that the Fed could still achieve its inflation target (and avoid an average shortfall of output) by committing to keep interest rates at zero until the economy makes up for its underperformance in zero-rate episodes. Like a price-level targeting regime, this approach makes policy history dependent: past misses in one direction must be matched by future misses in the other. The implication is that, since inflation will be too low during the lengthy and frequent episodes when the interest rate is at zero, inflation will need to overshoot the target—possibly substantially—to achieve the 2 percent average over the full period.
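The size of the required overshoot follows from simple averaging. The sketch below uses stylized numbers in the spirit of the simulations (they are not Kiley and Roberts’ estimates): if inflation runs at 1 percent during zero-rate spells that cover 38 percent of the time, then achieving a 2 percent average requires notably above-target inflation the rest of the time:

```python
def required_overshoot(target, zlb_inflation, zlb_share):
    """Average inflation needed outside zero-rate episodes so that
    inflation averages `target` over the whole period, given that it
    runs at `zlb_inflation` for a `zlb_share` fraction of the time."""
    return (target - zlb_share * zlb_inflation) / (1.0 - zlb_share)

# Stylized inputs: 1 percent inflation during zero-rate spells, 38 percent of the time.
print(round(required_overshoot(2.0, 1.0, 0.38), 2))  # 2.61: inflation must average about 2.6 percent away from the zero bound
```

The longer and more frequent the zero-rate episodes, the larger the overshoot the commitment demands, which is why such a promise is hard to keep.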

Since high inflation is associated with lower unemployment in many workhorse models (the supply side is characterized by a Phillips curve), above-target inflation will be associated with an unemployment rate below the normal, sustainable level. Noting this in his discussion of the paper, Professor Laurence Ball estimates that, in the aftermath of the Great Recession, the Kiley-Roberts overshooting commitment would have led the Fed to keep interest rates at zero until the unemployment rate had plunged close to 2 percent! In our view, the incentives for policymakers to renege on such a commitment limit both its credibility and effectiveness. Raising the inflation target by 1 percentage point seems far easier to explain and implement.

To be sure, raising the inflation target is not costless. One concern is the potential for increased uncertainty, which can make household and business decisions less efficient. Importantly, the volatility of inflation—a proxy for uncertainty—is thought to be positively linked to the level of inflation. In a sample of more than 100 countries, we see that a one-percentage-point addition to average inflation is associated with a ½-percentage-point rise in the annual standard deviation of inflation (see chart). A vigilant central bank might avoid this link—accurately controlling inflation regardless of the level of its target—but even if it had the skill and knowledge to do so, the result could well come at the expense of higher short-run output volatility. In addition to heightened uncertainty, other traditional concerns about higher inflation include the tax imposed on capital (due to the lack of indexation for capital gains), the “inflation tax” on holding money, and the potential for nominal illusion that reduces the effectiveness of long-term planning (for a related Fed staff memo, see here).

Average Annual CPI Inflation versus Standard Deviation of Annual Inflation, 1970-2015

Note: The sample of 105 countries includes those with average inflation up to 10 percent and with at least 10 annual observations. Source: World Bank.

Yet, other recent empirical research suggests that higher inflation may be less costly in practice than prevailing theories imply. In many economic models, a major cost of higher inflation is the increase in relative price dispersion that occurs when, because firms set prices at different times and for discrete periods, prices temporarily deviate from their optimal level. A simple way to limit such dispersion, and reduce the costs associated with it, is to implement an inflation target of zero even in the presence of a zero lower bound on the nominal interest rate (see, for example, Schmitt-Grohé and Uribe).

To assess this link between the level of inflation and the extent of inefficient price dispersion, Nakamura, Steinsson, Sun, and Villar undertook a multi-year project to extend the Bureau of Labor Statistics’ micro-data set (which starts in 1987) back to 1977. This expanded data set allows them to study a period of double-digit inflation—far higher than anything we have seen since the early 1980s. Strikingly, they find no evidence that price dispersion was greater in the high-inflation period. Instead, while the magnitude of price adjustments was stable over decades, an increased frequency of price changes in periods of higher inflation limited inefficient price dispersion.

Finally, higher inflation does have well-known benefits. A key example is that it reduces the labor market distortions arising from downward wage rigidity. Despite a gradual pickup in average wage inflation in recent years, the frequency of zero changes in the distribution of wage changes still sharply exceeds what one would expect if employers were no more reluctant to lower wages than they are to raise them (see our earlier post and the latest wage rigidity meter of the FRB San Francisco). Given that the clustering at zero is lower when wage inflation is higher, it is natural to conclude that this is a source of inefficiency. That is, employers would like to lower real wages—without lowering nominal wages—more frequently than they are able to at low levels of inflation. Higher average nominal wage gains, which would naturally result from a higher inflation target, would reduce this inefficiency (see here).

Another advantage of higher inflation is that it may simply offset measurement bias in existing price indexes. Indeed, new work by Redding and Weinstein (see here and here) suggests that traditional price indexes overstate inflation by more than previously believed. Unlike existing price measures, their unified price index allows for shifting consumer demand at the level of individual goods, a feature that microeconomists commonly rely on to account for observed prices and consumer spending patterns. Ignoring these demand shifts creates a bias that has not previously been analyzed, partly because standard datasets mask demand shifts by bundling heterogeneous goods even in their narrowest categories. Using a big data set at the individual product level, Redding and Weinstein estimate this unmeasured consumer valuation bias to be greater than one percent annually, similar to the bias of ignoring the changing mix of goods in traditional price indexes.

So, what to conclude? We consider these four developments important and likely to influence the professional consensus in favor of a higher inflation target. Nevertheless, we cannot quite bring ourselves to advocate such a policy shift. The reputational capital of central bankers in the United States and abroad is fully invested in making the 2 percent target credible. The Bank of England’s decades-long public information campaign, including student competitions like Target 2.0, is just one high-profile example. We expect that any alteration to this bedrock commitment would immediately lead people to question whether further changes were going to follow. The result would be heightened uncertainty and, possibly, lower real growth, at least for a transition period.

Consequently, the hurdle for change must be high, and should require broad-based consensus that cannot be achieved in a debate solely among experts. The case also would have to be made in a compelling way not only to the elected representatives who oversee the central banks, but to the general public as well. With the views of professionals still evolving—and with their exploitation of relevant micro-level price and wage data still in its infancy—this broader consensus remains a long way off.

Acknowledgements: We are grateful to Laurence Ball, Michael Kiley and Emi Nakamura for very helpful suggestions.