### Changes in the Aggressiveness of Monetary Policy Toward Inflation

This post looks at how the coefficient measuring the Fed's aggressiveness toward inflation has changed over time (a non-technical summary follows the graph). The model used is a version of the Taylor rule:

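In standard notation (the coefficient labels below are my choice, not from the original), a rule of this form with a lagged rate is:

$$
i_t = c + \rho\, i_{t-1} + \beta_\pi \pi_t + \beta_y \tilde{y}_t + w_t
$$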
where i is the federal funds rate, ỹ is the percentage deviation of output from potential (from the CBO), π is the year-over-year inflation rate, and w is an error term that captures monetary policy shocks. All the variable definitions follow the original Taylor paper, though the presence of the lagged federal funds rate on the right-hand side of the equation (included for "smoothness" or other reasons) makes the model different from Taylor's. In fact, due to the presence of the lagged interest rate term, it is necessary to transform the coefficients as follows (I'll leave the details about why aside):

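The standard transformation divides each estimated coefficient by one minus the coefficient on the lagged federal funds rate, converting short-run responses into long-run responses (the labels here are my notation):

$$
\beta_\pi^{\text{long-run}} = \frac{\beta_\pi}{1-\rho}, \qquad
\beta_y^{\text{long-run}} = \frac{\beta_y}{1-\rho}
$$

where ρ is the coefficient on the lagged federal funds rate.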
Here is a graph of the middle coefficient, i.e. the coefficient measuring the Fed's response to inflation. The graph is constructed using rolling regressions. First, the model is estimated using quarterly data from 1960:2 through 1974:2 (the first fifteen years) and the coefficient on the inflation term in the estimated policy rule is saved. Then one observation is added to the end of the data set, the estimation is conducted over 1960:2 through 1974:3, and the new estimate of the inflation coefficient is saved. This is repeated, adding one observation to the end of the data set at each step and saving the estimated coefficient, until the model is estimated for the full sample, 1960:2 through 2006:2. [The sample ends in 2006:2 because this is from a paper I am preparing for resubmission, and that is the sample used in the paper. If you also roll the start point forward, i.e. roll a fixed window through the data, you see a noisier version of the same basic result, and the extra variability does cause some of the post-1980 estimates to fall below 1.0 (even more fall below 1.0 with a smaller window). As you would expect, smaller windows, e.g. 10 years instead of 15, increase the variation in the estimates. Both procedures, rolling a window or rolling only the endpoint, have their good and bad points; the rolling-endpoint results shown below duplicate what econometricians would have estimated at each point in time using these data, though see the note below on using revised versus real-time data. Update: Here's a graph showing a 20-year window rolled through the data. A window size of 20 years is used because smaller windows produce so much volatility in the estimates that it's hard to see much of a pattern.] Here's the graph:
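The rolling-endpoint procedure can be sketched in a few lines. The code below is illustrative only: it runs on simulated data with a made-up mid-sample break in the inflation response, not on the CBO/FRED series or the paper's actual estimation, and all variable names and parameter values are my own assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated quarterly data standing in for 1960:2-2006:2 (185 quarters).
# The series and the break at observation 80 are illustrative assumptions.
T = 185
pi = 2.0 + np.cumsum(rng.normal(0.0, 0.3, T))    # inflation (persistent series)
ygap = rng.normal(0.0, 1.0, T)                   # output gap
rho_true, c_true = 0.7, 0.5
beta_pi = np.where(np.arange(T) < 80, 0.2, 0.6)  # short-run response rises mid-sample
i = np.zeros(T)
for t in range(1, T):
    i[t] = (c_true + rho_true * i[t - 1] + beta_pi[t] * pi[t]
            + 0.1 * ygap[t] + rng.normal(0.0, 0.2))

def longrun_inflation_coef(end):
    """OLS of i_t on [1, i_{t-1}, pi_t, ygap_t] over observations 1..end-1,
    then convert the short-run inflation coefficient to its long-run value,
    beta_pi / (1 - rho)."""
    X = np.column_stack([np.ones(end - 1), i[:end - 1], pi[1:end], ygap[1:end]])
    b, *_ = np.linalg.lstsq(X, i[1:end], rcond=None)
    return b[2] / (1.0 - b[1])

# Rolling-endpoint estimation: start with the first 15 "years" (60 quarters)
# and add one observation at a time until the full sample is used.
estimates = [longrun_inflation_coef(end) for end in range(60, T + 1)]
```

Plotting `estimates` against the sample endpoint reproduces the kind of picture described in the post: one long-run inflation coefficient per endpoint, each one being what an econometrician would have estimated with the data available at that date.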

Notice that prior to around 1980 the coefficient on inflation is less than
one, and that it rises to a value larger than one thereafter.

Why should we care? There is more to say than this, but here are three reasons these results are of interest:

1. In many modern New Keynesian models, a problem known as indeterminacy arises when the inflation coefficient is less than one. If the Fed does not pursue an aggressive response to inflation - one that is more than one-to-one with any uptick in the inflation rate - indeterminacy can arise in the model. That is, if the inflation rate increases by 1%, the federal funds rate must be increased by more than 1% to avoid this problem.

Indeterminacy is often used to explain the problems we had with inflation and output in the 1970s after the oil price shocks, and the lack of problems in the more recent oil price shock episodes. It is the aggressive response to inflation after 1980 that makes the difference.
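One way to see why the one-to-one threshold matters: writing the long-run inflation response as β_π (my notation), the change in the real interest rate following a change in inflation is

$$
\Delta r = \Delta i - \Delta \pi = (\beta_\pi - 1)\,\Delta\pi
$$

so when β_π > 1 the real rate rises with inflation and restrains the economy, while β_π < 1 lets the real rate fall as inflation rises, accommodating it and opening the door to self-fulfilling inflation expectations.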

[One note about these results. If you use revised data to do the estimation, you get the results shown in the graph. However, if you follow Orphanides and use real-time data, i.e. the unrevised data actually available to policymakers when they were making decisions, the evidence for a break in the coefficient around 1980 is weakened or vanishes altogether. Which data to use is controversial, and perhaps something we can take up another time.]

2. One of the puzzles in macroeconomics is explaining the Great Moderation, i.e. the fall in output variability and the decline in the inflation rate in the mid 1980s (see the bottom of this post for a list of reasons that have been suggested to explain the Great Moderation). This graph suggests that one reason was better policy, i.e. the increase in the inflation coefficient in the early 1980s led to the subsequent moderation in inflation and output.

This graph also helps explain why finding the source of the Great Moderation is so hard. This parameter can be used to argue that the change in volatility was due to better monetary policy, but other changes, such as the spread of computer technology, occurred simultaneously, and since all of these changes happened at roughly the same time, i.e. in the early 1980s, sorting one cause from another is very difficult.

3. The increase in the inflation parameter toward the end of the sample is something I hadn't seen before constructing this graph. It implies that somewhere around 2001 the Fed began responding more aggressively to inflation, and that this continued through 2003, when the coefficient reached a new peak before declining slightly through the end of the sample. Even so, the parameter, which measures how aggressively the Fed responds when inflation hits the economy, is currently higher than it has been in recent decades, indicating a stronger stance toward inflation than has been typical.

**Update**: There is more discussion in comments that I probably should have included about how to interpret these results.

Posted by Mark Thoma on Monday, July 16, 2007 at 11:01 AM in Economics, Inflation, Monetary Policy |