The Use of Real-Time versus Revised Data in Monetary Policy Research
This is wonkish, and it's long. It's lonkish. It explains the use of real-time versus revised data in monetary policy research.
In recent analyses of monetary policy, there have been references to the use of real-time data (e.g., see here and here). Because this is an important issue in the theoretical and empirical analysis of monetary policy, I thought I would explain what the issue is and why it matters.
The difference between the two types of data is easily explained. At the time policymakers decide policy, for example at the upcoming FOMC meeting this week, the data they have available are real-time data. However, after the meeting, there will be substantial revisions to some of the key series used to guide policy. For example, as in this recent release from the BEA (5/26/05), first-quarter GDP growth was revised upward from 3.1% to 3.5%, a fairly large increase. Revised data are considered more accurate reflections of the economy, so in the past these data were almost always used in empirical investigations.
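To make the distinction concrete, here is a minimal sketch of how a real-time ("vintage") dataset is organized: each vintage records the data as it appeared on a given release date. Only the 3.1% and 3.5% figures come from the BEA release above; the first release date and everything else are illustrative placeholders.

```python
# Minimal sketch of a real-time ("vintage") dataset for one series.
# Each key is a release date; the value is the data as it looked on
# that date. The 3.1% and 3.5% figures for 2005Q1 GDP growth are from
# the BEA release cited above; the first release date is a placeholder.
vintages = {
    "2005-04-30": {"2005Q1": 3.1},  # advance estimate (real-time)
    "2005-05-26": {"2005Q1": 3.5},  # revised estimate
}

def as_seen_on(quarter, as_of):
    """Return the latest estimate of `quarter` published on or before
    `as_of` -- i.e., what a policymaker could actually have seen."""
    for date in sorted(vintages, reverse=True):
        if date <= as_of and quarter in vintages[date]:
            return vintages[date][quarter]
    return None  # not yet published

print(as_seen_on("2005Q1", "2005-05-10"))  # 3.1 -> real-time data
print(as_seen_on("2005Q1", "2005-06-26"))  # 3.5 -> revised data
```

A study using revised data sees only the latest vintage; a real-time study reconstructs what was actually on the table at each meeting.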
But that does not mean that revised data best reflect what policymakers use when choosing a course for monetary policy. Real-time data may be much better suited to analyzing policy decisions. To me, there are two arguments here. First, real-time data are what's actually available in terms of hard numbers, and that speaks toward their use. Second, there are many more signals about the economy than are reflected in a few summary statistics, and perhaps the revised data better reflect what policymakers actually know as they contemplate future policy. It would be nice if the choice didn't matter, but it does: there are tests of important theoretical results whose outcomes change depending upon which type of data are used.
A good place to start on these issues is Boivin (NBER, May 2005; the link is to a free April version). There is more to this paper than just an analysis of which type of data to use, but that's the focus of this post. I'll present a summary of Boivin's paper later, but first let's go back to one of the seminal papers on the use of real-time data by Orphanides (AER 2001, JMCB 2003; these are free versions, not links to the actual published journal papers), who has led the charge on this issue.
Here’s the important issue. In the Taylor rule, there is a coefficient on the deviation of inflation from its target value. If this coefficient is greater than one, then the economy is stable and well-behaved according to popular theoretical models of the U.S. economy. However, if the coefficient is less than one, then the economy can potentially be less stable (I hope my colleagues will forgive me for using imprecise language here and not attempting an in-depth discussion of indeterminacy, sunspots, and so on, and for not elucidating the two separate theoretical reasons for the change after 1980 noted by Clarida et al.).
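To fix notation, here is a generic version of the rule. This is a textbook statement of the Taylor principle, not the exact specification estimated in any of the papers discussed below:

```latex
% i_t: nominal policy rate, \pi_t: inflation, y_t: output gap
% r^*: equilibrium real rate, \pi^*: inflation target
i_t = r^* + \pi^* + \beta\,(\pi_t - \pi^*) + \gamma\, y_t
% Taylor principle: \beta > 1. Then the nominal rate rises more than
% one-for-one with inflation, so the real rate (i_t - \pi_t) rises and
% policy leans against the inflation. With \beta < 1, the real rate
% falls as inflation rises, which can permit self-fulfilling
% inflationary episodes (indeterminacy).
```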
An important paper by Clarida, Gali, and Gertler (QJE 2000) presents empirical evidence that prior to 1980 the coefficient on inflation in the Taylor rule was less than one, i.e., in the range of indeterminacy, but that after 1980 it was greater than one, in the stable range. As Clarida, Gali, and Gertler note, “…the economy exhibits greater stability under the post-1979 rule than under a rule that closely approximates monetary policy pre-1979.” Thus, according to these estimates, a substantial change in monetary policy occurred around 1980 that caused the economy to become much more stable.
This is where real-time data come in. Orphanides (2001, 2003) weighed in with a ‘wait just a minute’ paper. He showed that if you use real-time rather than revised data, the pre- and post-1980 differences are not so stark. In fact, he finds that the coefficient on inflation is greater than one in both time periods, implying that monetary policy followed a rule that produced a stable outcome for the economy in both periods.
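For the mechanics, here is a minimal sketch of the kind of regression involved, assuming you already have interest-rate, inflation, and output-gap series for a sample period. The variable names and the simple contemporaneous OLS setup are illustrative only; Orphanides actually uses the Fed's real-time Greenbook forecasts, and Clarida, Gali, and Gertler estimate a forward-looking rule by GMM.

```python
import numpy as np

def inflation_response(rate, inflation, output_gap):
    """OLS estimate of b in i_t = a + b*pi_t + c*y_t.
    b > 1 satisfies the Taylor principle (stabilizing policy)."""
    X = np.column_stack([np.ones_like(inflation), inflation, output_gap])
    coefs, *_ = np.linalg.lstsq(X, rate, rcond=None)
    return coefs[1]

# The real-time critique amounts to running the same regression twice
# on the same sample period, once per dataset, and comparing b:
#   b_revised   = inflation_response(rate, infl_revised,   gap_revised)
#   b_real_time = inflation_response(rate, infl_real_time, gap_real_time)
# Orphanides finds the pre-1980 estimate is above one when the
# real-time series are used.
```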
That brings us to a more recent word on this issue, and a good place to begin tracking back through the literature: Boivin (NBER 2005). Here’s his summary of the results:
Despite the large amount of empirical research on monetary policy rules, there is surprisingly little consensus on the nature or even the existence of changes in the conduct of U.S. monetary policy. Three issues appear central to this disagreement: 1) the specific type of changes in the policy coefficients, 2) the treatment of heteroskedasticity, and 3) the real-time nature of the data used. This paper addresses these issues in the context of forward-looking Taylor rules with drifting coefficients. The estimation is based on real-time data and accounts for the presence of heteroskedasticity in the policy shock. The findings suggest important but gradual changes in the rule coefficients, not adequately captured by the usual split-sample estimation. In contrast to Orphanides (2002, 2003), I find that the Fed's response to the real-time forecast of inflation was weak in the second half of the 1970's, perhaps not satisfying Taylor's principle as suggested by Clarida, Galí and Gertler (2000). However, the response to inflation was strong before 1973 and gradually regained strength from the early 1980's onward. Moreover, as in Orphanides (2003), the Fed's response to real activity fell substantially and lastingly during the 1970's.
So, using real-time data, it is not so clear that the Fed followed a stable rule prior to 1980 after all. What is the answer? Was the coefficient on inflation in the Taylor rule greater or less than one prior to 1980? What other results change when real-time data are used? This is an active area of research. I'll have to let you know...
Posted by Mark Thoma on Sunday, June 26, 2005 at 07:20 PM in Academic Papers, Economics, Macroeconomics, Methodology, Monetary Policy