There is a paper by Stock and Watson that I posted today for my records. That paper, and this column by Caroline Baum asking why forecasters don’t use the Index of Leading Economic Indicators (LEI), reminded me of their old paper “New Indexes of Coincident and Leading Economic Indicators.” Caroline Baum asks:
Forecasters Rely on Today to Predict Tomorrow, Caroline Baum, Bloomberg: ...The sentiment shift, based on high-frequency data, is even harder to understand in light of the economy's steady performance. ... Weak numbers yield a weak outlook. Strong numbers mean good times ahead. Where's the forecasting?
The Index of Leading Economic Indicators, which isn't a bunch of randomly selected components, is signaling slower, not faster, growth ahead. The 10 components of the LEI were all chosen because of a demonstrated ability to predict future economic activity. ... The level of the LEI, which "has an average eight to nine months lead time at peaks and troughs -- shorter at troughs -- has flattened out after strong growth in 2003 and 2004," Ozyildirim said. As long as the weekly and monthly numbers come in strong, economists will be guided by mostly contemporaneous indicators released with a lag. How come no one follows the leaders?
The paper by Stock and Watson linked above and the work that followed over the next 15 years or so look at these issues in considerable detail and answer the questions raised in the column. For those interested, the question of the optimality of the LEI and other indicators for use in economic forecasting has been examined extensively, with Stock and Watson leading voices in this area. A very, very quick search of "Stock Watson Forecasting GDP" in Google Scholar turns up the following papers on this topic (some of the abstracts are below for quick reference). One more note. The comments about forecasters being swayed by high-frequency data are why repeated warnings have been issued on this very topic (example), the latest being this nice piece of work on David Altig's site extracting the trend component of the CPI from the high-frequency noise. Here are some papers relating to the use of the LEI and forecasting more generally:
- New Indexes of Coincident and Leading Economic Indicators, Stock and Watson 1990
- A Procedure for Predicting Recessions With Leading Indicators: Econometric Issues and Recent Experience, Stock and Watson 1992
- This is what the leading indicators lead, Maximo Camacho and Gabriel Perez-Quiros, 2002
- How did leading indicator forecasts do during the 2001 recession, Stock and Watson 2003
- Forecasting Output and Inflation: The Role of Asset Prices, Stock and Watson 2003
- Forecasting with Many Predictors, Stock and Watson 2004
This is just a quick sample - there is much, much more on this topic. If you are interested in LEIs, it would be a good idea to read this on recent changes in how the yield spread is incorporated into the LEI.
New Indexes of Coincident and Leading Economic Indicators, Stock and Watson 1990:
During six weeks in late 1937, Wesley Mitchell, Arthur Burns, and their colleagues at the National Bureau of Economic Research developed a list of leading, coincident, and lagging indicators of economic activity in the United States as part of the NBER research program on business cycles. Since their development, these indicators, in particular the leading and coincident indexes constructed from these indicators, have played an important role in summarizing and forecasting the state of macroeconomic activity. The paper reports the results of a project to revise the indexes of leading and coincident economic indicators using the tools of modern time series econometrics. This project addresses three central questions. The first is conceptual: is it possible to develop a formal probability model that gives rise to the indexes of leading and coincident variables? Such a model would provide a concrete mathematical framework within which alternative variables and indexes could be evaluated. Second, given this conceptual framework, what are the best variables to use as components of the leading index? Third, given these variables, what is the best way to combine them to produce useful and reliable indexes? The results of this project are three experimental monthly indexes: an index of coincident economic indicators (CEI), an index of leading economic indicators (LEI), and a Recession Index. The experimental CEI closely tracks the coincident index currently produced by the Department of Commerce (DOC), although the methodology used to produce the two series differs substantially. The growth of the experimental CEI is also highly correlated with the growth of real GNP at business cycle frequencies. The proposed LEI is a forecast of the growth of the proposed CEI over the next six months constructed using a set of leading variables or indicators. 
The Recession Index, a new series, is the probability that the economy will be in a recession six months hence, given data available through the month of its construction. This article is organized as follows. Section 2 contains a discussion of the indexes and a framework for their interpretation. Section 3 presents the experimental indexes, discusses their construction, and examines their within-sample performance. In Section 4, the indexes are considered from the perspective of macroeconomic theory, focusing in particular on several salient series that are not included in the proposed leading index. Section 5 concludes.
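The Recession Index described above is, at bottom, a probability forecast: data available today are mapped into the probability of a recession six months out. As a toy illustration of that general idea (not Stock and Watson's actual construction, which rests on a dynamic factor model), here is a logistic mapping from leading-index growth to a recession probability; the coefficients are invented for illustration, not estimated values:

```python
import numpy as np

def recession_probability(lei_growth, a=-1.5, b=-150.0):
    """Map six-month growth of a leading index (as a decimal,
    e.g. 0.03 = 3%) into Pr(recession six months hence) via a
    logistic link. Coefficients a and b are made-up numbers
    chosen only to illustrate the shape of such a model."""
    z = a + b * lei_growth
    return 1.0 / (1.0 + np.exp(-z))

# A strongly growing leading index implies a low recession
# probability; a contracting index implies a high one.
print(recession_probability(0.03))   # healthy growth
print(recession_probability(-0.02))  # contraction
```

A real version would estimate the coefficients from historical NBER recession dates; the point here is only the form of the output, a probability rather than a point forecast.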
A Procedure for Predicting Recessions With Leading Indicators: Econometric Issues and Recent Experience, Stock and Watson 1992
This paper examines the forecasting performance of various leading economic indicators and composite indexes since 1988, in particular during the onset of the 1990 recession. The primary focus is on an experimental recession index (the "XRI"), a composite index which provides probabilistic forecasts of whether the U.S. economy will be in a recession six months hence. After detailing its construction, the paper examines the out-of-sample performance of the XRI and a related forecast of overall economic growth, the experimental leading index (XLI). These indexes performed well from 1988 through the summer of 1990 - for example, in June 1990 the XLI model forecasted a .4% (annual rate) decline in the experimental coincident index from June through September, when in fact the decline was only slightly greater, .8%. However, the XLI failed to forecast the sharp declines of October and November 1990. After exploring several possible explanations, we conclude that one important source of the forecast error was the use of financial variables during a recession that was not associated with a particularly tight monetary policy. Financial indicators - and the experimental index - were not alone, however, in failing to forecast the 1990 recession. An examination of 45 economic indicators shows that almost all failed to forecast the 1990 downturn and the few that did provided unclear signals before the recessions of the 1970s and 1980s.
How did leading indicator forecasts do during the 2001 recession, Stock and Watson 2003
The 2001 recession differed from other recent recessions in its cause, severity, and scope. This paper documents the performance of professional forecasters and forecasts based on leading indicators as the recession unfolded. Professional forecasters found this recession a difficult one to forecast. A few leading indicators (stock prices, term spreads, unemployment claims) predicted that growth would slow, but none predicted the sharp economic slowdown. Several previously reliable leading indicators (housing starts, orders for new capital equipment, consumer sentiment) provided no early warning signals. When combined, the leading indicators performed somewhat better than a benchmark autoregressive forecasting model.
Forecasting with Many Predictors, Stock and Watson 2004
Academic work on macroeconomic modeling and economic forecasting historically has focused on models with only a handful of variables. In contrast, economists in business and government, whose job is to track the swings of the economy and to make forecasts that inform decision-makers in real time, have long examined a large number of variables. In the U.S., for example, literally thousands of potentially relevant time series are available on a monthly or quarterly basis. The fact that practitioners use many series when making their forecasts – despite the lack of academic guidance about how to proceed – suggests that these series have information content beyond that contained in the major macroeconomic aggregates. But if so, what are the best ways to extract this information and to use it for real-time forecasting? This chapter surveys theoretical and empirical research on methods for forecasting economic time series variables using many predictors, where “many” can number from scores to hundreds or, perhaps, even more than one thousand. Improvements in computing and electronic data availability over the past ten years have finally made it practical to conduct research in this area, and the result has been the rapid development of a substantial body of theory and applications. This work already has had practical impact – economic indexes and forecasts based on many-predictor methods currently are being produced in real time both in the US and in Europe – and research on promising new methods and applications continues. Forecasting with many predictors provides the opportunity to exploit a much richer base of information than is conventionally used for time series forecasting. Another, less obvious (and less researched) opportunity is that using many predictors might provide some robustness against the structural instability that plagues low-dimensional forecasting. But these opportunities bring substantial challenges.
Most notably, with many predictors come many parameters, which raises the specter of overwhelming the information in the data with estimation error. For example, suppose you have twenty years of monthly data on a series of interest, along with 100 predictors. A benchmark procedure might be using ordinary least squares (OLS) to estimate a regression with these 100 regressors. But this benchmark procedure is a poor choice. Formally, if the number of regressors is proportional to the sample size, the OLS forecasts are not first-order efficient, that is, they do not converge to the infeasible optimal forecast. Indeed, a forecaster who only used OLS would be driven to adopt a principle of parsimony so that his forecasts are not overwhelmed by estimation noise. Evidently, a key aspect of many-predictor forecasting is imposing enough structure so that estimation error is controlled (is asymptotically negligible) yet useful information is still extracted. Said differently, the challenge of many-predictor forecasting is to turn dimensionality from a curse into a blessing.
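The overfitting problem in that last paragraph is easy to reproduce. The sketch below is a stylized simulation of my own, not the paper's setup: data are generated by a single common factor, and a one-regressor forecast built on the first principal component beats unrestricted OLS on all 100 predictors out of sample, precisely because the OLS forecast is swamped by estimation noise:

```python
import numpy as np

rng = np.random.default_rng(0)
T, k = 240, 100  # twenty years of monthly data, 100 predictors

# Data-generating process: one latent factor f drives both the
# predictors X and the target y (a deliberate simplification).
f = rng.normal(size=T)
X = np.outer(f, rng.normal(size=k)) + rng.normal(size=(T, k))
y = f + rng.normal(size=T)

train, test = slice(0, 180), slice(180, T)

# Benchmark: OLS with all 100 regressors (heavily overparameterized,
# 100 coefficients estimated from 180 observations).
beta = np.linalg.lstsq(X[train], y[train], rcond=None)[0]
mse_ols = np.mean((y[test] - X[test] @ beta) ** 2)

# Factor alternative: estimate the common factor as the first
# principal component, then run a one-regressor forecasting regression.
mean = X[train].mean(axis=0)
_, _, Vt = np.linalg.svd(X[train] - mean, full_matrices=False)
pc_train = (X[train] - mean) @ Vt[0]
pc_test = (X[test] - mean) @ Vt[0]
coefs = np.polyfit(pc_train, y[train], 1)
mse_pc = np.mean((y[test] - np.polyval(coefs, pc_test)) ** 2)

print(f"out-of-sample MSE, OLS on 100 regressors: {mse_ols:.2f}")
print(f"out-of-sample MSE, one-factor forecast:   {mse_pc:.2f}")
```

The principal-component step is one simple way of "imposing enough structure": it compresses 100 noisy series into the single index that carries their common signal, so only two parameters are left to estimate.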
This is what the leading indicators lead, Maximo Camacho and Gabriel Perez-Quiros, 2002
We propose an optimal filter to transform the Conference Board Composite Leading Index (CLI) into recession probabilities for the US economy. We also analyse the CLI's accuracy in anticipating US output growth. We compare the predictive performance of linear models, VAR extensions, smooth transition regressions, switching-regimes models, probit models, and non-parametric models, and conclude that a combination of the switching-regimes and non-parametric forecasts is the best strategy for predicting both the NBER business cycle schedule and GDP growth. This confirms the usefulness of the CLI, even in a real-time analysis.
Forecasting Output and Inflation: The Role of Asset Prices, Stock and Watson 2003
Because asset prices are forward-looking, they constitute a class of potentially useful predictors of inflation and output growth. The premise that interest rates and asset prices contain useful information about future economic developments embodies foundational concepts of macroeconomics: Irving Fisher’s theory that the nominal interest rate is the real rate plus expected inflation; the notion that a monetary contraction produces temporarily high interest rates— an inverted yield curve—and leads to an economic slowdown; and the hypothesis that stock prices reflect the expected present discounted value of future earnings. Indeed, Wesley Mitchell and Arthur Burns (1938) included the Dow Jones composite index of stock prices in their initial list of leading indicators of expansions and contractions in the U.S. economy. The past fifteen years have seen considerable research on forecasting economic activity and inflation using asset prices, where we interpret asset prices broadly as including interest rates, differences between interest rates (spreads), returns, and other measures related to the value of financial or tangible assets (bonds, stocks, housing, gold, etc.). This research on asset prices as leading indicators arose, at least in part, from the instability in the 1970s and early 1980s of forecasts of output and inflation based on monetary aggregates and of forecasts of inflation based on the (non-expectational) Phillips curve. One problem with using monetary aggregates for forecasting is that they require ongoing redefinition as new financial instruments are introduced. In contrast, asset prices and returns typically are observed in real time with negligible measurement error. The now-large literature on forecasting using asset prices has identified a number of asset prices as leading indicators of either economic activity or inflation; these include interest rates, term spreads, stock returns, dividend yields, and exchange rates. 
This literature is of interest from several perspectives. First and most obviously, those whose daily task it is to produce forecasts—notably, economists at central banks and business economists—need to know which, if any, asset prices provide reliable and potent forecasts of output growth and inflation. Second, knowledge of which asset prices are useful for forecasting, and which are not, constitutes a set of stylized facts to guide those macroeconomists mainly interested in understanding the workings of modern economies. Third, the empirical failure of the 1960s-vintage Phillips curve was one of the crucial developments that led to rational expectations macroeconomics, and understanding if and how forecasts based on asset prices break down could lead to further changes or refinements in macroeconomic models. This article begins in section 2 with a summary of the econometric methods used in this literature to evaluate predictive content. We then review the large literature on asset prices as predictors of real economic activity and inflation. This review, contained in section 3, covers 93 articles and working papers and emphasizes developments during the past fifteen years. We focus exclusively on forecasts of output and inflation; forecasts of volatility, which are used mainly in finance, have been reviewed recently in Ser-Huang Poon and Clive Granger (2003). Next, we undertake our own empirical assessment of the practical value of asset prices for short- to medium-term economic forecasting; the methods, data, and results are presented in sections 4–7. This analysis uses quarterly data on as many as 43 variables from each of seven developed economies (Canada, France, Germany, Italy, Japan, the United Kingdom, and the United States) over 1959–99 (some series are available only for a shorter period).
Most of these predictors are asset prices, but for comparison purposes we also consider selected measures of real economic activity, wages, prices, and the money supply. Our analysis of the literature and the data leads to four main conclusions. First, some asset prices have substantial and statistically significant marginal predictive content for output growth at some times in some countries. Whether this predictive content can be exploited reliably is less clear, for this requires knowing a priori what asset price works when in which country. The evidence that asset prices are useful for forecasting output growth is stronger than for inflation. Second, forecasts based on individual indicators are unstable. Finding an indicator that predicts well in one period is no guarantee that it will predict well in later periods. It appears that instability of predictive relations based on asset prices (like many other candidate leading indicators) is the norm. Third, although the most common econometric method of identifying a potentially useful predictor is to rely on in-sample significance tests such as Granger causality tests, doing so provides little assurance that the identified predictive relation is stable. Indeed, the empirical results indicate that a significant Granger causality statistic contains little or no information about whether the indicator has been a reliable (potent and stable) predictor. Fourth, simple methods for combining the information in the various predictors, such as computing the median of a panel of forecasts based on individual asset prices, seem to circumvent the worst of these instability problems. Some of these conclusions could be interpreted negatively by those (ourselves included) who have worked in this area. 
But in this review we argue instead that they reflect limitations of conventional models and econometric procedures, not a fundamental absence of predictive relations in the economy: the challenge is to develop methods better geared to the intermittent and evolving nature of these predictive relations. We expand on these ideas in section 8.
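Their fourth conclusion, that a simple combination such as the cross-sectional median of a panel of individual forecasts sidesteps the worst of the instability, is easy to see in a toy example. The function and numbers below are my own illustration, not anything from the paper:

```python
import numpy as np

def median_combination(forecasts):
    """Combine a panel of individual forecasts (rows = models,
    columns = forecast dates) by taking the cross-sectional
    median at each date. The median is robust: one indicator
    whose predictive relation has broken down cannot drag the
    combined forecast far off."""
    return np.median(np.asarray(forecasts, dtype=float), axis=0)

# Three asset-price-based forecasts of output growth (made-up
# numbers); the third model has broken down and predicts a
# large decline at every date.
panel = [[ 2.1,  2.4,  2.2],
         [ 1.9,  2.0,  2.5],
         [-3.0, -2.5, -4.0]]
print(median_combination(panel))  # the broken model is ignored
```

A mean combination would be pulled down sharply by the broken third forecast; the median simply discards it, which is exactly the robustness property being invoked.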