
Sunday, April 17, 2011

Empirical and Historical Validation of Macroeconomic Models

On many occasions, I have made the point that one of the problems with macroeconomic theory prior to the financial crisis is that the models did not have the necessary connections between financial intermediation and the real sector built into them. Thus, when the financial sector crashed, the models had no effective way of translating that into a prediction about what would happen to the real economy, and little to offer in terms of advice about how to minimize the effects of the financial crash on employment and output. There are many reasons for this, including the difficulties associated with modeling the financial sector in a representative agent framework, but it's not quite correct to say that these models did not exist at all.

They did exist -- the Bernanke, Gertler, and Gilchrist financial accelerator models, for example -- but when economists tested these models against the data, they did not seem to explain much of the variation in output over time. Thus, they were deemed empirically irrelevant and largely set aside in favor of other pursuits.

That was a mistake, but what is the lesson? One is that we should not necessarily ignore something just because it cannot be found in the data. Much of the empirical work prior to the crisis involved data from the early 1980s to the present (due to an assumption of structural change around that time), sometimes the data go back to 1959 (where standard series on money begin), and occasionally empirical work will use data starting in 1947. So important, infrequent events like the Great Depression are rarely even in the data we use to test our models. Things that help to explain such episodes may not seem important in limited data sets, but we ignore these possibilities at our own peril.

But how do we know which things to pay attention to if the data isn't always the best guide? We can't just say anything is possible no matter what the data tell us; that's not much of a guide on where to focus our attention.

The data can certainly tell us which things we should take a closer look at. If something is empirically important in explaining business cycles (or other economic phenomena), that should draw our attention.

But things that do not appear important in data since, say, 1980 should not necessarily be ignored. This is where history plays a key role in directing our attention. If we believe that a collapse of financial intermediation was important in the Great Depression (or in other collapses in the 1800s), then we should ask how that might occur in our models and what might happen if it did. You may not find that the Bernanke, Gertler, and Gilchrist model is important when tested against recent data, but does it give us information that coincides with what we know about these earlier periods? We can't do formal tests in these cases, but there is information and guidance here. Had we followed it -- had we remembered to test our models not just against recent data but also against the lessons of history -- we might have been better prepared theoretically when the crisis hit.

There are important lessons in the historical record that cannot be found in FRED. We would do well to remember that.

    Posted by on Sunday, April 17, 2011 at 12:15 PM in Economics, Macroeconomics, Methodology
