« "The GOP Has a Dumb Mortgage Idea" | Main | "The Action Americans Need" »

Thursday, February 05, 2009

"The Great Multiplier Debate"

This is very good. I have some follow-up comments at the end:

Why Can't We All Just Get Along? The Great Multiplier Debate, by Menzie Chinn: I've been thinking about why the numbers that are typically bandied about in policy circles (at least that I'm familiar with) have so little impact on the overall general and blogosphere debate (see some examples here and here). I think it's part ideological, and part methodological. I can't do much about the first (e.g., tax cuts good, spending on goods and services bad -- unless on defense; or alternatively "let the market adjust no matter how long it takes"). But at least I can lay out the reasons why there is disagreement on the size of the multipliers. ...

The starting point in the analysis is to realize that there are three key ways in which to obtain "multipliers".

  • Estimation of structural macroeconometric models, with identification a la the Cowles Commission approach.
  • Calibration of microfounded models (including real business cycle models, and New Keynesian dynamic stochastic general equilibrium models).
  • Estimation of vector autoregressions (VARs) and associated impulse-response functions, with identification achieved by a variety of means.

Traditional macroeconometric models. Most of the estimates I have cited [1] [2] are based upon the first approach. One estimates a model with many equations, including the components of aggregate demand (C, I, G, X, M), supply side (price setting, wage setting), and potential GDP. The framework most popular in policy circles is one that might be characterized as "the neoclassical synthesis", wherein prices are sticky in the short run, and perfectly flexible in the long run. ... Now even within this category, there is a wide diversity of specifications...

One key reason for the academic disenchantment with these types of models was the view that the identification schemes used were untenable (e.g., why is income in the consumption function but not in the investment function?). Another source is the combined impact of the inflationary 1960s and 1970s, and the Lucas Critique. On the latter point, I'd point out that unless policy changes are really massive, the Lucas Critique (a.k.a. Econometric Policy Evaluation Critique) isn't really relevant (see [1]).

Models with micro-foundations in general equilibrium. Micro-founded models are often associated with real business cycle models. However, the association is not one-for-one. It's true the early real business cycle models worked off of utility functions and production functions. But the modern generation of dynamic stochastic general equilibrium (DSGE) models in the new Keynesian mode incorporate microfoundations as well (utility functions, production functions, investment functions, etc.) but also incorporate rigidities such as price stickiness. Purists will say everything has to be microfounded. Well, that's a matter of taste, but the fact of the matter is that it's very hard to calibrate simple real business cycle models without rigidities to match the moments of actual real world data, even after the data's been HP-filtered (I'm sure this blanket statement will get me in trouble, but I think that's a fair assessment). So DSGEs do better at mimicking real data, especially after numerous rigidities are incorporated. ... For a survey of how DSGEs have been incorporated into policy analysis, see the survey by UW PhD Camilo Tovar.
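
For readers unfamiliar with the HP filter mentioned above: it splits a series into a smooth trend and a cyclical component by penalizing changes in the trend's growth rate. Here is a minimal numpy sketch of the standard closed-form solution (the function name is mine; λ = 1600 is the conventional smoothing value for quarterly data):

```python
import numpy as np

def hp_filter(y, lam=1600.0):
    """Hodrick-Prescott filter: split y into trend and cycle.

    Solves min_tau sum (y - tau)^2 + lam * sum (second diff of tau)^2,
    whose closed form is tau = (I + lam * D'D)^{-1} y, where D is the
    second-difference operator.
    """
    n = len(y)
    # Build the (n-2) x n second-difference matrix D
    D = np.zeros((n - 2, n))
    for i in range(n - 2):
        D[i, i], D[i, i + 1], D[i, i + 2] = 1.0, -2.0, 1.0
    trend = np.linalg.solve(np.eye(n) + lam * D.T @ D, y)
    return trend, y - trend

# Sanity check: a pure linear trend has zero second differences, so the
# filter should return it essentially unchanged, with a near-zero cycle.
y = np.linspace(0.0, 10.0, 50)
trend, cycle = hp_filter(y)
print(np.max(np.abs(cycle)))  # close to 0
```

Calibration exercises then target the moments (variances, correlations) of the `cycle` component rather than the raw series.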

It's useful at this point to ask how these models are calibrated. For the deep parameters (the intertemporal rate of substitution, for instance), one can rely upon some estimates -- then pick the one that you like (and is in the range of estimates). Oftentimes, the combination of parameter values is selected to mimic the time series properties of actual (filtered) data. So say one believes one should not appeal to ad hoc Keynesian models. It's not clear that RBCs or DSGEs get you away from the problem that one has to appeal to the data to get multipliers, since the models are calibrated to mimic real world data. In other words, while the theoretical bases of the models may differ,... the differences in terms of multipliers might not be as big as one might think.
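
To make the moment-matching idea concrete, here is a toy sketch (entirely illustrative, not any particular model's procedure): calibrate the persistence parameter of an AR(1) "model" by setting it equal to the sample lag-1 autocorrelation of the data, since for an AR(1) that moment equals the parameter itself.

```python
import numpy as np

rng = np.random.default_rng(0)

# Pretend this is a detrended "data" series whose persistence we target.
true_rho = 0.8
data = np.zeros(5000)
for t in range(1, len(data)):
    data[t] = true_rho * data[t - 1] + rng.standard_normal()

# Moment matching: for an AR(1), the lag-1 autocorrelation equals rho,
# so set the model parameter equal to the corresponding sample moment.
sample_autocorr = np.corrcoef(data[:-1], data[1:])[0, 1]
rho_calibrated = sample_autocorr
print(round(rho_calibrated, 2))  # close to 0.8
```

The point of the toy example is Chinn's: the "calibrated" parameter is itself an appeal to the data, just by a different route than estimation.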

VARs. Vector autoregressions are regressions of multiple variables on lags of themselves. The underlying shocks can be identified by putting the variables in a recursive ordering (a Cholesky decomposition), or by using restrictions based on theory (say, money has no contemporaneous impact on prices; or money has no impact on output in the long run). VARs were initially proposed as a way of getting around the "incredible identifying assumptions" in the Cowles Commission approach to econometrics embodied in the old style macroeconometric models. But of course, people can disagree about which restrictions make the most economic sense. (For instance, long-run money neutrality seems natural, but not all theoretical models have that implication.)
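
The mechanics can be sketched in a few lines of numpy (simulated data and variable choices are mine, purely for illustration): estimate a VAR(1) by OLS, take a Cholesky decomposition of the residual covariance to identify "structural" shocks under a recursive ordering, and trace out impulse responses.

```python
import numpy as np

rng = np.random.default_rng(1)

# Simulate a bivariate VAR(1): y_t = A y_{t-1} + e_t, e_t ~ N(0, Sigma)
A_true = np.array([[0.5, 0.1], [0.2, 0.4]])
Sigma_true = np.array([[1.0, 0.3], [0.3, 0.5]])
T = 2000
chol_true = np.linalg.cholesky(Sigma_true)
y = np.zeros((T, 2))
for t in range(1, T):
    y[t] = A_true @ y[t - 1] + chol_true @ rng.standard_normal(2)

# OLS estimation, equation by equation: regress y_t on y_{t-1}
X, Y = y[:-1], y[1:]
A_hat = np.linalg.lstsq(X, Y, rcond=None)[0].T
resid = Y - X @ A_hat.T
Sigma_hat = resid.T @ resid / (T - 1)

# Identification via Cholesky (recursive ordering): shock j's effect at
# horizon h is A_hat^h @ P[:, j], where P is the lower-triangular factor.
P = np.linalg.cholesky(Sigma_hat)
irf = [np.linalg.matrix_power(A_hat, h) @ P for h in range(11)]
print(np.round(A_hat, 2))
```

Note that the recursive ordering is itself an identifying assumption: reorder the variables and the impulse responses change, which is exactly the kind of disagreement Chinn describes.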

The much-cited Romer and Romer model of fiscal policy impacts is a particular sort of VAR, in which only one equation is focused on, and extra-model information is used to identify exogenous tax changes (remember, they don't analyze government spending changes). ... A good summary of where these types of fiscal multipliers come from was in Box 2.1 in Chapter 2 of the April 2008 World Economic Outlook.

My bottom line. There are indeed a wide variety of estimates regarding the size of multipliers. Different models -- and assumptions within those model categories -- lead to different estimates. It's important to understand the underpinnings of those estimates (and this is where many of the people who cited the Romer and Romer study went wrong). Hence, one has to have an understanding of the very complicated models before taking strong stands in favor of one estimate over another.

In my experience, as far as policy organizations such as central banks, government agencies and multilateral agencies go, reference is made to a number of models. Their assessments of multiplier magnitudes will then reflect some weighting of the various model predictions. That is why I will put more weight upon assessments by organizations (that have to make decisions upon these judgments) than a single academic study, however much I respect the academics involved (and sometimes, these academics are working outside their area of research expertise...)

As an aside, here are the impacts of various fiscal experiments in response to a negative shock in a DSGE developed by the IMF... Notice that public investment shows the biggest impact, while under the base assumptions the impacts of transfers and tax cuts are about the same.

Models are built to answer specific questions. If you are interested in traveling to an unknown area of the city, then a street map - a model of the world that answers the how do I get there question - is useful. It is even more useful if it is built specifically for your question. If I am traveling by car, I want to know where the roads are; a map showing all of the underground conduits in the city and nothing else is useless. But suppose I try to use a map designed for a car to find the best route for traveling by bicycle. I can probably find a way there using this map, but it may not give the best possible answer. Some roads may be hard to travel on, it may not show smaller roads that a bicycle rider might want to take, and if it only shows, say, freeways where bikes are prohibited, then it isn't much use at all. But more importantly, it may omit bike routes. I don't know how it is where you live, but here there is a fairly extensive set of bike paths that are separate from the roads used by cars. And these routes can save you lots of time - if you know about them, i.e. if your map shows them. So the best map would show bike routes and roads suitable for bikes, and omit everything else, and it could also show things like elevation changes if it's an area with lots of hills (which isn't usually part of a road map).

My point is this. Since the Great Moderation, and since Lucas convinced everyone that growth is where all the action is, the questions we have been asking have been mainly about growth and, where stabilization is concerned, about the use of optimal monetary policy rules to offset economic fluctuations (and whether monetary policy was responsible for the Great Moderation). Thus, the models were built to look at these particular questions. Nobody, or hardly anybody, was asking questions about the use of fiscal policy to stabilize the economy. Hence very few models were built to look at this issue. Because of that, because we are essentially using a map designed to guide cars to ask questions about the best way to travel by bicycle, we may not get the best answer to the question (that is, the answer to the question of what's the fastest way to get there - akin to asking about the fastest way for the economy to recover - may be wrong).

Why was fiscal policy ignored? Two reasons come to mind. First, we thought the economy was much more flexible than ever before. If a shock hit, even a big one, it might cause a bit of a pause, but the economy would quickly recover and keep on growing. It was the Terminator economy, and there was no shock big enough to keep it from quickly reassembling itself and picking up where it left off. And the two recessions during that time period, along with the experience of 9-11 and Katrina, all led people to believe that the economy had, in fact, undergone a shift and was now more robust than ever. The exact source of this shift was the subject of great debate (was it monetary policy, financial innovation, just good luck, better technology such as computers? the list was very long), but whatever the cause, the shift itself was taken to be permanent. Thus, the need for stabilization policy, fiscal policy in particular, was believed to be greatly diminished. The fact that fiscal policy might be needed in a deep recession because monetary policy would be rendered ineffective was discounted and ignored because the belief was that a deep recession couldn't occur; the economy was too robust and flexible for that.

The second factor was the belief that if stabilization policy is needed, monetary policy was superior in every way to fiscal policy. Monetary policy could be implemented faster, with fewer distortions, and it could be reversed quickly; it was in the hands of independent, public-minded shepherds. There wasn't any dimension, or so it was thought, upon which fiscal policy would be better than monetary policy, and vast amounts of research were devoted to getting the monetary policy component of stabilization policy correct. In the process, fiscal policy was dismissed as irrelevant, at least as a stabilization tool, and largely ignored by researchers (fiscal policy was still used to try to promote growth - that's the whole supply-side argument about cutting taxes, but not as a stabilization tool). That's not to say government spending and taxes weren't included in models and analyzed theoretically, or even empirically, but to the extent that happened, the questions were not focused on how fiscal policy could be used as a stabilization tool -- the models were not constructed to answer this question.

So it shouldn't surprise us that most of the estimates on multipliers come from the old style, many equation, structural macroeconometric models, the traditional approach described above. These models were built at a time when questions about fiscal policy were in the forefront, so answers to fiscal policy questions come out of this framework easily. For this reason, because the models were built with this question in mind, there is an abundance of evidence about fiscal policy multipliers, particularly government spending multipliers, from research conducted during this time period. The next approaches to macro modeling and estimation, the micro-founded models and the VAR models, came into being as the fiscal policy question was falling by the wayside (most VAR models do not even include government spending and taxes). Thus, as you may have noticed, there isn't much in the way of evidence from these models that we can rely upon (and that's not even considering the fact that we have very little data from recessionary episodes to inform us on these issues). The models will be built - I guarantee you they are being built presently - but for now we have what we have.

    Posted by on Thursday, February 5, 2009 at 12:33 AM in Economics, Fiscal Policy, Macroeconomics, Methodology, Monetary Policy | Permalink  TrackBack (0)  Comments (34)


