Ricardian Equivalence, Benchmark Models, and Academics' Response to the Financial Crisis
Simon Wren-Lewis:
Ricardian Equivalence, benchmark models, and academics' response to the financial crisis: In his further thoughts on DSGE models (or perhaps his response to those who took up his first thoughts), Olivier Blanchard says the following:
“For conditional forecasting, i.e. to look for example at the effects of changes in policy, more structural models are needed, but they must fit the data closely and do not need to be religious about micro foundations.”

He suggests that there is wide agreement about the above. I certainly agree, but I’m not sure most academic macroeconomists do. I think they might say that policy analysis done by academics should involve microfounded models. Microfounded models are, by definition, religious about microfoundations and do not fit the data closely. Academics are taught in grad school that all other models are flawed because of the Lucas critique, an argument which assumes that your microfounded model is correctly specified. ...
Let me be more specific. The core macromodel that many academics would write down involves two key behavioural relationships: a Phillips curve and an IS curve. The IS curve is purely forward looking: consumption depends on expected future consumption. It is derived from an infinitely lived representative consumer, which means Ricardian Equivalence holds in this benchmark model. [1]
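For concreteness, here is a minimal sketch of that two-equation benchmark in standard New Keynesian notation (the notation is mine, chosen for illustration; it is not Wren-Lewis's own):

  c_t = E_t c_{t+1} − (1/σ)(i_t − E_t π_{t+1} − ρ)   (IS curve / consumption Euler equation)
  π_t = β E_t π_{t+1} + κ x_t                        (Phillips curve)

where c_t is consumption, i_t the nominal interest rate, π_t inflation, x_t the output gap, E_t the expectation formed at time t, and σ, ρ, β, κ are positive parameters. Because the IS curve is the Euler equation of a single infinitely lived household facing only lump-sum taxes and no borrowing constraint, the timing of taxes never appears in it: Ricardian Equivalence is built into the model's structure.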
Ricardian Equivalence means that a bond-financed tax cut (which will be followed by tax increases) has no impact on consumption or output. One stylised empirical fact that has been confirmed by study after study is that consumers do spend quite a large proportion of any tax cut. That they should do so is not some deep mystery: many consumers are credit constrained, something the benchmark model rules out by assuming an intertemporal consumer who can always borrow. In that particular sense academics’ core model does not fit Blanchard’s prescription that it should “fit the data closely”.
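The logic, in a minimal two-period sketch (again my illustration, not Wren-Lewis's): a household with incomes y_1, y_2, lump-sum taxes T_1, T_2, and interest rate r chooses consumption subject to the intertemporal budget constraint

  c_1 + c_2/(1+r) = (y_1 − T_1) + (y_2 − T_2)/(1+r).

A bond-financed tax cut of Δ today, repaid with interest tomorrow, changes T_1 to T_1 − Δ and T_2 to T_2 + (1+r)Δ, leaving the present value of taxes, T_1 + T_2/(1+r), and therefore the budget set and the chosen consumption path, exactly unchanged. But a credit-constrained household stuck at c_1 = y_1 − T_1 will spend the tax cut more or less one-for-one, which is what the empirical studies find.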
Does this core model influence the way some academics think about policy? I have written about how, before the financial crisis, mainstream macroeconomics neglected the importance of shifting credit conditions for consumption, and I have speculated that this neglect owed something to the insistence on microfoundations. That links the methodology macroeconomists use, or more accurately their belief that other methodologies are unworthy, to policy failures (or at least inadequacies) associated with that crisis and its aftermath.
I wonder if the benchmark model also contributed to the resistance among many economists (not a majority, but a significant minority) to using fiscal stimulus when interest rates hit their lower bound. In the benchmark model, increases in public spending still raise output, but some economists do worry about wasteful expenditure. For these economists, tax cuts, particularly if aimed at non-Ricardian consumers, should be an attractive alternative means of stimulus; but if your benchmark model says they will have no effect, I wonder whether this (consciously or unconsciously) biases you against such measures.
In my view, the benchmark models that academic macroeconomists carry round in their heads should be exactly the kind Blanchard describes: aggregate equations which are consistent with the data, and which may or may not be consistent with current microfoundations. They are the ‘useful models’ that Blanchard talked about... These core models should be under constant challenge from partial equilibrium analysis, estimation in all its forms, and analysis using microfoundations. But when push comes to shove, policy analysis should be done with models that are the best we have at meeting all those challenges, and not simply with models that have consistent microfoundations.
Posted by Mark Thoma on Tuesday, October 11, 2016 at 09:08 AM in Economics, Macroeconomics, Methodology