The recent blow-up surrounding Niall Ferguson's comments on Keynes' concern for long-run issues prompted my latest column:
The claim that Keynesians are indifferent to the long-run is one of many myths about Keynesian economics.
Paul Krugman on how to tell when someone is "pretending to be an authority on economics":
Keynes, Keynesians, the Long Run, and Fiscal Policy: One dead giveaway that someone pretending to be an authority on economics is in fact faking it is misuse of the famous Keynes line about the long run. Here’s the actual quote:
But this long run is a misleading guide to current affairs. In the long run we are all dead. Economists set themselves too easy, too useless a task if in tempestuous seasons they can only tell us that when the storm is long past the ocean is flat again.
As I’ve written before, Keynes’s point here is that economic models are incomplete, suspect, and not much use if they can’t explain what happens year to year, but can only tell you where things will supposedly end up after a lot of time has passed. It’s an appeal for better analysis, not for ignoring the future; and anyone who tries to make it into some kind of moral indictment of Keynesian thought has forfeited any right to be taken seriously. ...
I thought the target of these remarks had forfeited any right to be taken seriously long ago (except, of course and unfortunately, by Very Serious People). [Krugman goes on to tackle several other topics.]
This is very wonkish, but it's also very important. The issue is whether DSGE models used for policy analysis can properly capture the relative costs of deviations of inflation and output from target. Simon Wren-Lewis argues -- and I very much agree -- that the standard models are not a very good guide to policy because they vastly overstate the cost of inflation relative to the cost of output (and employment) fluctuations (see the original for the full argument and links to source material):
Microfounded Social Welfare Functions, by Simon Wren-Lewis: More on Beauty and Truth for economists
... Woodford’s derivation of social welfare functions from representative agent’s utility ... can tell us some things that are interesting. But can it provide us with a realistic (as opposed to model consistent) social welfare function that should guide many monetary and fiscal policy decisions? Absolutely not. As I noted in that recent post, these derived social welfare functions typically tell you that deviations of inflation from target are much more important than output gaps - ten or twenty times more important. If this was really the case, and given the uncertainties surrounding measurement of the output gap, it would be tempting to make central banks pure (not flexible) inflation targeters - what Mervyn King calls inflation nutters.
Where does this result come from? ... Many DSGE models use sticky prices and not sticky wages, so labour markets clear. They tend, partly as a result, to assume labour supply is elastic. Gaps between the marginal product of labor and the marginal rate of substitution between consumption and leisure become small. Canzoneri and coauthors show here how sticky wages and more inelastic labour supply will increase the cost of output fluctuations... Canzoneri et al argue that labour supply inelasticity is more consistent with micro evidence.
Just as important, I would suggest, is heterogeneity. The labour supply of many agents is largely unaffected by recessions, while others lose their jobs and become unemployed. Now this will matter in ways that models in principle can quantify. Large losses for a few are more costly than the same aggregate loss equally spread. Yet I believe even this would not come near to describing the unhappiness the unemployed actually feel (see Chris Dillow here). For many there is a psychological/social cost to unemployment that our standard models just do not capture. Other evidence tends to corroborate this happiness data.
So there are two general points here. First, simplifications made to ensure DSGE analysis remains tractable tend to diminish the importance of output gap fluctuations. Second, the simple microfoundations we use are not very good at capturing how people feel about being unemployed. What this implies is that conclusions about inflation/output trade-offs, or the cost of business cycles, derived from microfounded social welfare functions in DSGE models will be highly suspect, and almost certainly biased.
Now I do not want to use this as a stick to beat up DSGE models, because often there is a simple and straightforward solution. Just recalculate any results using an alternative social welfare function where the cost of output gaps is equal to the cost of inflation. For many questions addressed by these models results will be robust, which is worth knowing. If they are not, that is worth knowing too. So it's a virtually costless thing to do, with clear benefits.
Yet it is rarely done. I suspect the reason why is that a referee would say ‘but that ad hoc (aka more realistic) social welfare function is inconsistent with the rest of your model. Your complete model becomes internally inconsistent, and therefore no longer properly microfounded.’ This is so wrong. It is modelling what we can microfound, rather than modelling what we can see. Let me quote Caballero...“[This suggests a discipline that] has become so mesmerized with its own internal logic that it has begun to confuse the precision it has achieved about its own world with the precision that it has about the real one.”
As I have argued before (post here, article here), those using microfoundations should be pragmatic about the need to sometimes depart from those microfoundations when there are clear reasons for doing so. (For an example of this pragmatic approach to social welfare functions in the context of US monetary policy, see this paper by Chen, Kirsanova and Leith.) The microfoundation purist position is a snake charmer, and has to be faced down.
Two essays on the state of macroeconomics:
First, David Romer argues our recent troubles are an extreme version of an ongoing problem:
... As I will describe, my reading of the evidence is that the events of the past few years are not an aberration, but just the most extreme manifestation of a broader pattern. And the relatively modest changes of the type discussed at the conference, and that in some cases policymakers are putting into place, are helpful but unlikely to be enough to prevent future financial shocks from inflicting large economic harms.
Thus, I believe we should be asking whether there are deeper reforms that might have a large effect on the size of the shocks emanating from the financial sector, or on the ability of the economy to withstand those shocks. But there has been relatively little serious consideration of ideas for such reforms, not just at this conference but in the broader academic and policy communities. ...
He goes on to describe some changes he'd like to see, for example:
I was disappointed to see little consideration of much larger financial reforms. Let me give four examples of possible types of larger reforms:
- There were occasional mentions of very large capital requirements. For example, Allan Meltzer noted that at one time 25 percent capital was common for banks. Should we be moving to such a system?
- Amir Sufi and Adair Turner talked about the features of debt contracts that make them inherently prone to instability. Should we be working aggressively to promote more indexation of debt contracts, more equity-like contracts, and so on?
- We can see the costs that the modern financial system has imposed on the real economy. It is not immediately clear that the benefits of the financial innovations of recent decades have been on a scale that warrants those costs. Thus, might a much simpler, 1960s- or 1970s-style financial system be better than what we have now?
- The fact that shocks emanating from the financial system sometimes impose large costs on the rest of the economy implies that there are negative externalities to some types of financial activities or financial structures, which suggests the possibility of Pigovian taxes.
So, should there be substantial taxes on certain aspects of the financial system? If so, what should be taxed – debt, leverage, size, other indicators of systemic risk, a combination, or something else altogether?
Larger-scale solutions on the macroeconomic side ...
After a long discussion, he concludes with:
After five years of catastrophic macroeconomic performance, “first steps and early lessons” – to quote the conference title – is not what we should be aiming for. Rather, we should be looking for solutions to the ongoing current crisis and strong measures to minimize the chances of anything similar happening again. I worry that the reforms we are focusing on are too small to do that, and that what is needed is a more fundamental rethinking of the design of our financial system and of our frameworks for macroeconomic policy.
Second, Joe Stiglitz:
In analyzing the most recent financial crisis, we can benefit somewhat from the misfortune of recent decades. The approximately 100 crises that have occurred during the last 30 years—as liberalization policies became dominant—have given us a wealth of experience and mountains of data. If we look over a 150 year period, we have an even richer data set.
With a century and a half of clear, detailed information on crisis after crisis, the burning question is not "How did this happen?" but "How did we ignore that long history and think that we had solved the problems of the business cycle?" Believing that we had made big economic fluctuations a thing of the past took a remarkable amount of hubris....
In his lengthy essay, he goes on to discuss:
Markets are not stable, efficient, or self-correcting
- The models that focused on exogenous shocks simply misled us—the majority of the really big shocks come from within the economy.
- Economies are not self-correcting.
More than deleveraging, more than a balance sheet crisis: the need for structural transformation
- The fact that things have often gone badly in the aftermath of a financial crisis doesn’t mean they must go badly.
Reforms that are, at best, half-way measures
- The reforms undertaken so far have only tinkered at the edges.
- The crisis has brought home the importance of financial regulation for macroeconomic stability.
Deficiencies in reforms and in modeling
- The importance of credit
- A focus on the provision of credit has neither been at the center of policy discourse nor of the standard macro-models.
- There is also a lack of understanding of different kinds of finance.
- Flawed models not only lead to flawed policies, but also to flawed policy frameworks.
- Should monetary policy focus just on short term interest rates?
- Price versus quantitative interventions
Stiglitz ends with:
Take this chance to revolutionize flawed models
It should be clear that we could have done much more to prevent this crisis and to mitigate its effects. It should be clear too that we can do much more to prevent the next one. Still, through this conference and others like it, we are at least beginning to clearly identify the really big market failures, the big macroeconomic externalities, and the best policy interventions for achieving high growth, greater stability, and a better distribution of income.
To succeed, we must constantly remind ourselves that markets on their own are not going to solve these problems, and neither will a single intervention like short-term interest rates. Those facts have been proven time and again over the last century and a half.
And as daunting as the economic problems we now face are, acknowledging this will allow us to take advantage of the one big opportunity this period of economic trauma has afforded: namely, the chance to revolutionize our flawed models, and perhaps even exit from an interminable cycle of crises.
A New and Improved Macroeconomics, by Mark Thoma: Macroeconomics has not fared well in recent years. The failure of standard macroeconomic models during the financial crisis provided the first big blow to the profession, and the recent discovery of the errors and questionable assumptions in the work of Reinhart and Rogoff further undermined the faith that people have in our models and empirical methods.
What will it take for the macroeconomics profession to do better? ...
The blow-up over the Reinhart-Rogoff results reminds me of a point I’ve been meaning to make about our ability to use empirical methods to make progress in macroeconomics. This isn't about the computational mistakes that Reinhart and Rogoff made, though those are certainly important, especially in small samples, it's about the quantity and quality of the data we use to draw important conclusions in macroeconomics.
Everybody has been highly critical of theoretical macroeconomic models, DSGE models in particular, and for good reason. But the imaginative construction of theoretical models is not the biggest problem in macro – we can build reasonable models to explain just about anything. The biggest problem in macroeconomics is the inability of econometricians of all flavors (classical, Bayesian) to definitively choose one model over another, i.e. to sort between these imaginative constructions. We like to think of ourselves as scientists, but if data can’t settle our theoretical disputes – and it doesn’t appear that it can – then our claim to scientific validity has little or no merit.
There are many reasons for this. For example, the use of historical rather than “all else equal” laboratory/experimental data makes it difficult to figure out whether a particular relationship we find in the data reveals an important truth or is just a chance run that mimics a causal relationship. If we could do repeated experiments, or compare data across countries (or other jurisdictions) without worrying about the “all else equal” assumption, we could perhaps sort this out; it would be like having repeated experiments. But, unfortunately, there are too many institutional differences and common shocks across countries to reliably treat each country as an independent, all else equal experiment. Without repeated experiments – with just one set of historical data for the US to rely upon – it is extraordinarily difficult to tell the difference between a spurious correlation and a true, noteworthy relationship in the data.
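The spurious-correlation problem is easy to demonstrate with a minimal simulation (my sketch, with arbitrary parameter values, not from the original post): two completely independent random walks, observed over a sample of roughly the size discussed below, will frequently appear strongly correlated.

```python
import numpy as np

# Two independent random walks -- no causal link whatsoever -- observed
# over ~28 years of quarterly data (112 points). We count how often an
# unrelated pair nonetheless shows a "large" sample correlation.
rng = np.random.default_rng(0)
n_sims, T = 2000, 112

big_corr = 0
for _ in range(n_sims):
    x = np.cumsum(rng.standard_normal(T))  # random walk 1
    y = np.cumsum(rng.standard_normal(T))  # random walk 2, independent of x
    if abs(np.corrcoef(x, y)[0, 1]) > 0.5:
        big_corr += 1

frac = big_corr / n_sims
print(f"share of unrelated pairs with |corr| > 0.5: {frac:.2f}")
```

With trending (unit-root) series a substantial share of unrelated pairs clears that bar – the classic Granger–Newbold spurious-regression point – and with only one historical sample there is no way to tell such a pair from a genuine relationship.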
Even so, if we had a very, very long time-series for a single country, and if certain regularity conditions persisted over time (e.g. no structural change), we might be able to answer important theoretical and policy questions (if the same policy is tried again and again over time within a country, we can sort out the random and the systematic effects). Unfortunately, the time period covered by a typical data set in macroeconomics is relatively short (so that very few useful policy experiments are contained in the available data, e.g. there are very few data points telling us how the economy reacts to fiscal policy in deep recessions).
There is another problem with using historical as opposed to experimental data, testing theoretical models against data the researcher knows about when the model is built. In this regard, when I was a new assistant professor Milton Friedman presented some work at a conference that impressed me quite a bit. He resurrected a theoretical paper he had written 25 years earlier (it was his plucking model of aggregate fluctuations), and tested it against the data that had accumulated in the time since he had published his work. It’s not really fair to test a theory against historical macroeconomic data, we all know what the data say and it would be foolish to build a model that is inconsistent with the historical data it was built to explain – of course the model will fit the data, who would be impressed by that? But a test against data that the investigator could not have known about when the theory was formulated is a different story – those tests are meaningful (Friedman’s model passed the test using only the newer data).
As a young time-series econometrician struggling with data/degrees of freedom issues I found this encouraging. So what if in 1986 – when I finished graduate school – there were only 28 years of quarterly observations for macro variables (112 total observations; reliable data on money, which I almost always needed, doesn’t begin until 1959). By, say, the end of 2012 there would be almost double that amount (216 versus 112!!!). Asymptotic (plim-type) results here we come! (Switching to monthly data doesn’t help much since it’s the span of the data – the distance between the beginning and the end of the sample – rather than the frequency at which the data are sampled that determines many of the “large-sample results”).
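The span-versus-frequency point can be made concrete with a small Monte Carlo (a sketch with made-up parameter values): for a random walk with drift, the precision of the estimated drift depends on how many years the sample covers, not on how finely those years are sampled, because the estimate depends only on the endpoints.

```python
import numpy as np

rng = np.random.default_rng(1)

def drift_se(span_years, obs_per_year, n_sims=4000, mu=0.02, sigma=0.10):
    """Monte Carlo standard error of the estimated drift of a random walk
    with annual drift mu and annual innovation s.d. sigma (illustrative
    values). The drift estimate is (x_T - x_0) / span."""
    n = span_years * obs_per_year
    steps = rng.normal(mu / obs_per_year,
                       sigma / np.sqrt(obs_per_year),
                       size=(n_sims, n))
    estimates = steps.sum(axis=1) / span_years
    return estimates.std()

se_quarterly = drift_se(28, 4)   # 28 years of quarterly data
se_monthly = drift_se(28, 12)    # same span sampled monthly: no real gain
se_longer = drift_se(54, 4)      # longer span: genuinely more precise
print(se_quarterly, se_monthly, se_longer)
```

In this setup the theoretical standard error is sigma/sqrt(span), so tripling the sampling frequency leaves precision essentially unchanged, while roughly doubling the span shrinks the standard error by about 1/sqrt(2).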
By today, I thought, I would have almost double the data I had back then and that would improve the precision of tests quite a bit. I could also do what Friedman did, take really important older papers that give us results “everyone knows” and see if they hold up when tested against newer data.
It didn’t work out that way. There was a big change in the Fed’s operating procedure in the early 1980s, and because of this structural break today 1984 is a common starting point for empirical investigations (start dates can be anywhere in the 79-84 range though later dates are more common). Data before this time-period are discarded.
So, here we are 25 years or so later and macroeconomists don’t have any more data at our disposal than we did when I was in graduate school. And if the structure of the economy keeps changing – as it will – the same will probably be true 25 years from now. We will either have to model the structural change explicitly (which isn’t easy, and attempts to model structural breaks often induce as much uncertainty as clarity), or continually discard historical data as time goes on (maybe big data, digital technology, theoretical advances, etc. will help?).
The point is that for a variety of reasons – the lack of experimental data, small data sets, and important structural change foremost among them – empirical macroeconomics is not able to definitively say which competing model of the economy best explains the data. There are some questions we’ve been able to address successfully with empirical methods, e.g., there has been a big change in views about the effectiveness of monetary policy over the last few decades driven by empirical work. But for the most part empirical macro has not been able to settle important policy questions. The debate over government spending multipliers is a good example. Theoretically the multiplier can take a range of values from small to large, and even though most theoretical models in use today say that the multiplier is large in deep recessions, ultimately this is an empirical issue. I think the preponderance of the empirical evidence shows that multipliers are, in fact, relatively large in deep recessions – but you can find whatever result you like and none of the results are sufficiently definitive to make this a fully settled issue.
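For readers who want a sense of why the multiplier can "take a range of values" even before the empirics begin, the textbook expenditure multiplier (a deliberately simple benchmark of my own choosing, not the DSGE models the empirical debate is actually about) already shows how sensitive the number is to a few behavioral parameters:

```python
def keynesian_multiplier(mpc, tax_rate=0.0, import_share=0.0):
    """Textbook expenditure multiplier: 1 / (1 - MPC*(1 - t) + m).

    Illustrative only -- the empirical debate is precisely about how far
    real-world multipliers, especially in deep recessions, depart from
    this kind of frictionless calculation.
    """
    leakage = 1 - mpc * (1 - tax_rate) + import_share
    return 1 / leakage

print(keynesian_multiplier(0.8))             # closed economy, no taxes: multiplier of 5
print(keynesian_multiplier(0.8, 0.25, 0.1))  # taxes and imports shrink it to 2
```

Small, plausible changes in the marginal propensity to consume or in leakages move the answer a lot, which is one reason the data, rather than theory alone, has to settle the question.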
I used to think that the accumulation of data along with ever improving empirical techniques would eventually allow us to answer important theoretical and policy questions. I haven’t completely lost faith, but it’s hard to be satisfied with our progress to date. It’s even more disappointing to see researchers overlooking these well-known, obvious problems – for example the lack of precision and sensitivity to data errors that come with the reliance on just a few observations – to oversell their results.
As noted below, it is a slow day, but this is well worth reading (there's quite a bit more in the original post):
The Price Is Wrong, by Paul Krugman: It’s a slow morning on the economic news front, as we wait for various euro shoes to drop, so I thought I’d share a meditation I’ve been having on the diagnosis and misdiagnosis of the Lesser Depression. ...
So, start with our big problem, which is mass unemployment. Basic supply and demand analysis says that ... prices are supposed to rise or fall to clear markets. So what’s with this apparent massive and persistent excess supply of labor? In general, market disequilibrium is a sign of prices out of whack... The big divide comes over the question of which price is wrong.
As I see it, the whole structural/classical/Austrian/supply-side/whatever side of this debate basically believes that the problem lies in the labor market. ... For some reason, they would argue, wages are too high... Some of them accept the notion that it’s because of downward nominal wage rigidity; more, I think, believe that workers are being encouraged to hold out for unsustainable wages by moocher-friendly programs like food stamps, unemployment benefits, disability insurance, and whatever.
As regular readers know, I find this prima facie absurd — it’s essentially the claim that soup kitchens caused the Great Depression. ...
So what’s the alternative view? It’s basically the notion that the interest rate is wrong — that given the overhang of debt and other factors depressing private demand, real interest rates would have to be deeply negative to match desired saving with desired investment at full employment. And real rates can’t go that negative because expected inflation is low and nominal rates can’t go below zero: we’re in a liquidity trap. ..
There are strong policy implications of these two views. If you think the problem is that wages are too high, your solution is that we need to be meaner to workers — cut off their unemployment insurance, make them hungry by cutting off food stamps, so that they have no alternative but to do whatever it takes to get jobs, and wages fall. If you think the problem is the zero lower bound on interest rates, you think that this kind of solution wouldn’t just be cruel, it would make the economy worse, both because cutting workers’ incomes would reduce demand and because deflation would increase the burden of debt.
What my side of the debate would call for, instead, is a reduction in the real interest rate, if possible, by raising expected inflation; and failing that, more government spending to increase demand and put idle resources to work. ...
So yes, the price is wrong — but it’s a terrible, disastrous mistake to focus on the wrong wrong price.
Why should workers bear the burden of a recession they had nothing to do with causing? We should do our best to protect vulnerable workers and their families, and if it comes at the expense of those who were responsible for the boom and bust, I can live with that (and no, the cause wasn't poor people trying to buy houses -- people on the right who are afraid they will be asked to pay for their poor choices, or who want to pursue an anti-government, don't-help-the-unfortunate-with-my-hard-earned-investment-income agenda, have tried to make this claim, and they are still at it, but it is "prima facie absurd").
Watching John Williams give this paper:
Measuring the Effect of the Zero Lower Bound on Medium- and Longer-Term Interest Rates, by Eric T. Swanson and John C. Williams, Federal Reserve Bank of San Francisco, January 2013: Abstract The federal funds rate has been at the zero lower bound for over four years, since December 2008. According to many macroeconomic models, this should have greatly reduced the effectiveness of monetary policy and increased the efficacy of fiscal policy. However, standard macroeconomic theory also implies that private-sector decisions depend on the entire path of expected future short-term interest rates, not just the current level of the overnight rate. Thus, interest rates with a year or more to maturity are arguably more relevant for the economy, and it is unclear to what extent those yields have been constrained. In this paper, we measure the effects of the zero lower bound on interest rates of any maturity by estimating the time-varying high-frequency sensitivity of those interest rates to macroeconomic announcements relative to a benchmark period in which the zero bound was not a concern. We find that yields on Treasury securities with a year or more to maturity were surprisingly responsive to news throughout 2008–10, suggesting that monetary and fiscal policy were likely to have been about as effective as usual during this period. Only beginning in late 2011 does the sensitivity of these yields to news fall closer to zero. We offer two explanations for our findings: First, until late 2011, market participants expected the funds rate to lift off from zero within about four quarters, minimizing the effects of the zero bound on medium- and longer-term yields. Second, the Fed’s unconventional policy actions seem to have helped offset the effects of the zero bound on medium- and longer-term rates.
There has been a debate in macroeconomics over whether sticky prices -- the key feature of New Keynesian models -- are actually as sticky as assumed, and how large the costs associated with price stickiness actually are. This paper finds "evidence that sticky prices are indeed costly":
Are Sticky Prices Costly? Evidence From The Stock Market, by Yuriy Gorodnichenko and Michael Weber, NBER Working Paper No. 18860, February 2013 [open link]: We propose a simple framework to assess the costs of nominal price adjustment using stock market returns. We document that, after monetary policy announcements, the conditional volatility rises more for firms with stickier prices than for firms with more flexible prices. This differential reaction is economically large as well as strikingly robust to a broad array of checks. These results suggest that menu costs---broadly defined to include physical costs of price adjustment, informational frictions, etc.---are an important factor for nominal price rigidity. We also show that our empirical results are qualitatively and, under plausible calibrations, quantitatively consistent with New Keynesian macroeconomic models where firms have heterogeneous price stickiness. Since our approach is valid for a wide variety of theoretical models and frictions preventing firms from price adjustment, we provide "model-free" evidence that sticky prices are indeed costly.
Data, Stimulus, and Human Nature, by Paul Krugman: David Brooks writes about the limitations of Big Data, and makes some good points. But he goes astray, I think, when he touches on a subject near and dear to my own data-driven heart:
For example, we’ve had huge debates over the best economic stimulus, with mountains of data, and as far as I know not a single major player in this debate has been persuaded by data to switch sides.
Actually, he’s not quite right there, as I’ll explain in a minute. But it’s certainly true that neither stimulus advocates nor hard-line stimulus opponents have changed their positions. The question is, does this say something about the limits of data — or is it just a commentary on human nature, especially in a highly politicized environment?
For the truth is that there were some clear and very different predictions from each side of the debate... On these predictions, the data have spoken clearly; the problem is that people don’t want to hear..., and the fact that they don’t happen has nothing to do with the limitations of data. ...
That said, if you look at players in the macro debate who would not face huge personal and/or political penalties for admitting that they were wrong, you actually do see data having a considerable impact. Most notably, the IMF has responded to the actual experience of austerity by conceding that it was probably underestimating fiscal multipliers by a factor of about 3.
So yes, it has been disappointing to see so many people sticking to their positions on fiscal policy despite overwhelming evidence that those positions are wrong. But the fault lies not in our data, but in ourselves.
I'll just add that when it comes to the debate over the multiplier and the macroeconomic data used to try to settle the question, the term "Big Data" doesn't really apply. If we actually had "Big Data," we might be able to get somewhere but as it stands -- with so little data and so few relevant historical episodes with similar policies -- precise answers are difficult to ascertain. And it's even worse than that. Let me point to something David Card said in an interview I posted yesterday:
I think many people are concerned that much of the research they see is biased and has a specific agenda in mind. Some of that concern arises because of the open-ended nature of economic research. To get results, people often have to make assumptions or tweak the data a little bit here or there, and if somebody has an agenda, they can inevitably push the results in one direction or another. Given that, I think that people have a legitimate concern about researchers who are essentially conducting advocacy work.
If we had the "Big Data" we need to answer these questions, this would be less of a problem. But with quarterly data from 1960 (when money data starts; you can go back to 1947 otherwise), or since 1982 (to avoid big structural changes and changes in Fed operating procedures), or even monthly data (if you don't need variables like GDP), there isn't as much precision as needed to resolve these questions (50 years of quarterly data is only 200 observations). There is also a lot of freedom to steer the results in a particular direction, and we have to rely upon the integrity of researchers to avoid pushing a particular agenda. Most play it straight up, the answers are however they come out, but there are enough voices with agendas -- particularly, though not exclusively, from think tanks, etc. -- to cloud the issues and make it difficult for the public to separate the honest work from the agenda-based, one-sided, sometimes dishonest presentations. And there are also the issues noted above about people sticking to their positions, in their view honestly even if it is the result of data-mining, changing assumptions until the results come out "right," etc., because the data doesn't provide enough clarity to force them to give up their beliefs (in which they've invested considerable effort).
So I wish we had "Big Data," and not just a longer time-series of macro data; it would also be useful to re-run the economy hundreds or thousands of times and evaluate monetary and fiscal policies across these experiments. With just one run of the economy, you can't always be sure that the uptick you see in historical data after, say, a tax cut is from the treatment or just randomness (or driven by something else). With many, many runs of the economy that can be sorted out (cross-country comparisons can help, but the all else equal part is never satisfied, making the comparisons suspect).
Despite a few research attempts such as the Billion Prices Project, "Little Data" -- and all the problems that come with it -- is a better description of empirical macroeconomics.
Ed Phelps does not like rational expectations:
Expecting the Unexpected: An Interview With Edmund Phelps, by Caroline Baum, Commentary, Bloomberg: ...I talked with [Edmund Phelps] ... about his views on rational expectations...
Q: So how did adaptive expectations morph into rational expectations?
A: The "scientists" from Chicago and MIT came along to say, we have a well-established theory of how prices and wages work. Before, we used a rule of thumb to explain or predict expectations: Such a rule is picked out of the air. They said, let's be scientific. In their mind, the scientific way is to suppose price and wage setters form their expectations with every bit as much understanding of markets as the expert economist seeking to model, or predict, their behavior. ...
Q: And what's the consequence of this putsch?
A: Craziness for one thing. You’re not supposed to ask what to do if one economist has one model of the market and another economist a different model. The people in the market cannot follow both economists at the same time. One, if not both, of the economists must be wrong. ... Roman Frydman has made his career uncovering the impossibility of rational expectations in several contexts. ...
When I was getting into economics in the 1950s, we understood there could be times when a craze would drive stock prices very high. Or the reverse... But now that way of thinking is regarded by the rational expectations advocates as unscientific.
By the early 2000s, Chicago and MIT were saying we've licked inflation and put an end to unhealthy fluctuations -- only the healthy "vibrations" in rational expectations models remained. Prices are scientifically determined, they said. Expectations are right and therefore can't cause any mischief.
At a celebration in Boston for Paul Samuelson in 2004 or so, I had to listen to Ben Bernanke and Olivier Blanchard ... crowing that they had conquered the business cycle of old by introducing predictability into monetary policy making, which made it possible for the public to stop generating baseless swings in their expectations and adopt rational expectations...
Q: And how has that worked out?
A: Not well! ...[There's more in the full interview.]
Simon Wren-Lewis argues that the "crisis view" of change in macroeconomic theory is too simple:
Misinterpreting the history of macroeconomic thought, mainly macro: An attractive way to give a broad sweep over the history of macroeconomic ideas is to talk about a series of reactions to crises (see Matthew Klein and Noah Smith). However it is too simple, and misleads as a result. The Great Depression led to Keynesian economics. So far so good. The inflation of the 1970s led to ? Monetarism - well maybe in terms of a few brief policy experiments in the early 1980s, but Monetarist-Keynesian debates were going strong before the 1970s. The New Classical revolution? Well rational expectations can be helpful in adapting the Phillips curve to explain what happened in the 1970s, but I’m not sure that was the main reason why the idea was so rapidly adopted. The New Classical revolution was much more than rational expectations.
The attempt gets really off beam if we try and suggest that the rise of RBC models was a response to the inflation of the 1970s. I guess you could argue that the policy failures of the 1970s were an example of the Lucas critique, and that to avoid similar mistakes macroeconomists needed to develop microfounded models. But if explaining the last crisis really was the prime motivation, would you develop models in which there was no Phillips curve, and which made no attempt to explain the inflation of the 1970s (or indeed, the previous crisis - the Great Depression)?
What the ‘macroeconomic ideas develop as a response to crises’ story leaves out is the rest of economics, and ideology. The Keynesian revolution (by which I mean macroeconomics after the second world war) can be seen as a methodological revolution. Models were informed by theory, but their equations were built to explain the data. Time series econometrics played an essential role. However this appeared to be different from how other areas of the discipline worked. In these other areas of economics, explaining behavior in terms of optimization by individual agents was all important. This created a tension, and a major divide within economics as a whole. Macro appeared quite different from micro.
A particular manifestation of this was the constant question: where is the source of the market failure that gives rise to the business cycle? Most macroeconomists replied sticky prices, but this prompted the follow up question: why do rational firms or workers choose not to change their prices? The way most macroeconomists at the time chose to answer this was that expectations were slow to adjust. It was a disastrous choice, but I suspect one that had very little to do with the nature of Keynesian theory, and rather more to do with the analytical convenience of adaptive expectations. Anyhow, that is another story.
The New Classical revolution was in part a response to that tension. In methodological terms it was a counter revolution, trying to take macroeconomics away from the econometricians, and bring it back to something microeconomists could understand. Of course it could point to policy in the 1970s as justification, but I doubt that was the driving force. I also think it is difficult to fully understand the New Classical revolution, and the development of RBC models, without adding in some ideology.
Does this have anything to tell us about how macroeconomics will respond to the Great Recession? I think it does. If you bought the ‘responding to the last crisis’ narrative, you would expect to see some sea change, akin to Keynesian economics or the New Classical revolution. I suspect you would be disappointed. While I see plenty of financial frictions being added to DSGE models, I do not see any significant body of macroeconomists wanting to ply their trade in a radically different way. If this crisis is going to generate a new revolution in macroeconomics, where are the revolutionaries? However, if you read the history of macro thought the way I do, then macro crises are neither necessary nor sufficient for revolutions in macro thought. Perhaps there was only one real revolution, and we have been adjusting to the tensions it created ever since.
Let me follow up on the ideological point with an example. Prior to the New Classical revolution in the 1970s (which, contra some recent descriptions, is different from DSGE models), the people who believe that government intervention is bad had a problem. It was very clear in the data that there was a positive correlation between changes in the money supply and changes in employment and real income. Further, though this is harder to establish, the relationship appeared causal. Money causes income, and this allowed government to stabilize the economy.
The (neo)classical model, with its vertical AS curve, could not explain the positive money-income correlation in the data. In the typical classical formulation, so long as prices are perfectly flexible and all markets clear at all points in time, the economy is always in long-run equilibrium. Thus, in these models the prediction is a zero correlation between money and income. But it wasn't zero.
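The zero-correlation prediction and its sticky-price alternative can be illustrated with a small simulation (my own sketch with arbitrary parameters, not part of the original argument): in the flexible-price case money growth feeds entirely into prices, so it is uncorrelated with output; with some price stickiness, part of a money shock shows up in output instead, producing the positive money-income correlation seen in the data.

```python
import numpy as np

rng = np.random.default_rng(1)
T = 400
money_growth = rng.normal(0.0, 1.0, T)   # exogenous money-supply shocks

# Classical case: vertical AS curve -- output is pinned at potential,
# so money growth feeds one-for-one into prices (quantity theory).
output_classical = rng.normal(0.0, 1.0, T)   # output noise unrelated to money
inflation_classical = money_growth

# Sticky-price case: a fraction of each money shock moves output instead.
pass_through = 0.5
output_sticky = (1 - pass_through) * money_growth + rng.normal(0.0, 0.5, T)

def corr(a, b):
    return np.corrcoef(a, b)[0, 1]

print(f"classical corr(money, output): {corr(money_growth, output_classical):+.2f}")
print(f"sticky    corr(money, output): {corr(money_growth, output_sticky):+.2f}")
```

The first correlation is near zero, the second is strongly positive, which is the data feature the classical model could not match.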
However, a very clever idea from Robert Lucas in the 1970s allowed this correlation to be explained without admitting government can do good, i.e. without admitting that government can stabilize the economy using monetary policy. This is the ideological part -- a way to explain the data without acknowledging a role for government at the same time. I can't say that Lucas approached the problem in this way, i.e. that he started out with the ideological goal of explaining the money-income correlation without allowing a role for government. Maybe it arose in a flash of brilliance completely unconnected to ideological concerns. But I find it hard to explain why this model came about in the form it did without ideology, and the view of government the New Classical model supported surely didn't hurt its acceptance at places like the University of Chicago (as it existed then).
From an interview of MIT's Andrew Lo:
Q: Many people believe that the financial crisis revealed major shortcomings in the discipline of economics, and one of the goals of your book is to consider what economic theory tells us about the links between finance and the rest of the economy. Do you feel that economists understand enough about the nature of financial instability or liquidity crises?
A: I think that the financial crisis was an important wake-up call to all economists that we need to change the way we approach our discipline. While economics has made great strides in modeling liquidity risk, financial contagion, and market bubbles and crashes, we haven't done a very good job of integrating these models into broader macroeconomic policy tools. That's the focus of a lot of recent activity in macro and financial economics and the hope is that we'll be able to do better in the near future.
Q: Let me continue briefly on this thread. One topic that has been particularly controversial concerns the efficient-market hypothesis (EMH). Burton Malkiel discusses the issue in his chapter in Rethinking the Financial Crisis, but I wanted to ask your opinion of this idea that EMH fed a hands-off regulatory approach that ignored concerns about faulty asset pricing.
A: There's no doubt that EMH and its macroeconomic cousin, Rational Expectations, played a significant role in how regulators approached their responsibilities. However, we should keep in mind that market efficiency isn't wrong; it's just incomplete. Market participants do behave rationally under normal economic conditions, hence the current regulatory framework does serve a useful purpose during these periods. But during periods of extreme growth and decline, human behavior is not the same, and much of economic theory and regulatory policy does not yet reflect this new perspective of "Adaptive Markets."
Something to think about:
The reason we lose at games, EurekAlert: Writing in PNAS, a University of Manchester physicist has discovered that some games are simply impossible to fully learn, or too complex for the human mind to understand.
Dr Tobias Galla from The University of Manchester and Professor Doyne Farmer from Oxford University and the Santa Fe Institute, ran thousands of simulations of two-player games to see how human behavior affects their decision-making.
In simple games with a small number of moves, such as Noughts and Crosses, the optimal strategy is easy to guess, and the game quickly becomes uninteresting.
However, when games become more complex and there are a lot of moves, such as in chess, the board game Go, or complex card games, the academics argue that players' actions become less rational and that it is hard to find optimal strategies.
This research could also have implications for the financial markets. Many economists base financial predictions of the stock market on equilibrium theory – assuming that traders are infinitely intelligent and rational.
This, the academics argue, is rarely the case and could lead to predictions of how markets react being wildly inaccurate.
Much of traditional game theory, the basis for strategic decision-making, is based on the equilibrium point – players or workers having a deep and perfect knowledge of what they are doing and of what their opponents are doing.
Dr Galla, from the School of Physics and Astronomy, said: "Equilibrium is not always the right thing you should look for in a game."
"In many situations, people do not play equilibrium strategies, instead what they do can look like random or chaotic for a variety of reasons, so it is not always appropriate to base predictions on the equilibrium model."
"With trading on the stock market, for example, you can have thousands of different stock to choose from, and people do not always behave rationally in these situations or they do not have sufficient information to act rationally. This can have a profound effect on how the markets react."
"It could be that we need to drop these conventional game theories and instead use new approaches to predict how people might behave."
Together with a Manchester-based PhD student the pair are looking to expand their study to multi-player games and to cases in which the game itself changes with time, which would be a closer analogy of how financial markets operate.
Preliminary results suggest that as the number of players increases, the chances that equilibrium is reached decrease. Thus for complicated games with many players, such as financial markets, equilibrium is even less likely to be the full story.
Who should be blamed for the slow recovery?:
The Big Fail by Paul Krugman, Commentary, NY Times: It’s that time again: the annual meeting of the American Economic Association and affiliates... And this year, as in past meetings, there is one theme dominating discussion: the ongoing economic crisis.
This isn’t how things were supposed to be. If you had polled the economists attending this meeting three years ago, most of them would surely have predicted that by now we’d be talking about how the great slump ended, not why it still continues.
So what went wrong? The answer, mainly, is the triumph of bad ideas.
It’s tempting to argue that the economic failures of recent years prove that economists don’t have the answers. But the truth is ... standard economics offered good answers, but political leaders — and all too many economists — chose to forget or ignore what they should have known. ...
A smaller financial shock, like the dot-com bust at the end of the 1990s, can be met by cutting interest rates. But the crisis of 2008 was far bigger, and even cutting rates all the way to zero wasn’t nearly enough.
At that point governments needed to step in, spending to support their economies while the private sector regained its balance. And to some extent that did happen... Budget deficits rose, but this was actually a good thing, probably the most important reason we didn’t have a full replay of the Great Depression.
But it all went wrong in 2010. The crisis in Greece was taken, wrongly, as a sign that all governments had better slash spending and deficits right away. Austerity became the order of the day...
Of the papers presented at this meeting, probably the biggest flash came from one by Olivier Blanchard and Daniel Leigh of the International Monetary Fund. ... For what the paper concludes is not just that austerity has a depressing effect on weak economies, but that the adverse effect is much stronger than previously believed. The premature turn to austerity, it turns out, was a terrible mistake. ...
The really bad news is ... European leaders ... still insist that the answer is even more pain. ... And here in America, Republicans insist that they’ll use a confrontation over the debt ceiling ... to demand spending cuts that would drive us back into recession.
The truth is that we’ve just experienced a colossal failure of economic policy — and far too many of those responsible for that failure both retain power and refuse to learn from experience.
I spent quite a bit of time with Noah Smith at the ASSA meetings. At one point, we were at the St. Louis Fed reception and -- since he has no fear -- I suggested that he tell Randall Wright how well New Keynesian models work, which he did. I assumed he'd get a strong taste of the divide in macroeconomics:
Is economics divided into warring ideological camps?, by Noah Smith: This week I went to the American Economic Association's annual meeting, which was held in sunny San Diego, CA. I went to quite a number of interesting sessions, mostly on behavioral economics and finance. What an exciting field!
But anyway, I also went to an interesting session called "What do economists think about major public policy issues?" There were two papers presented, both of which were extremely relevant for much of the debate going on in the econ blogosphere.
The first paper, by Roger Gordon and Gordon Dahl of UC San Diego (aside: now I want to co-author with a guy whose last name is "Noah"!), was called "Views among Economists: Professional Consensus or Point-Counterpoint?" Gordon & Dahl surveyed 41 top economists about their views on 81 policy issues, and tried to determine A) how much disagreement there was, and B) how much disagreement was due to political ideology.
They found that top economists agree about a lot of things. ... On some other issues, opinion was all over the place. Gordon and Dahl also found that the differences that did exist couldn't easily be tied to individual characteristics like gender, experience working in Washington, etc. A panel discussant, Monika Piazzesi, did some further statistical analysis to show that the surveyed economists didn't clump up into "liberal" and "conservative" clusters.
Conclusion: Economics, at least at the elite level, isn't divided into two warring ideological camps.
That doesn't mean there is no politicization. Justin Wolfers ... ranked the 41 top economists on a liberal/conservative scale according to his own intuition, and found that the economists he intuitively felt were liberal were more likely to support fiscal stimulus, and the conservatives less. He found a few other seemingly partisan differences this way, though not many. (Of course, one has to be careful with this type of analysis; if your ideas of who's "liberal" and who's "conservative" are formed by who supports stimulus and who opposes it, then of course you're going to see this type of effect!)
And of course, it's worth noting that the survey had a small sample, and included only "top" economists at major U.S. universities. There might be "long tails" of ideological bias lower down the prestige scale.
Paul Krugman, who was on the panel, suggested that politicization is mostly confined to the macro field. But even on the question of stimulus, most of the surveyed economists (80%) agreed that Obama's 2009 stimulus boosted output and employment (though fewer agreed that this boost was worth the long-term costs). So it seems that the few top economists who a few years ago were loudly saying that stimulus couldn't possibly work - Bob Lucas, Robert Barro, Gene Fama, etc. - were just a very vocal small minority.
These results surprised me. I'm so used to seeing top macroeconomists tangling with each other... And I had often heard that the appeal of certain classes of macro models - for example, RBC - came from their conservative policy implications.
So maybe I've been wrong all this time! Or maybe there was more politicization of macro back in the 70s and 80s?
Or maybe there is still politicization, but the economics profession has just shifted decisively to the center-left? After all, as of 2012, the consensus favorite modeling approach among pure macro people seems to be New Keynesian models of the type preferred by Krugman, not RBC-type models of the type supported by Bob Lucas, Robert Barro, and other "new classical" economists back in the 1980s. It could be that nowadays most economists are - as one person on the panel put it - "market-hugging Democrats". (Or it could be that New Keynesian models simply won the war of ideas. Or both.)
I'm not sure, but Gordon & Dahl's paper is definitely making me question my beliefs...
Simon Wren-Lewis continues the conversation on the state of academic macroeconomics:
Is academic macroeconomics flourishing?, by Simon Wren-Lewis: How do you judge the health of an academic discipline? Is macroeconomics rotten or flourishing? ...[A]cademic macroeconomics appears all over the place, with strong disputes between alternative schools.
Is this because the evidence in macroeconomics is so unclear that it becomes very difficult to judge different theories? I think the inexact nature of economics is a necessary condition for the lack of an academic consensus in macro, but it is not sufficient. (Mark Thoma has a recent post on this.) Consider monetary policy. I would argue that we have made great progress in both the analysis and practice of monetary policy over the last forty years. One important reason for that progress is the existence of a group that is often neglected - macroeconomists working in central banks.
Unlike their academic counterparts, the primary goal of these economists is not to innovate, but to examine the evidence and see what ideas work. The framework that most of these economists find most helpful is the New Neoclassical Synthesis, or equivalently New Keynesian theory. As a result, it has become the dominant paradigm in analyzing monetary policy.
That does not mean that every macroeconomist looking at monetary policy has to be a New Keynesian, or that central banks ignore other approaches. It is important that this policy consensus should be continually questioned, and part of a healthy academic discipline is that the received wisdom is challenged. However, it has to be acknowledged that policymakers who look at the evidence day in and day out believe that New Keynesian theory is the most useful framework currently around. I have no problem with academics saying ‘I know this is the consensus, but I think it is wrong’. However to say ‘the jury is still out' on whether prices are sticky is wrong. The relevant jury came to a verdict long ago.
It is obvious that when it comes to using fiscal policy in short term macroeconomic stabilization there can be no equivalent claim to progress or consensus. The policy debates we have today do not seem to have advanced much since Keynes was alive. From one perspective this contrast is deeply puzzling. The science of fiscal policy is not inherently more complicated. ...
What has been missing with fiscal policy has been the equivalent of central bank economists whose job depends on taking an objective view of the evidence and doing the best they can with the ideas that academic macroeconomics provides. This group does not exist because the need to use fiscal policy for short term macroeconomic stabilization is occasional either in terms of time (when the Zero Lower Bound applies) or space (countries within the Eurozone). As a result, when fiscal policy was required to perform a stabilization role, policymakers had to rely on the academic community for advice, and here macroeconomics clearly failed. Pretty well any outside observer would describe its performance as rotten.
The contrast between monetary and fiscal policy tells us that this failure is not an inevitable result of the paucity of evidence in macroeconomics. I think it has a lot more to do with the influence of ideology, and the importance of what I have called the anti-Keynesian school that is a legacy of the New Classical revolution. The reasons why these influences are particularly strong when it comes to fiscal policy are fairly straightforward.
Two issues remain unclear for me. The first is how extensive this ideological bias is. Is the over-dominance of the microfoundations approach related to the fact that different takes on the evidence have an unfortunate Keynesian bias? Second, is the degree of ideological bias in macro generic, or is it in part contingent on the particular historical circumstances of the New Classical revolution? These questions are important in thinking about how this bias can be overcome.
When people ask if evidence matters in economics, I often point to the debate over the New Classical model's prediction that only unexpected changes in monetary policy matter for economic activity. These models, with their prediction that expected changes in monetary policy are neutral, cleverly allowed New Classical economists to explain the correlations between money, output, and prices in the data without admitting that systematic policy mattered. Thus, these models supported the ideological convictions of many on the right -- government intervention can make things worse, but not better. (Unexpected policy shocks push the economy away from the optimal outcome, so the key was to minimize unexpected policy shocks. This led to things like the push for transparency so that people would anticipate, as much as possible, actual policy moves.)
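A toy version of the Barro-style test of this prediction (with made-up parameters, purely to illustrate the logic): generate output so that only monetary surprises matter, then regress output on the anticipated and unanticipated components of money growth. Under the New Classical data-generating process, the anticipated-money coefficient comes back near zero.

```python
import numpy as np

rng = np.random.default_rng(2)
T = 2000

# Money growth = systematic (anticipated) component + surprise.
anticipated = 0.5 * rng.normal(size=T)   # e.g. a feedback rule agents can forecast
surprise = rng.normal(size=T)
money = anticipated + surprise

# New Classical data-generating process: only surprises move output.
BETA = 1.0
output = BETA * surprise + rng.normal(scale=0.5, size=T)

# Regress output on anticipated and unanticipated money separately.
X = np.column_stack([np.ones(T), anticipated, surprise])
coef, *_ = np.linalg.lstsq(X, output, rcond=None)
print(f"coefficient on anticipated money: {coef[1]:+.3f}  (theory: 0)")
print(f"coefficient on surprise money:    {coef[2]:+.3f}  (theory: {BETA})")
```

The empirical debate was over whether actual output data looked like this; as noted below, the accumulated evidence eventually said no, since anticipated policy changes turned out to matter too.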
At first, the evidence seemed to support these models (e.g. Barro's empirical work), but as the evidence accumulated it eventually became clear that this prediction was wrong. Mishkin provided key evidence against these models through his academic work (see, for example, his book A Rational Expectations Approach to Macroeconometrics: Testing Policy Ineffectiveness and Efficient-Markets Models), so I am not as convinced as Simon Wren-Lewis that the difference between monetary and fiscal policy is due solely to the existence of technocratic, mostly non-ideological central bank economists letting the evidence take them where it may. That certainly mattered, but it seems there was more to it than this.
The evidence that Mishkin and others provided was a key reason these models were rejected (it was also difficult to simultaneously explain the magnitude and duration of business cycles with unexpected monetary shocks as the sole driving force), but when it comes to fiscal policy, as noted above, evidence has not trumped ideology to the same degree. One of the reasons for this, I think, is that it's difficult to find clear fiscal policy experiments in the data to evaluate. And when we do (e.g. wars), it's difficult to know if the results will hold at other times. But I can't really disagree with the hypothesis that if an institution like the Fed existed for fiscal policy, there would be a much bigger demand for this information, and that demand would have produced a much larger supply of evidence.
But I am not so sure the difference is "central bank economists whose job depends on taking an objective view of the evidence" so much as it is that these institutions produce a demand for this type of research, and academics respond by supplying the information that central banks need. So the question for me is whether it's the lack of ideology of central bank economists (many of whom are academics), or the fact that their existence creates a large demand for this type of information. Maybe it's both.
One of the big, current, passionate debates within monetary policy is the relative effectiveness of Taylor Rules versus nominal GDP targeting (e.g. see here). Which of the two does a better job of stabilizing the economy?
If you want to argue against nominal GDP targeting, David Altig of the Atlanta Fed has some ammunition for you. Here's his conclusion:
Nominal GDP Targeting: Still a Skeptic, macroblog: ... To summarize my concerns, the Achilles' heel of nominal GDP targeting is that it provides a poor nominal anchor in an environment in which there is great uncertainty about the path of potential real GDP. As I noted in my earlier post, there is historical justification for that concern.
Basically, anyone puzzling through how demographics are affecting labor force participation rates, how technology is changing the dynamics of job creation, or how policy might be altering labor supply should feel some humility about where potential GDP is headed. For me, a lack of confidence in the path of real GDP takes a lot of luster out of the idea of a nominal GDP target.
Taylor rule skeptics can turn to David Andolfatto of the St. Louis Fed:
On the perils of Taylor rules, macromania: In the Seven Faces of "The Peril" (2010), St. Louis Fed president Jim Bullard speculated on the prospect of the U.S. falling into a Japanese-style deflationary outcome. His analysis was built on an insight of Benhabib, Schmitt-Grohe, and Uribe (2001) in The Perils of Taylor Rules.
These authors (BSU) showed that if monetary policy is conducted according to a Taylor rule, and if there is a zero lower bound (ZLB) on the nominal interest rate, then there are generally two steady-state equilibria. In one equilibrium--the "intended" outcome--the nominal interest rate and inflation rate are on target. In the other equilibrium--the "unintended" outcome--the nominal interest rate and inflation rate are below target--the economy is in a "liquidity trap."
As BSU stress, the multiplicity of outcomes occurs even in economies where prices are perfectly flexible. All that is required are three (non-controversial) ingredients: a Fisher equation, a Taylor rule, and a ZLB.
Back in 2010, I didn't take this argument very seriously. In part it was because the so-called "unintended" outcome was more efficient than the "intended" outcome (at least, in the version of the model with flexible prices). To put things another way, the Friedman rule turns out to be good policy in a wide class of models. I figured that other factors were probably more important for explaining the events unfolding at that time.
Well, maybe I was a bit too hasty. Let me share with you my tinkering with a simple OLG model... Unfortunately, what follows is a bit on the wonkish side...
[My comments on this topic are highlighted in the first link, i.e. the one to David Altig's post at macroblog.]
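For readers who want to see the BSU multiplicity directly, here is a minimal numerical sketch using only the three ingredients Andolfatto lists, with illustrative parameter values of my own choosing: in steady state the Fisher equation gives i = r + π, the ZLB-truncated Taylor rule gives i = max(0, r + π* + φ(π − π*)), and with φ > 1 the two curves intersect twice.

```python
import numpy as np

# Illustrative parameters (not calibrated to anything in the posts above)
R = 0.02        # steady-state real interest rate
PI_STAR = 0.02  # inflation target
PHI = 1.5       # Taylor-rule response coefficient (>1, the "Taylor principle")

# Nominal rate from the Taylor rule, truncated at the zero lower bound
grid = np.linspace(-0.05, 0.05, 100001)
i_rule = np.maximum(0.0, R + PI_STAR + PHI * (grid - PI_STAR))

# Steady-state Fisher equation: nominal rate = real rate + inflation
i_fisher = R + grid

# A steady state is an inflation rate where the two curves intersect
gap = i_rule - i_fisher
roots = grid[np.abs(gap) < 1e-6]

# Collapse near-duplicate grid points into distinct steady states
steady_states = []
for pi in roots:
    if not steady_states or pi - steady_states[-1] > 1e-3:
        steady_states.append(pi)

for pi in steady_states:
    print(f"steady state: inflation = {pi:+.3f}, nominal rate = {R + pi:.3f}")
```

The scan finds the "intended" steady state at π = π* and the "unintended" liquidity-trap one at π = −r, where the ZLB binds and the Fisher equation forces deflation.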
Kevin Drum wonders if macroeconomists will ever be able to agree:
The part I can't figure out is why there's so much contention even within the field. In physics and climate science, the cranks are almost all nonspecialists with an axe to grind. Actual practitioners agree pretty broadly on at least the basics. But in macroeconomics you don't have that. There are still polar disagreements among top names on some of the most basic questions. Even given the complexity of the field, that's a bit of a mystery. It's understandable that economics is a more politicized field than physics, but in practice it seems to be almost 100 percent politicized, with the battles fought out by streams of Greek letters demonstrating, as Matt says, just about anything. I wonder if this is ever likely to change? Or will changes in the real world always outpace our ability to build consensus on how the economy actually works?
I took a shot at answering this in April 2011:
... Why can’t economists tell us what happens when government spending goes up or down, taxes change, or the Fed changes monetary policy? The stumbling block is that economics is fundamentally a non-experimental science, particularly in the realm of macroeconomics. Unlike researchers in disciplines such as physics, we can't go into the laboratory and rerun the economy again and again under different conditions to measure, say, the average effect of monetary and fiscal policy. We only have one realization of the macroeconomy to use to answer important policy questions, and that limits the precision of the answers we can give. In addition, because the data are historical rather than experimental, we cannot look at the relationships among a set of variables in isolation while holding all the other variables constant, as you might do in a lab, and this also reduces the precision of our estimates.
Because we only have a single realization of history rather than laboratory data to investigate economic issues, macroeconomic theorists have full knowledge of past data as they build their models. It would be a waste of time to build a model that doesn't fit this one realization of the macroeconomy, and fit it well, and that is precisely what has been done. Unfortunately, there are two models that fit the data, and the two models have vastly different implications for monetary and fiscal policy. ... [This leads to passionate debates about which model is best.]
But even if we had perfect models and perfect data, there would still be uncertainties and disagreements over the proper course of policy. Economists are hindered by the fact that people and institutions change over time in a way that the laws of physics do not. Thus, even if we had the ability to do controlled and careful experiments, there is no guarantee that what we learn would remain valid in the future.
Suppose that we somehow overcome every one of these problems. Even then, disagreements about economic policy would persist in the political arena. Even with full knowledge about how, say, a change in government spending financed by a tax increase will affect the economy now and in the future, ideological differences across individuals will lead to different views on the net social value of these policies. Those on the left tend to value the benefits more highly, and place less weight on the costs, than those on the right, and this leads to fundamental, insoluble differences over the course of economic policy. ...
Progress in economics may someday narrow the partisan divide over economic policy, but even perfect knowledge about the economy won’t eliminate the ideological differences that are the source of so much passion in our political discourse.
A follow-up post in February emphasizes the point that it is not at all clear that the strong divides in economics can be settled with data, but it's not completely hopeless:
...the ability to choose one model over the other is not quite as hopeless as I’ve implied. New data and recent events like the Great Recession push these models into uncharted territory and provide a way to assess which model provides better predictions. However, because of our reliance on historical data this is a slow process – we have to wait for data to accumulate – and there’s no guarantee that once we are finally able to pit one model against the other we will be able to crown a winner. Both models could fail...
I think the Great Recession has, for example, provided evidence that the NK model provides a better explanation of events than its competitors, but it is far from a satisfactory construction and it would be hard to call its forecasting and explanatory abilities a success.
Here's another post from the past (Sept. 2009) on this topic:
... There is no grand, unifying theoretical structure in economics. We do not have one model that rules them all. Instead, what we have are models that are good at answering some questions - the ones they were built to answer - and not so good at answering others.
If I want to think about inflation in the very long run, the classical model and the quantity theory are a very good guide. But the model is not very good at looking at the short run. For questions about how output and other variables move over the business cycle and for advice on what to do about it, I find the Keynesian model in its modern form (i.e. the New Keynesian model) to be much more informative than other models that are presently available (as to how far this kind of "eclecticism" will get you in academia, I'll just note that this is exactly the advice Mishkin gives in his textbook on monetary theory and policy).
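The long-run logic of the quantity theory mentioned above can be written down in a few lines. This is a minimal sketch using the equation of exchange MV = PY expressed in growth rates; the numbers are invented for illustration, not data:

```python
# Quantity theory of money: M * V = P * Y.
# In growth rates, with velocity roughly stable in the long run:
#   inflation ≈ money growth + velocity growth - real output growth
# Illustrative numbers only.

def long_run_inflation(money_growth, output_growth, velocity_growth=0.0):
    """Approximate long-run inflation implied by the equation of exchange."""
    return money_growth + velocity_growth - output_growth

# E.g., 5% money growth with 2% real growth and stable velocity
# implies roughly 3% long-run inflation:
print(long_run_inflation(0.05, 0.02))
```

This is exactly the sense in which the classical model is a guide to the very long run: it ties trend inflation to money growth while saying nothing about year-to-year fluctuations.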
But the New Keynesian model has its limits. It was built to capture "ordinary" business cycles driven by price rigidities of the sort that can be captured by the Calvo model of price rigidity. The standard versions of this model do not explain how financial collapses of the type we just witnessed come about, hence they have little to say about what to do about them (which makes me suspicious of the results touted by people using multipliers derived from DSGE models based upon ordinary price rigidities). For these types of disturbances, we need some other type of model, but it is not clear what model is needed. There is no generally accepted model of financial catastrophe that captures the variety of financial market failures we have seen in the past.
But what model do we use? Do we go back to old Keynes, to the 1978 model that Robert Gordon likes, do we take some of the variations of the New Keynesian model that include effects such as financial accelerators and try to enhance those, is that the right direction to proceed? Are the Austrians right? Do we focus on Minsky? Or do we need a model that we haven't discovered yet?
We don't know, and until we do, I will continue to use the model I think gives the best answer to the question being asked. The reason that many of us looked backward for a model to help us understand the present crisis is that none of the current models were capable of explaining what we were going through. The models were largely constructed to analyze policy in the context of a Great Moderation, i.e. within a fairly stable environment. They had little to say about financial meltdown. My first reaction was to ask if the New Keynesian model had any derivative forms that would allow us to gain insight into the crisis and what to do about it and, while there were some attempts in that direction, the work was somewhat isolated and had not gone through the thorough analysis that is needed to develop robust policy prescriptions. There was something to learn from these models, but they really weren't up to the task of delivering specific answers. That may come, but we aren't there yet.
So, if nothing in the present is adequate, you begin to look to the past. The Keynesian model was constructed to look at exactly the kinds of questions we needed to answer, and as long as you are aware of the limitations of this framework - the ones that modern theory has discovered - it does provide you with a means of thinking about how economies operate when they are running at less than full employment. This model had already worried about fiscal policy at the zero interest rate bound, it had already thought about Say's law, the paradox of thrift, monetary versus fiscal policy, changing interest and investment elasticities in a crisis, etc., etc., etc. We were in the middle of a crisis and didn't have time to wait for new theory to be developed, we needed answers, answers that the elegant models that had been constructed over the last few decades simply could not provide. The Keynesian model did provide answers. We knew the answers had limitations - we were aware of the theoretical developments in modern macro and what they implied about the old Keynesian model - but it also provided guidance at a time when guidance was needed, and it did so within a theoretical structure that was built to be useful at times like the one we were facing. I wish we had better answers, but we didn't, so we did the best we could, and the best we could involved at least asking what the Keynesian model would tell us, and then asking if that advice has any relevance today. Sometimes it didn't, but that was no reason to ignore the answers when it did.
Part of the disagreement is over the ability of this approach -- using an older model guided by newer insights (e.g. that expectations of future output matter for the "IS curve") -- to deliver reliable answers and policy prescriptions.
More on this from another past post (March 2009):
Models are built to answer questions, and the models economists have been using do, in fact, help us find answers to some important questions. But the models were not very good (at all) at answering the questions that are important right now. They have been largely stripped of their usefulness for actual policy in a world where markets simply break down.
The reason is that in order to get to mathematical forms that can be solved, the models had to be simplified. And when they are simplified, something must be sacrificed. So what do you sacrifice? Hopefully, it is the ability to answer questions that are the least important, so the modeling choices that are made reveal what the modelers thought was most and least important.
The models we built were very useful for asking whether the federal funds rate should go up or down a quarter point when the economy was hovering in the neighborhood of full employment, or when we found ourselves in mild, "normal" recessions. The models could tell us what type of monetary policy rule is best for stabilizing the economy. But the models had almost nothing to say about a world where markets melt down, where prices depart from fundamentals, or when markets are incomplete. When this crisis hit, I looked into our tool bag of models and policy recommendations and came up empty for the most part. It was disappointing. There was really no choice but to go back to older Keynesian style models for insight.
The reason the Keynesian model is finding new life is that it was specifically built to answer the questions that are important at the moment. The theorists who built modern macro models, those largely in control of where the profession has spent its effort in recent decades, did not even envision that this could happen, let alone build it into their models. Markets work, they don't break down, so why waste time thinking about those possibilities?
So it's not the math: the modeling choices that were made, and the inevitable sacrifices to reality they entail, reflected the importance those making the choices gave to various questions. We weren't forced to this end by the mathematics; we asked the wrong questions and built the wrong models.
New Keynesians have been trying to answer: can we, using equilibrium models with rational agents and complete markets, add frictions - e.g. sluggish wage and price adjustment (you'll see this called "Calvo pricing") - in a way that allows us to approximate the actual movements in key macroeconomic variables over the last 40 or 50 years?
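To make the Calvo friction concrete, here is a minimal simulation sketch. It is illustrative only, with invented parameter values, and is of course nowhere near a full New Keynesian model: each period a firm gets to reset its price with fixed probability 1 − θ, so after a nominal shock the average price level adjusts only gradually toward the new flexible-price level.

```python
import random

# Calvo-style price stickiness (toy version, invented parameters).
# Each period a firm resets its price with probability 1 - theta;
# otherwise it is stuck with last period's price. After a 10% nominal
# shock, the average price converges to the new level only gradually,
# at rate (1 - theta**t).

random.seed(0)
theta = 0.75               # probability a firm keeps its old price
n_firms = 10_000
desired = 1.10             # flexible-price level after the shock
prices = [1.00] * n_firms  # all firms start at the old price

for period in range(1, 7):
    prices = [desired if random.random() > theta else p for p in prices]
    avg = sum(prices) / n_firms
    print(f"period {period}: average price = {avg:.4f}")
```

Run it and the average price creeps from 1.00 toward 1.10 over several periods; that sluggish aggregate adjustment, despite each individual price being either fully old or fully reset, is the mechanism the passage refers to.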
Real Business Cycle theorists also use equilibrium models with rational agents and complete markets, and they look at whether supply-side shocks such as shocks to productivity or labor supply can, by themselves, explain movements in the economy. They largely reject demand-side explanations for movements in macro variables.
The fight - and main question in academics - has been about what drives macroeconomic variables in normal times, demand-side shocks (monetary policy, fiscal policy, investment, net exports) or supply-side shocks (productivity, labor supply). And it's been a fairly brutal fight at times - you've seen some of that come out during the current policy debate. That debate within the profession has dictated the research agenda.
What happens in non-normal times, i.e. when markets break down, or when markets are not complete, agents are not rational, etc., was far down the agenda of important questions, partly because those in control of the journals, those who largely dictated the direction of research, did not think those questions were very important (some don't even believe that policy can help the economy, so why put effort into studying it?).
I think that the current crisis has dealt a bigger blow to macroeconomic theory and modeling than many of us realize.
Here's yet another past post (August 2009) on the general topic of the usefulness of macroeconomic models, though I'm not quite as bullish on the ability of existing models to provide guidance as I was when I wrote this. The point is that although many people use forecasting ability as a metric to measure the usefulness of models (because where the economy is headed is the most important question to them), that's not the only use of these models:
Are Macroeconomic Models Useful?: There has been no shortage of effort devoted to predicting earthquakes, yet we still can't see them coming far enough in advance to move people to safety. When a big earthquake hits, it is a surprise. We may be able to look at the data after the fact and see that certain stresses were building, so it looks like we should have known an earthquake was going to occur at any moment, but these sorts of retrospective analyses have not allowed us to predict the next one. The exact timing and location is always a surprise.
Does that mean that science has failed? Should we criticize the models as useless?
No. There are two uses of models. One is to understand how the world works, another is to make predictions about the future. We may never be able to predict earthquakes far enough in advance and with enough specificity to allow us time to move to safety before they occur, but that doesn't prevent us from understanding the science underlying earthquakes. Perhaps as our understanding increases prediction will be possible, and for that reason scientists shouldn't give up trying to improve their models, but for now we simply cannot predict the arrival of earthquakes.
However, even though earthquakes cannot be predicted, at least not yet, it would be wrong to conclude that science has nothing to offer. First, understanding how earthquakes occur can help us design buildings and make other changes to limit the damage even if we don't know exactly when an earthquake will occur. Second, if an earthquake happens and, despite our best efforts to insulate against it, there are still substantial consequences, science can help us to offset and limit the damage. To name just one example, the science surrounding disease transmission helps us avoid contaminated water supplies after a disaster, something that often compounds tragedy when this science is not available. But there are lots of other things we can do as well, including using the models to determine where help is most needed.
So even if we cannot predict earthquakes, and we can't, the models are still useful for understanding how earthquakes happen. This understanding is valuable because it helps us to prepare for disasters in advance, and to determine policies that will minimize their impact after they happen.
All of this can be applied to macroeconomics. Whether or not we should have predicted the financial earthquake is a question that has been debated extensively, so I am going to set that aside. One side says financial market price changes, like earthquakes, are inherently unpredictable -- we will never predict them no matter how good our models get (the efficient markets types). The other side says the stresses that were building were obvious. Like the stresses that build when tectonic plates moving in opposite directions rub against each other, it was only a question of when, not if. (But even when increasing stress between two plates is observable, scientists cannot tell you for sure if a series of small earthquakes will relieve the stress and do little harm, or if there will be one big adjustment that relieves the stress all at once. With respect to the financial crisis, economists expected lots of small adjustments, each causing little harm; instead we got the "big one," and the "buildings and other structures" we thought could withstand the shock all came crumbling down.) ...
Whether the financial crisis should have been predicted or not, the fact that it wasn't predicted does not mean that macroeconomic models are useless any more than the failure to predict earthquakes implies that earthquake science is useless. As with earthquakes, even when prediction is not possible (or missed), the models can still help us to understand how these shocks occur. That understanding is useful for getting ready for the next shock, or even preventing it, and for minimizing the consequences of shocks that do occur.
But we have done much better at dealing with the consequences of unexpected shocks ex post than we have at getting ready for them in advance. Our equivalent of getting buildings ready for an earthquake before it happens is to use changes in institutions and regulations to insulate the financial sector and the larger economy from the negative consequences of financial and other shocks. Here I think economists made mistakes - our "buildings" were not strong enough to withstand the earthquake that hit. We could argue that the shock was so big that no amount of reasonable advance preparation would have stopped the "building" from collapsing, but I think it's more the case that enough time has passed since the last big financial earthquake that we forgot what we needed to do. We allowed new buildings to be constructed without the proper safeguards.
However, that doesn't mean the models themselves were useless. The models were there and could have provided guidance, but the implied "building codes" were ignored. Greenspan and others assumed no private builder would ever construct a building that couldn't withstand an earthquake; the market would force them to take this into consideration. But they were wrong about that, and even Greenspan now admits that government building codes are necessary. It wasn't the models, it was how they were used (or rather not used) that prevented us from putting safeguards into place. ...
I'd argue that our most successful use of models has been in cleaning up after shocks rather than predicting, preventing, or insulating against them through pre-crisis preparation. When, despite our best efforts to prevent it or to minimize its impact, we get a recession anyway, we can use our models as a guide to monetary, fiscal, and other policies that help to reduce the consequences of the shock (this is the equivalent of, after a disaster hits, making sure that the water is safe to drink, people have food to eat, there is a plan for rebuilding quickly and efficiently, etc.). As noted above, we haven't done a very good job at predicting big crises, and we could have done a much better job at implementing regulatory and institutional changes that prevent or limit the impact of shocks. But we do a pretty good job of stepping in with policy actions that minimize the impact of shocks after they occur. This recession was bad, but it wasn't another Great Depression like it might have been without policy intervention.
Whether or not we will ever be able to predict recessions reliably, it's important to recognize that our models still provide considerable guidance for actions we can take before and after large shocks that minimize their impact and maybe even prevent them altogether (though we will have to do a better job of listening to what the models have to say). Prediction is important, but it's not the only use of models.
... To be sure, the upswing in house prices in many markets around the country in the 2000s did reach levels that history and the subsequent long downswings tell us were excessive. But, as we show in Part II, such excessive fluctuations should not be interpreted to mean that asset-price swings are unrelated to fundamental factors. In fact, even if an individual is interested only in short-term returns—a feature of much trading in many markets—the use of data on fundamental factors to forecast these returns is extremely valuable. And the evidence that news concerning a wide array of fundamentals plays a key role in driving asset-price swings is overwhelming.
Missing the Point in the Economists’ Debate
Economists concluded that fundamentals do not matter for asset-price movements because they could not find one overarching relationship that could account for long swings in asset prices. The constraint that economists should consider only fully predetermined accounts of outcomes has led many to presume that some or all participants are irrational, in the sense that they ignore fundamentals altogether. Their decisions are thought to be driven purely by psychological considerations.
The belief in the scientific stature of fully predetermined models, and in the adequacy of the Rational Expectations Hypothesis to portray how rational individuals think about the future, extends well beyond asset markets. Some economists go as far as to argue that the logical consistency that obtains when this hypothesis is imposed in fully predetermined models is a precondition of the ability of economic analysis to portray rationality and truth.
For example, in a well-known article published in The New York Times Magazine in September 2009, Paul Krugman (2009, p. 36) argued that Chicago-school free-market theorists “mistook beauty . . . for truth.” One of the leading Chicago economists, John Cochrane (2009, p. 4), responded that “logical consistency and plausible foundations are indeed ‘beautiful’ but to me they are also basic preconditions for ‘truth.’” Of course, what Cochrane meant by plausible foundations were fully predetermined Rational Expectations models. But, given the fundamental flaws of fully predetermined models, focusing on their logical consistency or inconsistency, let alone that of the Rational Expectations Hypothesis itself, can hardly be considered relevant to a discussion of the basic preconditions for truth in economic analysis, whatever “truth” might mean.
There is an irony in the debate between Krugman and Cochrane. Although the New Keynesian and behavioral models, which Krugman favors, differ in terms of their specific assumptions, they are every bit as mechanical as those of the Chicago orthodoxy. Moreover, these approaches presume that the Rational Expectations Hypothesis provides the standard by which to define rationality and irrationality.
Behavioral economics provides a case in point. After uncovering massive evidence that the contemporary economics’ standard of rationality fails to capture adequately how individuals actually make decisions, the only sensible conclusion to draw was that this standard was utterly wrong. Instead, behavioral economists, applying a variant of Brecht’s dictum, concluded that individuals are irrational.
To justify that conclusion, behavioral economists and nonacademic commentators argued that the standard of rationality based on the Rational Expectations Hypothesis works—but only for truly intelligent investors. Most individuals lack the abilities needed to understand the future and correctly compute the consequences of their decisions.
In fact, the Rational Expectations Hypothesis requires no assumptions about the intelligence of market participants whatsoever (for further discussion, see Chapters 3 and 4). Rather than imputing superhuman cognitive and computational abilities to individuals, the hypothesis presumes just the opposite: market participants forgo using whatever cognitive abilities they do have. The Rational Expectations Hypothesis supposes that individuals do not engage actively and creatively in revising the way they think about the future. Instead, they are presumed to adhere steadfastly to a single mechanical forecasting strategy at all times and in all circumstances. Thus, contrary to widespread belief, in the context of real-world markets, the Rational Expectations Hypothesis has no connection to how even minimally reasonable profit-seeking individuals forecast the future in real-world markets. When new relationships begin driving asset prices, they supposedly look the other way, and thus either abjure profit-seeking behavior altogether or forgo profit opportunities that are in plain sight.
The Distorted Language of Economic Discourse
It is often remarked that the problem with economics is its reliance on mathematical apparatus. But our criticism is not focused on economists’ use of mathematics. Instead, we criticize contemporary portrayal of the market economy as a mechanical system. Its scientific pretense and the claim that its conclusions follow as a matter of straightforward logic have made informed public discussion of various policy options almost impossible.
Doubters have often been made to seem as unreasonable as those who deny the theory of evolution or that the earth is round. Indeed, public debate is further distorted by the fact that economists formalize notions like “rationality” or “rational markets” in ways that have little or no connection to how non-economists understand these terms. When economists invoke rationality to present or legitimize their public-policy recommendations, non-economists interpret such statements as implying reasonable behavior by real people. In fact, as we discuss extensively in this book, economists’ formalization of rationality portrays obviously irrational behavior in the context of real-world markets.
Such inversions of meaning have had a profound impact on the development of economics itself. For example, having embraced the fully predetermined notion of rationality, behavioral economists proceeded to search for reasons, mostly in psychological research and brain studies, to explain why individual behavior is so grossly inconsistent with that notion—a notion that had no connection with reasonable real-world behavior in the first place.
Moreover, as we shall see, the idea that economists can provide an overarching account of markets, which has given rise to fully predetermined rationality, misses what markets really do. ...
16 See Chapters 7-9 for an extensive discussion of the role of fundamentals in driving price swings in asset markets and their interactions with psychological factors.
17 For example, in discussing the importance of the connection between the financial system and the wider economy for understanding the crisis and thinking about reform, Krugman endorses the approach taken by Bernanke and Gertler. (For an overview of these models, see Bernanke et al., 1999.) However, as pioneering as these models are in incorporating the financial sector into macroeconomics, they are fully predetermined and based on the Rational Expectations Hypothesis. As such, they suffer from the same fundamental flaws that plague other contemporary models. When used to analyze policy options, these models presume not only that the effects of contemplated policies can be fully pre-specified by a policymaker, but also that nothing else genuinely new will ever happen. Supposedly, market participants respond to policy changes according to the REH-based forecasting rules. See footnote 3 in the Introduction and Chapter 2 for further discussion.
18 The convergence in contemporary macroeconomics has become so striking that by now the leading advocates of both the “freshwater” New Classical approach and the “saltwater” New Keynesian approach, regardless of their other differences, extol the virtues of using the Rational Expectations Hypothesis in constructing contemporary models. See Prescott (2006) and Blanchard (2009). It is also widely believed that reliance on the Rational Expectations Hypothesis makes New Keynesian models particularly useful for policy analysis by central banks. See footnote 7 in this chapter and Sims (2010). For further discussion, see Frydman and Goldberg (2008).
19 Following the East German government’s brutal repression of a worker uprising in 1953, Bertolt Brecht famously remarked, “Wouldn’t it be easier to dissolve the people and elect another in their place?”
20 Even Simon (1971), a forceful early critic of economists’ notion of rationality, regarded it as an appropriate standard of decision-making, though he believed that it was unattainable for most people for various cognitive and other reasons. To underscore this view, he coined the term “bounded rationality” to refer to departures from the supposedly normative benchmark.
The introduction to this book might also be of interest:
Rethinking Expectations: The Way Forward for Macroeconomics, Edited by Roman Frydman & Edmund S. Phelps [with entries by Philippe Aghion, Sheila Dow, George W. Evans, Roger E. A. Farmer, Roman Frydman, Michael D. Goldberg, Roger Guesnerie, Seppo Honkapohja, Katarina Juselius, Enisse Kharroubi, Blake LeBaron, Edmund S. Phelps, John B. Taylor, Michael Woodford, and Gylfi Zoega].
The introduction is here: Which Way Forward for Macroeconomics and Policy Analysis?.
Paul Krugman, quoted below, started this off (or perhaps better, continued an older discussion) by claiming the state of macro is rotten. Steve Williamson, also quoted below, replied and this is Simon Wren-Lewis' reply to Williamson (remember that, as Simon Wren-Lewis notes below, he has defended the modern approach to macro).
This pretty well covers my views, and I think this part of the Wren-Lewis rebuttal gets at the heart of the issue: "You would not think of suggesting that Paul Krugman is out of touch unless you are in effect dismissing or marginalizing this whole line of research." I am also very much in agreement with the "two unhelpful biases" he notes in the last paragraph, and have been thinking of writing more about the first, "too much of an obsession with microfoundation purity, and too little interest in evidence," particularly the lack of interest in using empirical evidence to test and reject models. (Though there are ways to get around this problem, it may be that such tests have fallen out of favor in macro since we only have historical data to work with, and it's folly to build a model with knowledge of the data and then test to see if the model fits. Of course it will fit, or at least it should. That would explain why there appears to be a greater reliance upon logic, intuition, and consistency with micro foundations than in the past. It seems like today models are more likely to be rejected for lack of internal theoretical consistency than for lack of consistency with the empirical evidence):
Paul Krugman: The state of macro is, in fact, rotten, and will remain so until the cult that has taken over half the field is somehow dislodged. The cult here is freshwater macro, which descends from the New Classical revolution. In response, Steve Williamson: “At the time, this revolution was widely-misperceived as a fundamentally conservative movement. It was actually a nerd revolution.” “What these people had on their side were mathematics, econometrics, and most of all the power of economic theory. There was nothing weird about what these nerds were doing - they were simply applying received theory to problems in macroeconomics. Why could that be thought of as offensive?” The New Classical revolution was clearly anti-Keynesian..., but was that simply because Keynesian theory was the dominant paradigm? ...
I certainly think that New Classical economists revolutionized macroeconomic theory, and that the theory is much better for it. Paul Krugman (PK) and I have disagreed on this point before. ...
But this is not where the real disagreement between PK and SW lies. The New Classical revolution became the New Neoclassical Synthesis, with New Keynesian theory essentially taking the ideas of the revolutionaries and adapting Keynesian theory to incorporate them. Once again, I believe this was a progressive change. While there is plenty wrong with New Keynesian theory, and the microfoundations project on which it is based, I would much rather start from there than with the theory I was taught in the 1970s. As SW says “Most of us now speak the same language, and communication is good.” ...
I think the difficulty that PK and I share is with those who in effect rejected or ignored the New Neoclassical Synthesis. I can think of no reason why the New Classical economist as ‘revolutionary nerd’ should do this, which suggests that SW’s characterization is only half true. Everyone can have their opinion about particular ideas or developments, but it is not normal to largely ignore what one half of the profession is doing. Yet that seems to be what has happened in significant parts of academia.
SW likes to dismiss PK as being out of touch with current macro research. Let's look at the evidence. PK was very much at the forefront of analyzing the Zero Lower Bound problem, before that problem hit most of the world. While many point to Mike Woodford’s Jackson Hole paper as being the intellectual inspiration behind recent changes at the Fed, the technical analysis can be found in Eggertsson and Woodford, 2003. That paper’s introduction first mentions Keynes, and then Krugman’s 1998 paper on Japan. Subsequently we have Eggertsson and Krugman (2010), which is part of a flourishing research program that adds ‘financial frictions’ into the New Keynesian model. You would not think of suggesting that PK is out of touch unless you are in effect dismissing or marginalizing this whole line of research.
I would not describe the state of macro as rotten, because that appears to dismiss what most mainstream macroeconomists are doing. I would however describe it as suffering from two unhelpful biases. The first is methodological: too much of an obsession with microfoundation purity, and too little interest in evidence. The second is ideological: a legacy of the New Classical revolution that refuses to acknowledge the centrality of Keynesian insights to macroeconomics. These biases are a serious problem, partly because they can distort research effort, but also because they encourage policy makers to make major mistakes.
 The clash between Monetarism and Keynesianism was mostly a clash about policy: Friedman used the Keynesian theoretical framework, and indeed contributed greatly to it. Update: Noah Smith also comments.
 It may be legitimate to suggest someone is out of touch with macro theory if they make statements that are just inconsistent with mainstream theory, without acknowledging this to be the case. The example that most obviously comes to mind is statements like these, about the impact of fiscal policy.
 In the case of the UK, a charitable explanation for the Conservative opposition to countercyclical fiscal policy and their embrace of austerity was that they believed conventional monetary policy could always stabilize the economy. If they had taken on board PK’s analysis of Japan, or Eggertsson and Woodford, they would not have made that mistake.
Simon Wren-Lewis takes issue with Stephen Williamson's claim that "there are good reasons to think that the welfare losses from wage/price rigidity are small":
Mistaking models for reality, by Simon Wren-Lewis: In a recent post, Paul Krugman used a well known Tobin quote: it takes a lot of Harberger triangles to fill an Okun gap. For non-economists, this means that the social welfare costs of resource misallocations because prices are ‘wrong’ (because of monopoly, taxation etc) are small compared to the costs of recessions. Stephen Williamson takes issue with this idea. His argument can be roughly summarized as follows:
1) Keynesian recessions arise because prices are sticky, and therefore 'wrong', so their costs are not fundamentally different from resource misallocation costs.
2) Models of price stickiness exaggerate these costs, because their microfoundations are dubious.
3) If the welfare costs of price stickiness were significant, why are they not arbitraged away?
I’ve heard these arguments, or variations on them, many times before. So let's see why they are mistaken...
But I want to focus on this. How useful are representative agent models, e.g. New Keynesian models, for examining questions such as the costs of unemployment?:
Let's move from wage and price stickiness to the major cost of recessions: unemployment. The way that this is modeled in most New Keynesian set-ups based on representative agents is that workers cannot supply as many hours as they want. In that case, workers suffer the cost of lower incomes, but at least they get the benefit of more leisure. Here, maybe, is a triangle (see Nick Rowe again). Now this grossly underestimates the cost of recessions. One reason is heterogeneity: many workers carry on working the same number of hours in a recession, but some become unemployed. Standard consumer theory tells us this generates larger aggregate costs, and with more complicated models this can be quantified. However, the more important reason, which follows from heterogeneity, is that the long-term unemployed typically do not console themselves that at least they have more leisure time and so are not so badly off. Instead they feel rejected, inadequate, despairing, and it scars them for life. Now that may not be in the microfounded models, but that does not make these feelings disappear, and certainly does not mean they should be ignored.
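The consumer-theory point in the passage above -- that a given aggregate income loss costs more welfare when concentrated on a few unemployed workers than when spread thinly across everyone -- can be illustrated with a toy calculation. The numbers, the log utility function, and the "benefit" level below are invented for illustration; this is a sketch of the logic, not the quantification used in the literature.

```python
import numpy as np

# Toy economy: 100 workers, each earning 50. Compare two ways of losing
# the same aggregate income (200): spread thinly (everyone drops to 48)
# versus concentrated (5 workers fall to 10, e.g. unemployment with a
# small benefit). With concave (log) utility the concentrated loss is
# much more costly in welfare terms.

income = np.full(100, 50.0)

spread = np.full(100, 48.0)            # everyone loses 2
concentrated = income.copy()
concentrated[:5] = 10.0                # five workers lose 40 each

assert spread.sum() == concentrated.sum()   # same aggregate income loss

loss_spread = np.log(income).sum() - np.log(spread).sum()
loss_concentrated = np.log(income).sum() - np.log(concentrated).sum()
print(f"welfare loss, spread: {loss_spread:.2f}; "
      f"concentrated: {loss_concentrated:.2f}")
```

With these invented numbers the concentrated loss costs roughly twice as much utility as the evenly spread one, which is the direction of the heterogeneity argument; the paragraph's further point, about the psychological scarring of the long-term unemployed, is of course not captured at all.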
It is for this reason that I have always had mixed feelings about representative agent models that measure the costs of recessions and inflation in terms of the agent’s utility. In terms of modeling it has allowed business cycle costs to be measured using the same metric as the costs of distortionary taxation and under/over provision of public goods, which has been great for examining issues involving fiscal policy, for example. Much of my own research over the last decade has used this device. But it does ignore the more important reasons why we should care about recessions. Which is perhaps OK, as long as we remember this. The moment we actually think we are capturing the costs of recessions using our models in this way, we once again confuse models with reality.
What does he mean by confusing models with reality?:
The problem with modeling price rigidity is that there are too many plausible reasons for this rigidity - too many microfoundations. (Alan Blinder’s work is a classic reference here.) Microfounded models typically choose one for tractability. It is generally possible to pick holes in any particular tractable story behind price rigidity (like Calvo contracts). But it does not follow that these models of Keynesian business cycles exaggerate the size of recessions. It seems much more plausible to argue completely the opposite: because microfounded models typically only look at one source of nominal rigidity, they underestimate its extent and costs.
I could make the same point in a slightly different way. Let's suppose that we do not fully understand what causes recessions. What we do understand, in the simple models we use, accounts for small recessions, but not large ones. Therefore, large recessions cannot exist. The logic is obviously faulty, but too many economists argue this way. There appears to be a danger that, in only ‘modeling what we understand’, modelers go on to confuse models with reality.
Let me add that while this is a good argument for why the measured costs only establish a minimum bound for the total costs, I am not sure we can be confident they do that. The reason is that I am not convinced that wage and price rigidities as modeled in the New Keynesian framework adequately capture the transmission mechanism from shocks to real effects that propelled us into the Great Recession. That is, do we really think that wage and price rigidities of the Calvo variety (or of the Rotemberg variety) are the main friction behind the downturn and struggle to recover? If prices were perfectly flexible, would our problems be over? Would they have never begun in the first place? More flexibility in housing prices might help, but the problem was a breakdown in financial intermediation which in turn caused problems for the real sector. Capturing these effects requires abandoning the representative agent framework, connecting the real and financial sectors, and then endogenizing financial cycles. There is progress on this front, but in my view existing models are simply unable to adequately capture these effects.
If this is true, if existing models do not adequately capture the transmission of financial shocks to changes in output and employment, if our models miss a fundamental mechanism at work in the recession, why should we believe estimates of fiscal multipliers, welfare effects, and so on based upon models that assume shocks are transmitted through moderate price rigidities? I think these models are good at capturing mild business cycles like we experienced during the Great Moderation, but I question their value in large, persistent, recessions induced by large financial shocks.
[For more on macro models, see Paul Krugman's The Dismal State of the Dismal Science and the links he provides in his discussion.]
Stephen Williamson notes an interview of Robert Lucas:
ED: If the economy is currently in an unusual state, do micro-foundations still have a role to play?

RL: "Micro-foundations"? We know we can write down internally consistent equilibrium models where people have risk aversion parameters of 200 or where a 20% decrease in the monetary base results in a 20% decline in all prices and has no other effects. The "foundations" of these models don't guarantee empirical success or policy usefulness.

What is important---and this is straight out of Kydland and Prescott---is that if a model is formulated so that its parameters are economically-interpretable they will have implications for many different data sets. An aggregate theory of consumption and income movements over time should be consistent with cross-section and panel evidence (Friedman and Modigliani). An estimate of risk aversion should fit the wide variety of situations involving uncertainty that we can observe (Mehra and Prescott). Estimates of labor supply should be consistent with aggregate employment movements over time as well as cross-section, panel, and lifecycle evidence (Rogerson). This kind of cross-validation (or invalidation!) is only possible with models that have clear underlying economics: micro-foundations, if you like.
This is bread-and-butter stuff in the hard sciences. You try to estimate a given parameter in as many ways as you can, consistent with the same theory. If you can reduce a 3 orders of magnitude discrepancy to 1 order of magnitude you are making progress. Real science is hard work and you take what you can get.
"Unusual state"? Is that what we call it when our favorite models don't deliver what we had hoped? I would call that our usual state.
... Imagine economists had widely and credibly warned of a financial crisis in the mid-00s. People would have responded to such warnings by lending less and borrowing less (I'm ignoring agency problems here). But this would have resulted in less gearing and so no crisis. There would now be a crisis in economics as everyone wondered why the disaster we predicted never happened. ...

His main point, however, revolves around Keynes' statement that "If economists could manage to get themselves thought of as humble, competent people on a level with dentists, that would be splendid":
I suspect there's another reason why economics is thought to be in crisis. It's because, as Coase says, (some? many?) economists lost sight of ordinary life and people, preferring to be policy advisors, theorists or - worst of all - forecasters.
In doing this, many stopped even trying to pursue Keynes' goal. What sort of reputation would dentists have if they stopped dealing with people's teeth and preferred to give the government advice on dental policy, tried to forecast the prevalence of tooth decay or called for new ways of conceptualizing mouths?
Perhaps, then, the problem with economists is that they failed to consider what function the profession can reasonably serve.
...As one of the 10 most expensive private colleges in the US, Carnegie Mellon in Pittsburgh almost oppresses visitors with neo-gothic grandness... I was a guest of Carol Goldburg, the director of CMU's undergraduate economics program, who had gathered a few colleagues to give their take on the presidential election. Here were four top economists huddled round a lunch table: they were surely going to regale me with talk of labor-market policy, global imbalances, marginal tax rates.
My opener was an easy ball: how did they think President Obama had done? Sevin Yeltekin, an expert on political economy, was the first to respond: "He hasn't delivered on a lot of his promises, but he inherited a big mess. I'd give him a solid B."
I threw the same question to her neighbor and one of America's most renowned rightwing economists, Allan Meltzer. He snapped: "A straight F: he took a mess and made it even bigger." Then came Goldburg, now wearing the look of a hostess whose guests are falling out: "Well, I'm concerned about welfare and poverty, and Obama's tried hard on those issues." A tentative pause. "B-minus?"
Finally it was the turn of Bennett McCallum, author of such refined works as Multiple-Solution Indeterminacies in Monetary Policy Analysis. Surely he would bring the much-needed technical ballast? Um, no. "D: he's trying to turn this country into France."
Some of these comments were surely made for the benefit of their audience: faced with a mere scribbler, the scholars had evidently decided to hold the algebra, and instead talk human. Even so, this was a remarkable row. Here were four economists on the same faculty, who probably taught some of the same students; yet Obama's reputation depended entirely on who was doing the assessment. The president was either B or F, good or a failure: opposite poles with no middle ground, and not even joint agreement on the judging criteria. ...
Via email, Maurizio Bovi describes a paper of his on adaptive learning (M. Bovi (2012). "Are the Representative Agent’s Beliefs Based on Efficient Econometric Models?" Journal of Economic Dynamics and Control). A colleague of mine, George Evans -- a leader in this area -- responds:
Are you a good econometrician? No, I am British!, by Maurizio Bovi*: A typical assumption of mainstream strands of research is that agents’ expectations are grounded in efficient econometric models. Muthian agents are all equally rational and know the true model. The adaptive learning literature assumes that agents are boundedly rational in the sense that they are as smart as econometricians and that they are able to learn the correct model. The predictor choice approach argues that individuals are boundedly rational in the sense that agents switch to the forecasting rule that has the highest fitness. Preferences could generate enduring inertia in the dynamic switching process and a stationary environment for a sufficiently long period is necessary to learn the correct model. Having said this, all the cited approaches typically argue that there is a general tendency to forecast via optimal forecasting models because of the costs stemming from inefficient predictions.
To the extent that the representative agent’s beliefs i) are based on efficient (in terms of minimum MSE = mean squared forecasting error) econometric models, and ii) can be captured by ad hoc surveys, two basic facts emerge, stimulating my curiosity. First, in economic systems where the same simple model turns out to be the best predictor for a sufficient span of time, survey expectations should tend to converge: more and more individuals should learn or select it. Second, the forecasting fitness of this enduring minimum MSE econometric model should not be further enhanced by the use of information provided by survey expectations. If agents act as if they were statisticians in the sense that they use efficient forecasting rules, then survey-based beliefs must reflect this and cannot contain any statistically significant information that helps reduce the MSE relative to the best econometric predictor. In sum, there could be some value in analyzing hard data and survey beliefs to understand i) whether the latter derive from optimal econometric models and ii) the time connections between survey-declared and efficient model-grounded expectations. By examining real-time GDP dynamics in the UK I have found that, over a time-span of two decades, the adaptive expectations (AE) model systematically outperforms other standard predictors which, as argued by the above recalled literature, should be in the tool-box of representative econometricians (Random Walk, ARIMA, VAR). As mentioned, this peculiar environment should eventually lead to increased homogeneity in best-model based expectations. However, data collected in the surveys managed by the Business Surveys Unit of the European Commission (European Commission, 2007) highlight that great variety in expectations persists. Figure 1 shows that in the UK the number of optimists and pessimists tends to be rather similar at least since the inception of data availability (1985).[1]
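The kind of horse race described above -- ranking forecasting rules by out-of-sample mean squared error -- can be sketched in a few lines. The data-generating process below is an invented AR(1) standing in for the real-time UK GDP data, and the fixed AE parameter is arbitrary; which rule wins depends entirely on these assumptions, so the point is only the mechanics of the comparison.

```python
import numpy as np

# Compare adaptive expectations (AE) against a random-walk (RW) forecast
# of a simulated "GDP growth" series by out-of-sample squared error.

rng = np.random.default_rng(0)
T = 400
g = np.empty(T)
g[0] = 2.0
for t in range(1, T):                       # persistent (AR(1)) growth series
    g[t] = 0.5 + 0.75 * g[t - 1] + rng.normal(0, 0.5)

gamma = 0.3                                 # AE smoothing parameter
ae = g[0]
err_ae, err_rw = [], []
for t in range(1, T):
    err_ae.append((g[t] - ae) ** 2)         # AE forecast formed at t-1
    err_rw.append((g[t] - g[t - 1]) ** 2)   # RW forecast: last observation
    ae += gamma * (g[t] - ae)               # AE update

mse_ae = np.mean(err_ae)
mse_rw = np.mean(err_rw)
print(f"MSE AE: {mse_ae:.3f}, MSE RW: {mse_rw:.3f}")
```

In the paper the comparison is run on real-time data with recursive and rolling-window estimation, and also includes ARIMA and VAR predictors; this sketch keeps only the simplest two rules.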
In addition, evidence points to one-way information flows going from survey data to econometric models. In particular, Granger-causality, variance decomposition and Geweke’s instantaneous feedback tests suggest that the accuracy of the AE forecasting model can be further enhanced by the use of the information provided by the level of disagreement across survey beliefs. That is, as per GDP dynamics in the UK, the expectation feedback system looks like an open loop where possibly non-econometrically based beliefs play a key role with respect to realizations. All this affects the general validity of the widespread assumption that representative agents’ beliefs derive from optimal econometric models.
Results are robust to several methods of quantifications of qualitative survey observations as well as to standard forecasting rules estimated both recursively and via optimal-size rolling windows. They are also in line both with the literature supporting the non-econometrically-based content of the information captured by surveys carried out on laypeople and, interpreting MSE as a measure of volatility, with the stylized fact on the positive correlation between dispersion in beliefs and macroeconomic uncertainty.
All in all, our evidence raises some intriguing questions: Why do representative UK citizens seem to be systematically more boundedly rational than is usually hypothesized in the adaptive learning literature and the predictor choice approach? What persistently hampers them from using the most accurate statistical model? Are there econometric (objective) or psychological (subjective) impediments?
*Italian National Institute of Statistics (ISTAT), Department of Forecasting and Economic Analysis. The opinions expressed herein are those of the author (E-mail email@example.com) and do not necessarily reflect the views of ISTAT.
[1] The question is “How do you expect the general economic situation in the country to develop over the next 12 months?” Respondents may reply “it will…: i) get a lot better, ii) get a little better, iii) stay the same, iv) get a little worse, v) get a lot worse, vi) I do not know.” See European Commission (1997).
European Commission (2007). The Joint Harmonised EU Programme of Business and Consumer Surveys, User Guide, European Commission, Directorate-General for Economic and Financial Affairs, July.
M. Bovi (2012). “Are the Representative Agent’s Beliefs Based on Efficient Econometric Models?” Journal of Economic Dynamics and Control DOI: 10.1016/j.jedc.2012.10.005.
Here's the response from George Evans:
Comments on Maurizio Bovi, “Are the Representative Agent’s Beliefs Based on Efficient Econometric Models?”, by George Evans, University of Oregon: This is an interesting paper that has a lot of common ground with the adaptive learning literature. The techniques and a number of the arguments will be familiar to those of us who work in adaptive learning. The tenets of the adaptive learning approach can be summarized as follows: (1) Fully “rational expectations” (RE) are implausibly strong and implicitly ignore a coordination issue that arises because economic outcomes are affected by the expectations of firms and households (economic “agents”). (2) A more plausible view is that agents have bounded rationality with a degree of rationality comparable to economists themselves (the “cognitive consistency principle”). For example agents’ expectations might be based on statistical models that are revised and updated over time. On this approach we avoid assuming that agents are smarter than economists, but we also recognize that agents will not go on forever making systematic errors. (3) We should recognize that economic agents, like economists, do not agree on a single forecasting model. The economy is complex. Therefore, agents are likely to use misspecified models and to have heterogeneous expectations.
The focus of the adaptive learning literature has changed over time. The early focus was on whether agents using statistical learning rules would or would not eventually converge to RE, while the main emphasis now is on the ways in which adaptive learning can generate new dynamics, e.g. through discounting of older data and/or switching between forecasting models over time. I use the term “adaptive learning” broadly, to include, for example, the dynamic predictor selection literature.
Bovi’s paper “Are the Representative Agent’s Beliefs Based on Efficient Econometric Models” argues that with respect to GDP growth in the UK the answer to his question is no because 1) there is a single efficient econometric model, which is a version of AE (adaptive expectations), and 2) agents might be expected therefore to have learned to adopt this optimal forecasting model over time. However the degree of heterogeneity of expectations has not fallen over time, and thus agents are failing to learn to use the best forecasting model.
From the adaptive learning perspective, Bovi’s first result is intriguing, and merits further investigation, but his approach will look very familiar to those of us who work in adaptive learning. And the second point will surprise few of us: the extent of heterogeneous expectations is well-known, as is the fact that expectations remain persistently heterogeneous, and there is considerable work within adaptive learning that models this heterogeneity.
1) Bovi’s “efficient” model uses AE with the adaptive expectations parameter gamma updated over time in a way that aims to minimize the squared forecast error. This is in fact a simple adaptive learning model, which was proposed and studied in Evans and Ramey, “Adaptive expectations, underparameterization and the Lucas critique”, Journal of Monetary Economics (2006). We there suggested that agents might want to use AE as an optimal choice for a parsimonious (underparameterized) forecasting rule, showed what would determine the optimal choice of gamma, and provided an adaptive learning algorithm that would allow agents to update their choice of gamma over time in order to track unknown structural change. (Our adaptive learning rule exploits the fact that AE can be viewed as the forecast that arises from an IMA(1,1) time-series model, and in our rule the MA parameter is estimated and updated recursively using a constant gain rule.)
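A rough sketch of this kind of scheme -- adaptive expectations whose parameter gamma is itself tuned over time to chase the minimum squared forecast error -- might look as follows. The gradient recursion, the simulated series, and all tuning constants here are invented for illustration; this is not the Evans-Ramey algorithm itself, which works through the IMA(1,1) representation with a constant-gain estimate of the MA parameter.

```python
import numpy as np

# Adaptive expectations with a time-varying gain: gamma is nudged by a
# small stochastic-gradient step in the direction that reduces the
# squared forecast error.

rng = np.random.default_rng(1)
T = 2000
# a trend-plus-noise series, for which AE-type smoothing is sensible
g = np.cumsum(rng.normal(0, 0.2, T)) + rng.normal(0, 1.0, T)

gamma, step = 0.5, 0.002
f, psi = g[0], 0.0           # forecast and d(forecast)/d(gamma)
for t in range(1, T):
    err = g[t] - f                          # forecast error
    gamma = float(np.clip(gamma + step * err * psi, 0.01, 0.99))
    psi = (1 - gamma) * psi + err           # gradient recursion
    f = f + gamma * err                     # AE update with current gamma

print(f"adapted gamma: {gamma:.2f}")
```

The recursion for psi follows from differentiating the AE update with respect to gamma, so the scheme is a simple recursive gradient method; as the paragraph above notes, the attraction of such rules is that they let agents track an unknown, possibly changing, optimal gain.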
2) At the same time I am skeptical that economists will agree that there is a single best way to forecast GDP growth. For the US there is a lot of work by numerous researchers that strongly indicates that (i) choosing between univariate time-series models is controversial, i.e. there appears to be no single clearly best univariate forecasting model, and (ii) forecasting models for GDP growth should be multivariate and should include both current and lagged unemployment rates and the consumption to GDP ratio. Other forecasters have found a role for nonlinear (Markov-switching) dynamics. Thus I doubt that there will be agreement by economists on a single best forecasting model for GDP growth or other key macro variables. Hence we should expect households and firms also to entertain multiple forecasting models, and for different agents to use different models.
3) Even if there were a single forecasting model that clearly dominated, one would not expect homogeneity of expectations across agents or for heterogeneity to disappear over time. In Evans and Honkapohja, “Learning as a Rational Foundation for Macroeconomics and Finance”, forthcoming 2013 in R Frydman and E Phelps, Rethinking Expectations: The Way Forward for Macroeconomics, we point out that variations across agents in the extent of discounting and the frequency with which agents update parameter estimates, as well as the inclusion of idiosyncratic exogenous expectation shocks, will give rise to persistent heterogeneity. There are costs to forecasting, and some agents will have larger benefits from more accurate forecasts than other agents. For example, for some agents the forecast method advocated by Bovi will be too costly and an even simpler forecast will be adequate (e.g. a RW forecast that the coming year will be like last year, or a forecast based on mean growth over, say, the last five years).
4) When there are multiple models potentially in play, as there always are, the dynamic predictor selection approach initiated by Brock and Hommes means that because of varying costs of forecast methods, and heterogeneous costs across agents, not all agents will want to use what appears to be the best performing model. We therefore expect heterogeneous expectations at any moment in time. I do not regard this as a violation of the cognitive consistency principle – even economists will find that in some circumstances in their personal decision-making they use more boundedly rational forecast methods than in other situations in which the stakes are high.
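The Brock-Hommes mechanism invoked above can be sketched with a toy simulation: agents choose between a costly "sophisticated" forecast and a free naive one, with population shares given by a logit (discrete-choice) rule over recent forecasting fitness. All parameter values and the simulated series are invented for illustration.

```python
import numpy as np

# Dynamic predictor selection between two forecasting rules.

rng = np.random.default_rng(2)
T = 500
x = np.empty(T)
x[0] = 0.0
for t in range(1, T):                # persistent AR(1) state
    x[t] = 0.9 * x[t - 1] + rng.normal(0, 1)

beta, cost = 1.0, 0.5                # choice intensity; cost of good rule
fit_s = fit_n = 0.0                  # discounted fitness of each rule
shares = []
for t in range(1, T):
    f_soph = 0.9 * x[t - 1]          # "sophisticated": knows the AR term
    f_naive = x[t - 1]               # naive / random-walk forecast
    # logit share choosing the sophisticated rule, given current fitness
    w = np.exp(beta * np.array([fit_s, fit_n]))
    shares.append(w[0] / w.sum())
    # update fitness with realized squared errors (and the rule's cost)
    fit_s = 0.9 * fit_s - 0.1 * ((x[t] - f_soph) ** 2 + cost)
    fit_n = 0.9 * fit_n - 0.1 * (x[t] - f_naive) ** 2

print(f"average share using the costly predictor: {np.mean(shares):.2f}")
```

Because the better predictor carries a cost, only part of the population uses it in the long run, so heterogeneous expectations persist -- which is exactly the point of the paragraph above.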
In conclusion, here is my two sentence summary for Maurizio Bovi: Your paper will find an interested audience among those of us who work in this area. Welcome to the adaptive learning approach.
Should New Keynesian models include a specific role for money (over and above specifying the interest rate as the policy variable)? This is a highly wonkish, but mostly accessible explanation from Bennett McCallum:
The Role of Money in New-Keynesian Models, by Bennett T. McCallum, Carnegie Mellon University and National Bureau of Economic Research, Serie de Documentos de Trabajo (Working Paper Series) No. 2012-019, October 2012
Here's the bottom line:
...we drew several conclusions supportive of the idea that a central bank that ignores money and banking will seriously misjudge the proper interest rate policy action to stabilize inflation in response to a productivity shock in the production function for output. Unfortunately, some readers discovered an error; we made a mistake in linearization that, when corrected, greatly diminished the magnitude of some of the effects of including the banking sector. There seems now to be some interest in developing improved models of this type. Marvin Goodfriend (MG) is working with a PhD student on this topic. At this point I have not been able to give a convincing argument that one needs to include M. ...
There is one respect in which it is nevertheless the case that a rule for the monetary base is superior to a rule for the interbank interest rate. In this context we are clearly discussing the choice of a controllable instrument variable—not one of the "target rules" favored by Svensson and Woodford, which are more correctly called "targets." Suppose that the central bank desires for its rule to be verifiable by the public. Then it will arguably need to be a non-activist rule, one that normally keeps the instrument setting unchanged over long spans of time. In that case we know that in the context of a standard NK model, an interest rate instrument will not be viable. That is, the rule will not satisfy the Taylor Principle, which is necessary for "determinacy." The latter condition is not, I argue, what is crucial for well-designed monetary policy, but least-squares (LS) learnability is, and it is not present when the TP is not satisfied. This is well known from, e.g., Evans and Honkapohja (2001), Bullard and Mitra (2002), McCallum (2003, 2009). ...
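The learnability point above can be made concrete in a highly stylized way. Suppose inflation depends on expected inflation in reduced form, pi = mu + alpha*E[pi]: an active rule satisfying the Taylor Principle makes alpha < 1, while a fixed (non-activist) instrument setting makes alpha >= 1. Under simple adaptive updating, expectations then converge in the first case and drift away in the second. The reduced form and all numbers below are illustrative assumptions, not taken from the cited papers.

```python
# Adaptive learning of inflation expectations in the reduced form
# pi_t = mu + alpha * E[pi], with expectations updated by a constant gain.

def learn(alpha, mu=2.0, gain=0.05, T=500):
    exp_pi = 0.0                         # initial expectation
    for _ in range(T):
        pi = mu + alpha * exp_pi         # realized inflation
        exp_pi += gain * (pi - exp_pi)   # adaptive updating
    return exp_pi

stable = learn(alpha=0.5)    # Taylor Principle satisfied: converges to 4
unstable = learn(alpha=1.1)  # non-activist rule: expectations explode
print(f"alpha=0.5 -> {stable:.2f}; alpha=1.1 -> {unstable:.2f}")
```

With alpha = 0.5 the fixed point is mu/(1 - alpha) = 4 and the learning dynamics settle there; with alpha = 1.1 every update pushes expectations further out, which is the learnability failure the passage associates with violating the Taylor Principle.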
Alan Kirman on how macroeconomics needs to change (I'm still thinking about his idea that the economy should be modeled as "a system which self organizes, experiencing sudden and large changes from time to time"):
What’s the use of economics?, by Alan Kirman, Vox EU: The simple question that was raised during a recent conference organized by Diane Coyle at the Bank of England was to what extent the teaching of economics has been - or should be - modified in the light of the current economic crisis. The simple answer is that the economics profession is unlikely to change. Why would economists be willing to give up much of their human capital, painstakingly nurtured for over two centuries? For macroeconomists in particular, the reaction has been to suggest that modifications of existing models to take account of ‘frictions’ or ‘imperfections’ will be enough to account for the current evolution of the world economy. The idea is that once students have understood the basics, they can be introduced to these modifications.
A turning point in economics
However, other economists such as myself feel that we have finally reached the turning point in economics where we have to radically change the way we conceive of and model the economy. The crisis is an opportune occasion to carefully investigate new approaches. Paul Seabright hit the nail on the head; economists tend to inaccurately portray their work as a steady and relentless improvement of their models whereas, actually, economists tend to chase an empirical reality that is changing just as fast as their modeling. I would go further; rather than making steady progress towards explaining economic phenomena professional economists have been locked into a narrow vision of the economy. We constantly make more and more sophisticated models within that vision until, as Bob Solow put it, “the uninitiated peasant is left wondering what planet he or she is on” (Solow 2006).
In this column, I will briefly outline some of the problems the discipline of economics faces; problems that have been shown up in stark relief during the current crisis. Then I will come back to what we should try to teach students of economics.
Entrenched views on theory and reality
The typical attitude of economists is epitomized by Mario Draghi, President of the European Central Bank. Regarding the Eurozone crisis, he said:
“The first thing that came to mind was something that people said many years ago and then stopped saying it: The euro is like a bumblebee. This is a mystery of nature because it shouldn’t fly but instead it does. So the euro was a bumblebee that flew very well for several years. And now – and I think people ask ‘how come?’ – probably there was something in the atmosphere, in the air, that made the bumblebee fly. Now something must have changed in the air, and we know what after the financial crisis. The bumblebee would have to graduate to a real bee. And that’s what it’s doing” (Draghi 2012)
What Draghi is saying is that, according to our economic models, the Eurozone should not have flown. Entomologists (those who study insects) of old, with simpler models, came to the conclusion that bumblebees should not be able to fly. Their reaction was later to rethink their models in light of irrefutable evidence. Yet, the economist’s instinct is to attempt to modify reality in order to fit a model that has been built on longstanding theory. Unfortunately, that very theory is itself based on shaky foundations.
Economic theory can mislead
Every student in economics is faced with the model of the isolated optimizing individual who makes his choices within the constraints imposed by the market. Somehow, the axioms of rationality imposed on this individual are not very convincing, particularly to first time students. But the student is told that the aim of the exercise is to show that there is an equilibrium, there can be prices that will clear all markets simultaneously. And, furthermore, the student is taught that such an equilibrium has desirable welfare properties. Importantly, the student is told that since the 1970s it has been known that whilst such a system of equilibrium prices may exist, we cannot show that the economy would ever reach an equilibrium nor that such an equilibrium is unique.
The student then moves on to macroeconomics and is told that the aggregate economy or market behaves just like the average individual she has just studied. She is not told that these general models in fact poorly reflect reality. For the macroeconomist, this is a boon since he can now analyze the aggregate allocations in an economy as though they were the result of the rational choices made by one individual. The student may find this even more difficult to swallow when she is aware that peoples’ preferences, choices and forecasts are often influenced by those of the other participants in the economy. Students take a long time to accept the idea that the economy’s choices can be assimilated to those of one individual.
A troubling choice for macroeconomists
Macroeconomists are faced with a stark choice: either move away from the idea that we can pursue our macroeconomic analysis whilst only making assumptions about isolated individuals, ignoring interaction; or avoid all the fundamental problems by assuming that the economy is always in equilibrium, forgetting about how it ever got there.
Exogenous shocks? Or a self-organizing system?
Macroeconomists therefore worry about something that seems, to the uninformed outsider, paradoxical. How does the economy experience fluctuations or cycles whilst remaining in equilibrium? The basic macroeconomic idea is, of course, that the economy is in a steady state and that it is hit from time to time by exogenous shocks. Yet, this is entirely at variance with the idea that economists may be dealing with a system which self organizes, experiencing sudden and large changes from time to time.
There are two reasons why the latter explanation is better than the former. First, it is very difficult to find significant events that we can point to in order to explain major turning points in the evolution of economies. Second, the idea that the economy is sailing on an equilibrium path but is from time to time buffeted by unexpected storms just does not pass what Bob Solow has called the ‘smell test’. To quote Willem Buiter (2009),
“Those of us who worry about endogenous uncertainty arising from the interactions of boundedly rational market participants cannot but scratch our heads at the insistence of the mainline models that all uncertainty is exogenous and additive”
Some teaching suggestions
New thinking is imperative:
- We should spend more time insisting on the importance of coordination as the main problem of modern economies rather than efficiency. Our insistence on the latter has diverted attention from the former.
- We should cease to insist on the idea that the aggregation of the choices and actions of individuals who directly interact with each other can be captured by the idea of the aggregate acting as only one of these many individuals. The gap between micro- and macrobehavior is worrying.
- We should recognize that some of the characteristics of aggregates are caused by aggregation itself. The continuous reaction of the aggregate may be the result of individuals making simple, binary discontinuous choices. For many phenomena, it is much more realistic to think of individuals as having thresholds - which cause them to react - rather than reacting in a smooth, gradual fashion to changes in their environment. Cournot had this idea; it is a pity that we have lost sight of it. Indeed, the aggregate itself may also have thresholds which cause it to react. When enough individuals make a particular choice, the whole of society may then move. When the number of individuals is smaller, there is no such movement. One has only to think of the results of voting.
- All students should be obliged to collect their own data about some economic phenomenon at least once in their career. They will then get a feeling for the importance of institutions and of the interaction between agents and its consequences. Perhaps, best of all, this will restore their enthusiasm for economics!
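The threshold point in the list above can be made concrete with a small simulation. This is only an illustrative sketch of my own (the uniform threshold distribution and the population size are arbitrary choices, not drawn from any model in the text): each individual responds discontinuously, yet the aggregate responds smoothly.

```python
import random

def individual_choice(stimulus, threshold):
    """Binary choice: act (1) only once the stimulus crosses a personal threshold."""
    return 1 if stimulus >= threshold else 0

def aggregate_response(stimulus, thresholds):
    """Fraction of the population acting at a given stimulus level."""
    return sum(individual_choice(stimulus, t) for t in thresholds) / len(thresholds)

random.seed(0)
# A population with heterogeneous (uniformly spread) thresholds
thresholds = [random.uniform(0.0, 1.0) for _ in range(10_000)]

# Each individual reacts discontinuously to the stimulus...
print(individual_choice(0.49, 0.5), individual_choice(0.51, 0.5))  # -> 0 1

# ...yet the aggregate response rises smoothly as the stimulus grows
for s in (0.2, 0.4, 0.6, 0.8):
    print(s, round(aggregate_response(s, thresholds), 2))
```

The smooth aggregate curve is entirely an artifact of aggregating heterogeneous step functions, which is exactly why inferring individual behavior from the aggregate (or vice versa) can mislead.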
Some use for traditional theory
Does this mean that we should cease to teach ‘standard’ economic theory to our students? Surely not. If we did so, these students would not be able to follow the current economic debates. As Max Planck has said, “Physics is not about discovering the natural laws that govern the universe, it is what physicists do”. For the moment, standard economics is what economists do. But we owe it to our students to point out difficulties with the structure and assumptions of our theory. Although we are still far from a paradigm shift, in the longer run the paradigm will inevitably change. We would all do well to remember that current economic thought will one day be taught as history of economic thought.
Buiter, W (2009), “The unfortunate uselessness of most ‘state of the art’ academic monetary economics”, Financial Times online, 3 March.
Coyle, D (2012) “What’s the use of economics? Introduction to the Vox debate”, VoxEu.org, 19 September.
Davies, H (2012), “Economics in Denial”, ProjectSyndicate.org, 22 August.
Solow, R (2006), “Reflections on the Survey” in Colander, D., The Making of an Economist. Princeton, Princeton University Press.
[This one is wonkish. It's (I think) one of the more important papers from the St. Louis Fed conference.]
One thing that doesn't get enough attention in DSGE models, at least in my opinion, is the set of constraints and implicit assumptions imposed when the theoretical model is log-linearized. This paper by Tony Braun and Yuichiro Waki helps to fill that void by comparing a theoretical true economy to its log-linearized counterpart, and showing that the results of the two models can be quite different when the economy is at the zero bound. For example, multipliers that are greater than two in the log-linearized version are smaller -- usually near one -- in the true model (thus, fiscal policy remains effective, but may need to be more aggressive than the log-linear model would imply). Other results change as well, and there are sign changes in some cases, leading the authors to conclude that "we believe that the safest way to proceed is to entirely avoid the common practice of log-linearizing the model around a stable price level when analyzing liquidity traps."
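To see the kind of term a log-linearization can silently discard, consider the resource cost of price adjustment in a Rotemberg-style model, which is quadratic in inflation and therefore vanishes from a first-order approximation taken at zero inflation. This is only a sketch: the quadratic functional form follows the paper's setup, but the coefficient phi = 100 and the inflation values are illustrative choices of mine, not the paper's calibration.

```python
def true_cost_share(pi, phi=100.0):
    """Rotemberg price-adjustment cost as a share of output: (phi/2) * pi**2.
    phi = 100 is an illustrative value, not the paper's calibration."""
    return 0.5 * phi * pi ** 2

def loglinear_cost_share(pi):
    """First-order (log-linear) approximation around zero inflation.
    The term is quadratic in pi, so to first order it is identically zero --
    it drops out of the log-linearized resource constraint entirely."""
    return 0.0

# Near the approximation point the omitted term is negligible...
print(true_cost_share(0.001))  # -> 5e-05
# ...but for large deflations the dropped term is material
for pi in (-0.005, -0.025, -0.05):
    print(pi, true_cost_share(pi), loglinear_cost_share(pi))
```

The approximation error grows with the square of the distance from the approximation point, which is the sense in which log-linear solutions are local and can break down exactly when shocks are big enough to make the zero bound bind.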
Here's part of the introduction and the conclusion to the paper:
Some Unpleasant Properties of Log-Linearized Solutions when the Nominal Interest Rate is Zero, by Tony Braun and Yuichiro Waki: Abstract Does fiscal policy have qualitatively different effects on the economy in a liquidity trap? We analyze a nonlinear stochastic New Keynesian model and compare the true and log-linearized equilibria. Using the log-linearized equilibrium conditions, the answer to the above question is yes. However, for the true nonlinear model the answer is no. For a broad range of empirically relevant parameterizations, labor falls in response to a tax cut in the log-linearized economy but rises in the true economy. While the government purchase multiplier is above two in the log-linearized economy, it is about one in the true economy.
1 Introduction The recent experiences of Japan, the United States, and Europe with zero/near-zero nominal interest rates have raised new questions about the conduct of monetary and fiscal policy in a liquidity trap. A large and growing body of new research has emerged that provides answers using New Keynesian (NK) frameworks that explicitly model the zero bound on the nominal interest rate. One conclusion that has emerged is that fiscal policy has different effects on the economy when the nominal interest rate is zero. Eggertsson (2011) finds that hours worked fall in response to a labor tax cut when the nominal interest rate is zero, a property that is referred to as the “paradox of toil,” and Christiano, Eichenbaum, and Rebelo (2011), Woodford (2011) and Erceg and Lindé (2010) find that the size of the government purchase multiplier is substantially larger than one when the nominal interest rate is zero.
These and other results (see, e.g., Del Negro, Eggertsson, Ferrero, and Kiyotaki (2010), Bodenstein, Erceg, and Guerrieri (2009), Eggertsson and Krugman (2010)) have been derived in setups that respect the nonlinearity in the Taylor rule but log-linearize the remaining equilibrium conditions about a steady state with a stable price level. Log-linearized NK models require large shocks to generate a binding zero lower bound for the nominal interest rate, and the shocks must be even larger if these models are to reproduce the measured declines in output and inflation that occurred during the Great Depression or the Great Recession of 2007-2009. Log-linearizations are local solutions that only work within a given radius of the point where the approximation is taken. Outside of this radius these solutions break down (see, e.g., Den Haan and Rendahl (2009)). The objective of this paper is to document that such a breakdown can occur when analyzing the zero bound.
We study the properties of a nonlinear stochastic NK model when the nominal interest rate is constrained at its zero lower bound. Our tractable framework allows us to provide a partial analytic characterization of equilibrium and to numerically compute all equilibria when the zero interest state is persistent. There are no approximations needed when computing equilibria and our numerical solutions are accurate up to the precision of the computer. A comparison with the log-linearized equilibrium identifies a severe breakdown of the log-linearized approximate solution. This breakdown occurs when using parameterizations of the model that reproduce the U.S. Great Depression and the U.S. Great Recession.
Conditions for existence and uniqueness of equilibrium based on the log-linearized equilibrium conditions are incorrect and offer little or no guidance for existence and uniqueness of equilibrium in the true economy. The characterization of equilibrium is also incorrect.
These three unpleasant properties of the log-linearized solution have the implication that relying on it to make inferences about the properties of fiscal policy in a liquidity trap can be highly misleading. Empirically relevant parameterization/shock combinations that yield the paradox of toil in the log-linearized economy produce orthodox responses of hours worked in the true economy. The same parameterization/shock combinations that yield large government purchases multipliers in excess of two in the log-linearized economy, produce government purchase multipliers as low as 1.09 in the nonlinear economy. Indeed, we find that the most plausible parameterizations of the nonlinear model have the property that there is no paradox of toil and that the government purchase multiplier is close to one.
We make these points using a stochastic NK model that is similar to specifications considered in Eggertsson (2011) and Woodford (2011). The Taylor rule respects the zero lower bound of the nominal interest rate, and a preference discount factor shock that follows a two state Markov chain produces a state where the interest rate is zero. We assume Rotemberg (1996) price adjustment costs, instead of Calvo price setting. When log-linearized, this assumption is innocuous - the equilibrium conditions for our model are identical to those in Eggertsson (2011) and Woodford (2011), with a suitable choice of the price adjustment cost parameter. Moreover, the nonlinear economy doesn’t have any endogenous state variables, and the equilibrium conditions for hours and inflation can be reduced to two nonlinear equations in these two variables when the zero bound is binding.
These two nonlinear equations are easy to solve and are the nonlinear analogues of what Eggertsson (2011) and Eggertsson and Krugman (2010) refer to as “aggregate demand” (AD) and “aggregate supply” (AS) schedules. This makes it possible for us to identify and relate the sources of the approximation errors associated with using log-linearizations to the shapes and slopes of these curves, and to also provide graphical intuition for the qualitative differences between the log-linear and nonlinear economies.
Our analysis proceeds in the following way. We first provide a complete characterization of the set of time invariant Markov zero bound equilibria in the log-linearized economy. Then we go on to characterize equilibrium of the nonlinear economy. Finally, we compare the two economies and document the nature and source of the breakdowns associated with using log-linearized equilibrium conditions. An important distinction between the nonlinear and log-linearized economy relates to the resource cost of price adjustment. This cost changes endogenously as inflation changes in the nonlinear model and modeling this cost has significant consequences for the model’s properties in the zero bound state. In the nonlinear model a labor tax cut can increase hours worked and decrease inflation when the interest rate is zero. No equilibrium of the log-linearized model has this property. We show that these and other differences in the properties of the two models are precisely due to the fact that the resource cost of price adjustment is absent from the resource constraint of the log-linearized model. ...
5 Concluding remarks In this paper we have documented that it can be very misleading to rely on the log-linearized economy to make inferences about existence of an equilibrium, uniqueness of equilibrium or to characterize the local dynamics of equilibrium. We have illustrated that these problems arise in empirically relevant parameterizations of the model that have been chosen to match observations from the Great Depression and Great Recession.
We have also documented the response of the economy to fiscal shocks in calibrated versions of our nonlinear model. We found that the paradox of toil is not a robust property of the nonlinear model and that it is quantitatively small even when it occurs. Similarly, the evidence presented here suggests that the government purchase GDP multiplier is not much above one in our nonlinear economy.
Although we encountered situations where the log-linearized solution worked reasonably well and the model exhibited the paradox of toil and a government purchase multiplier above one, the magnitude of these effects was quantitatively small. This result was also very tenuous. There is no simple characterization of when the log-linearization works well. Breakdowns can occur in regions of the parameter space that are very close to ones where the log-linear solution works. In fact, it is hard to draw any conclusions about when one can safely rely on log-linearized solutions in this setting without also solving the nonlinear model. For these reasons we believe that the safest way to proceed is to entirely avoid the common practice of log-linearizing the model around a stable price level when analyzing liquidity traps.
This raises a question. How should one proceed with solution and estimation of medium or large scale NK models with multiple shocks and endogenous state variables when considering episodes with zero nominal interest rates? One way forward is proposed in work by Adjemian and Juillard (2010) and Braun and Körber (2011). These papers solve NK models using extended path algorithms.
We conclude by briefly discussing some extensions of our analysis. In this paper we assumed that the discount factor shock followed a time-homogeneous two state Markov chain with no shock being the absorbing state. In our current work we relax this final assumption and consider general Markov switching stochastic equilibria in which there are repeated swings between episodes with a positive interest rate and zero interest rates. We are also interested in understanding the properties of optimal monetary policy in the nonlinear model. Eggertsson and Woodford (2003), Jung, Teranishi, and Watanabe (2005), Adam and Billi (2006), Nakov (2008), and Werning (2011) consider optimal monetary policy problems subject to a non-negativity constraint on the nominal interest rate, using implementability conditions derived from log-linearized equilibrium conditions. The results documented here suggest that the properties of an optimal monetary policy could be different if one uses the nonlinear implementability conditions instead.
 Eggertsson (2011) requires a 5.47% annualized shock to the preference discount factor in order to account for the large output and inflation declines that occurred in the Great Depression. Coenen, Orphanides, and Wieland (2004) estimate a NK model to U.S. data from 1980-1999 and find that only very large shocks produce a binding zero nominal interest rate.
 Under Calvo price setting, in the nonlinear economy a particular moment of the price distribution is an endogenous state variable and it is no longer possible to compute an exact solution to the equilibrium.
 This distinction between the log-linearized and nonlinear resource constraint is not specific to our model of adjustment costs but also arises under Calvo price adjustment (see e.g. Braun and Waki (2010)).
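The two-state Markov structure Braun and Waki use for the discount-factor shock, with the no-shock state absorbing, can be sketched in a few lines. All numerical values below (the persistence probability, the two discount factors, the horizon) are illustrative placeholders of mine, not the paper's calibration.

```python
import random

def simulate_discount_factor(p_stay=0.8, beta_normal=0.99, beta_shock=1.02,
                             T=50, seed=1):
    """Two-state Markov chain for the preference discount factor, with the
    no-shock state absorbing. Values are illustrative, not the paper's."""
    random.seed(seed)
    state, path = 1, []          # start in the shock state (zero-bound episode)
    for _ in range(T):
        path.append(beta_shock if state == 1 else beta_normal)
        if state == 1 and random.random() > p_stay:
            state = 0            # absorbing: once the shock ends it never returns
    return path

path = simulate_discount_factor()
exit_t = path.index(0.99) if 0.99 in path else len(path)
print("shock lasted", exit_t, "periods; absorbed thereafter:",
      all(b == 0.99 for b in path[exit_t:]))
```

With this structure the expected duration of the zero-bound episode is 1/(1 - p_stay) periods, which is what makes the zero-interest state "persistent" in the paper's sense.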
This is from Andrew Haldane, Executive Director, Financial Stability, Bank of England:
What have the economists ever done for us?, by Andrew G Haldane, Vox EU: There is a long list of culprits when it comes to assigning blame for the financial crisis. At least in this instance, failure has just as many parents as success. But among the guilty parties, economists played a special role in contributing to the problem. We are duty bound to be part of the solution (see Coyle 2012). Our role in the crisis was, in a nutshell, the result of succumbing to an intellectual virus which took hold of the body financial from the 1990s onwards.
One strain of this virus is an old one. Cycles in money and bank credit are familiar from centuries past. And yet, for perhaps a generation, the symptoms of this old virus were left untreated. That neglect allowed the infection to spread from the financial system to the real economy, with near-fatal consequences for both.
In many ways, this was an odd disease to have contracted. The symptoms should have been all too obvious from history. The interplay of bank money and credit and the wider economy has been pivotal to the mandate of central banks for centuries. For at least a century, that was recognized in the design of public policy frameworks. The management of bank money and credit was a clear public policy prerequisite for maintaining broader macroeconomic and social stability.
Two developments – one academic, one policy-related – appear to have been responsible for this surprising memory loss. The first was the emergence of micro-founded dynamic stochastic general equilibrium (DSGE) models in economics. Because these models were built on real-business-cycle foundations, financial factors (asset prices, money and credit) played distinctly second fiddle, if they played a role at all.
The second was an accompanying neglect for aggregate money and credit conditions in the construction of public policy frameworks. Inflation targeting assumed primacy as a monetary policy framework, with little role for commercial banks' balance sheets as either an end or an intermediate objective. And regulation of financial firms was in many cases taken out of the hands of central banks and delegated to separate supervisory agencies with an institution-specific, non-monetary focus.
Coincidentally or not, what happened next was extraordinary. Commercial banks' balance sheets grew by the largest amount in human history. For example, having flat-lined for a century, bank assets-to-GDP in the UK rose by an order of magnitude from 1970 onwards. A similar pattern was found in other advanced economies.
This balance sheet explosion was, in one sense, no one’s fault and no one’s responsibility. Not monetary policy authorities, whose focus was now inflation and whose models scarcely permitted bank balance sheets a walk-on role. And not financial regulators, whose focus was on the strength of individual financial institutions.
Yet this policy neglect has since shown itself to be far from benign. The lessons of financial history have been painfully re-taught since 2008. They need not be forgotten again. This has important implications for the economics profession and for the teaching of economics. For one, it underscores the importance of sub-disciplines such as economic and financial history. As Galbraith said, "There can be few fields of human endeavor in which history counts for so little as in the world of finance." Economics can ill afford to re-commit that crime.
Second, it underlines the importance of reinstating money, credit and banking in the core curriculum, as well as refocusing on models of the interplay between economic and financial systems. These are areas that also fell out of fashion during the pre-crisis boom.
Third, the crisis showed that institutions really matter, be it commercial banks or central banks, when making sense of crises, their genesis and aftermath. They too were conveniently, but irresponsibly, airbrushed out of workhorse models. They now needed to be repainted back in.
The second strain of intellectual virus is a new, more virulent one. This has been made dangerous by increased integration of markets of all types, economic, but especially financial and social. In a tightly woven financial and social web, the contagious consequences of a single event can thus bring the world to its knees. That was the Lehman Brothers story.
These cliff-edge dynamics in socioeconomic systems are becoming increasingly familiar. Social dynamics around the Arab Spring in many ways closely resembled financial system dynamics following the failure of Lehman Brothers four years ago. Both are complex, adaptive networks. When gripped by fear, such systems are known to behave in a highly non-linear fashion due to cascading actions and reactions among agents. These systems exhibit a robust yet fragile property: swan-like serenity one minute, riot-like calamity the next.
These dynamics do not emerge from most mainstream models of the financial system or real economy. The reason is simple. The majority of these models use the framework of a single representative agent (or a small number of them). That effectively neuters the possibility of complex actions and interactions between agents shaping system dynamics.
The financial system is an archetypical complex, adaptive socioeconomic system – and has become more so over time. In the early years of this century, financial chains lengthened dramatically, system-wide maturity mismatches widened alarmingly and intrafinancial system claims ballooned exponentially. The system became, in consequence, a hostage to its weakest link. When that broke, so too did the system as a whole. Communications networks and social media then propagated fear globally.
Conventional models, based on the representative agent and with expectations mimicking fundamentals, had no hope of capturing these system dynamics. They are fundamentally ill-suited to capturing today’s networked world, in which social media shape expectations, shape behavior and thus shape outcomes.
This calls for an intellectual reinvestment in models of heterogeneous, interacting agents, an investment likely to be every bit as great as the one that economists have made in DSGE models over the past 20 years. Agent-based modeling is one, but only one, such avenue. The construction and simulation of highly non-linear dynamics in systems of multiple equilibria represents unfamiliar territory for most economists. But this is not a journey into the unknown. Sociologists, physicists, ecologists, epidemiologists and anthropologists have for many years sought to understand just such systems. Following their footsteps will require a sense of academic adventure sadly absent in the pre-crisis period.
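A toy version of the threshold-contagion dynamics Haldane describes can be simulated in a few lines. Everything here is a stylized assumption of mine (the ring-plus-random network, the thresholds, the single initial failure), not a model from the article; the point is only that a small change in agents' sensitivity flips the system from a local fizzle to a global cascade.

```python
import random

def cascade(n=200, k=4, frac_threshold=0.25, seed=2):
    """Toy contagion: a node 'fails' once at least frac_threshold of the
    k nodes it watches have failed. The network is a ring plus random
    links, so shocks travel both locally and across the system.
    All parameters are illustrative."""
    random.seed(seed)
    nbrs = {}
    for i in range(n):
        ring = (i - 1) % n                     # always watch your ring neighbour
        others = random.sample([j for j in range(n) if j not in (i, ring)], k - 1)
        nbrs[i] = [ring] + others
    failed = {0}                               # one initial failure
    changed = True
    while changed:
        changed = False
        for i in range(n):
            if i not in failed and sum(nb in failed for nb in nbrs[i]) / k >= frac_threshold:
                failed.add(i)
                changed = True
    return len(failed)

# A low threshold lets a single failure sweep the whole system;
# raise the threshold slightly and the same shock stays entirely local.
print(cascade(frac_threshold=0.25), cascade(frac_threshold=0.5))  # -> 200 1
```

The sharp jump from a system-wide cascade to a purely local failure as the threshold rises is the "robust yet fragile" property in miniature: swan-like serenity at one parameter value, riot-like calamity at the next.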
Economics in Denial, by Howard Davies, Commentary, Project Syndicate: In an exasperated outburst, just before he left the presidency of the European Central Bank, Jean-Claude Trichet complained that, “as a policymaker during the crisis, I found the available [economic and financial] models of limited help. In fact, I would go further: in the face of the crisis, we felt abandoned by conventional tools.” ... It was a ... serious indictment of the economics profession, not to mention all those extravagantly rewarded finance professors in business schools from Harvard to Hyderabad. ...
But it is not clear that a majority of the profession yet accepts [this]... The so-called “Chicago School” has mounted a robust defense of its rational expectations-based approach, rejecting the notion that a rethink is required. The Nobel laureate economist Robert Lucas has argued that the crisis was not predicted because economic theory predicts that such events cannot be predicted. So all is well. ...
We should not focus attention exclusively on economists, however. Arguably the elements of the conventional intellectual toolkit found most wanting are the capital asset pricing model and its close cousin, the efficient-market hypothesis. Yet their protagonists see no problems to address.
On the contrary, the University of Chicago’s Eugene Fama has described the notion that finance theory was at fault as “a fantasy,” and argues that “financial markets and financial institutions were casualties rather than causes of the recession.” And the efficient-market hypothesis that he championed cannot be blamed...
Fortunately, others in the profession ... have been chastened by the events of the last five years... They are working hard ... to develop new approaches...
There is resistance from the old guard, but I'm modestly optimistic. Some people are trying to ask, and answer, the right questions. However, it's a slow process.
From the archives (September 2009), for no particular reason:
New Old Keynesians: There is no grand, unifying theoretical structure in economics. We do not have one model that rules them all. Instead, what we have are models that are good at answering some questions - the ones they were built to answer - and not so good at answering others.
If I want to think about inflation in the very long run, the classical model and the quantity theory is a very good guide. But the model is not very good at looking at the short-run. For questions about how output and other variables move over the business cycle and for advice on what to do about it, I find the Keynesian model in its modern form (i.e. the New Keynesian model) to be much more informative than other models that are presently available.
But the New Keynesian model has its limits. It was built to capture "ordinary" business cycles driven by price sluggishness of the sort that can be captured by the Calvo model of price rigidity. The standard versions of this model do not explain how financial collapses of the type we just witnessed come about, and hence have little to say about what to do about them (which makes me suspicious of the results touted by people using multipliers derived from DSGE models based upon ordinary price rigidities). For these types of disturbances, we need some other type of model, but it is not clear what model is needed. There is no generally accepted model of financial catastrophe that captures the variety of financial market failures we have seen in the past.
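For readers who haven't seen it, the Calvo mechanism mentioned above boils down to each firm keeping its price in any given period with some fixed probability theta, which pins down the average duration of prices as 1/(1 - theta). A minimal sketch (theta = 0.75 per quarter is a common textbook illustration, not a claim about any particular model):

```python
def calvo_duration(theta):
    """Expected number of periods a price stays fixed when each firm
    keeps its price with probability theta each period: 1 / (1 - theta)."""
    return 1.0 / (1.0 - theta)

def fraction_unchanged(theta, periods):
    """Share of firms that have not reset their price after `periods` periods."""
    return theta ** periods

# theta = 0.75 per quarter implies prices fixed for a year on average
print(calvo_duration(0.75))         # -> 4.0
print(fraction_unchanged(0.75, 4))  # -> 0.31640625
```

The point of the passage above is that this kind of smooth, exogenously timed price stickiness is the model's only friction, which is why it has little to say when the disturbance is a financial collapse rather than sluggish price adjustment.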
But what model do we use? Do we go back to old Keynes, or to the 1978 model that Robert Gordon likes? Do we take some of the variations of the New Keynesian model that include effects such as financial accelerators and try to enhance those? Is that the right direction to proceed? Are the Austrians right? Do we focus on Minsky? Or do we need a model that we haven't discovered yet?
We don't know, and until we do, I will continue to use the model I think gives the best answer to the question being asked. The reason that many of us looked backward to the IS-LM model to help us understand the present crisis is that none of the current models were capable of explaining what we were going through. The models were largely constructed to analyze policy in the context of a Great Moderation, i.e. within a fairly stable environment. They had little to say about financial meltdown. My first reaction was to ask if the New Keynesian model had any derivative forms that would allow us to gain insight into the crisis and what to do about it and, while there were some attempts in that direction, the work was somewhat isolated and had not gone through the type of thorough analysis needed to develop robust policy prescriptions. There was something to learn from these models, but they really weren't up to the task of delivering specific answers. That may come, but we aren't there yet.
So, if nothing in the present is adequate, you begin to look to the past. The Keynesian model was constructed to look at exactly the kinds of questions we needed to answer, and as long as you are aware of the limitations of this framework - the ones that modern theory has discovered - it does provide you with a means of thinking about how economies operate when they are running at less than full employment. This model had already worried about fiscal policy at the zero interest rate bound; it had already thought about Say's law, the paradox of thrift, monetary versus fiscal policy, changing interest and investment elasticities in a crisis, etc., etc., etc. We were in the middle of a crisis and didn't have time to wait for new theory to be developed; we needed answers, answers that the elegant models constructed over the last few decades simply could not provide. The Keynesian model did provide answers. We knew the answers had limitations - we were aware of the theoretical developments in modern macro and what they implied about the old Keynesian model - but it also provided guidance at a time when guidance was needed, and it did so within a theoretical structure that was built to be useful at times like the ones we were facing. I wish we had better answers, but we didn't, so we did the best we could. And the best we could involved at least asking what the Keynesian model would tell us, and then asking if that advice has any relevance today. Sometimes it didn't, but that was no reason to ignore the answers when it did.
[So, depending on the question being asked, I am a New Keynesian, an Old Keynesian, a Classicist, etc. But as noted here, if you are going to take guidance from the older models it is essential that you understand their limitations -- these models should not be used without a thorough knowledge of the pitfalls involved and where they can and cannot be avoided -- the kind of knowledge someone like Paul Krugman surely has at hand.]
Charles Plosser, President of the Philadelphia Fed, explains the limitations of DSGE models, particularly models of the New Keynesian variety used for policy analysis. (He doesn't reject the DSGE methodology, and that will disappoint some, but he does raise good questions about this class of models. I believe the macroeconomics literature is going to fully explore these micro-founded, forward looking, optimizing models whether critics like it or not, so we may as well get on with it. The questions raised below help to clarify the direction the research should take, and in the end the models will either prove worthy, or be cast aside. In the meantime, I hope the macroeconomics profession will become more open to alternative ideas/models than it has been in the recent past, but I doubt the humility needed for that to happen has taken hold despite all the problems with these models that were exposed by the housing and financial crises.):
Macro Models and Monetary Policy Analysis, by Charles I. Plosser, President and Chief Executive Officer, Federal Reserve Bank of Philadelphia, Bundesbank — Federal Reserve Bank of Philadelphia Spring 2012 Research Conference, Eltville, Germany, May 25, 2012: Introduction ...After spending over 30 years in academia, I have served the last six years as a policymaker trying to apply what economics has taught me. Needless to say, I picked a challenging time to undertake such an endeavor. But I have learned that, despite the advances in our understanding of economics, a number of issues remain unresolved in the context of modern macro models and their use for policy analysis. In my remarks today, I will touch on some issues facing policymakers that I believe state-of-the-art macro models would do well to confront. Before continuing, I should note that I speak for myself and not the Federal Reserve System or my colleagues on the Federal Open Market Committee.
More than 40 years ago, the rational expectations revolution in macroeconomics helped to shape a consensus among economists that only unanticipated shifts in monetary policy can have real effects. According to this consensus, only monetary surprises affect the real economy in the short to medium run because consumers, workers, employers, and investors cannot respond quickly enough to offset the effect of these policy actions on consumption, the labor market, and investment.
But over the years this consensus view on the transmission mechanism of monetary policy to the real economy has evolved. The current generation of macro models, referred to as New Keynesian DSGE models, relies on real and nominal frictions to transmit not only unanticipated but also systematic changes in monetary policy to the economy. Unexpected monetary shocks drive movements in output, consumption, investment, hours worked, and employment in DSGE models. However, in contrast to the earlier literature, it is the relevance of systematic movements in monetary policy that makes these models of so much interest for policy analysis. Systematic policy changes are represented in these models by Taylor-type rules, in which the policy interest rate responds to changes in inflation and a measure of real activity, such as output growth. Armed with forecasts of inflation and output growth, a central bank can assess the impact that different policy rate paths may have on the economy. The ability to do this type of policy analysis helps explain the widespread use of New Keynesian DSGE models at central banks around the world.
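The Taylor-type rule Plosser describes can be sketched as follows. The coefficients echo Taylor's classic 1993 illustration and are hypothetical, not any central bank's actual reaction function; the max with zero imposes the lower bound on the nominal interest rate discussed throughout this collection.

```python
def taylor_rate(inflation, output_gap, r_star=2.0, pi_star=2.0,
                phi_pi=1.5, phi_y=0.5):
    """Taylor-type rule with a zero lower bound (annualized percent).
    Coefficients echo Taylor's 1993 illustration and are hypothetical,
    not any central bank's actual reaction function."""
    rate = r_star + pi_star + phi_pi * (inflation - pi_star) + phi_y * output_gap
    return max(rate, 0.0)  # the zero lower bound

print(taylor_rate(2.0, 0.0))    # on target: the neutral rate, 4.0
print(taylor_rate(-1.0, -6.0))  # deep slump: the rule wants -3.5, the bound gives 0.0
```

Because phi_pi exceeds one, the rule raises the real rate when inflation rises (the Taylor principle); the zero bound is exactly where this systematic feedback breaks down, which is why the zero-bound literature quoted earlier matters.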
These modern macro models stress the importance of credibility and systematic policy, as well as forward-looking rational agents, in the determination of economic outcomes. In doing so, they offer guidance to policymakers about how to structure policies that will improve the policy framework and, therefore, economic performance. Nonetheless, I think there is room for improving the models and the advice they deliver on policy options. Before discussing several of these improvements, it is important to appreciate the “rules of the game” of the New Keynesian DSGE framework.
The New Keynesian Framework
New Keynesian DSGE models are the latest update to real business cycle, or RBC, theory. Given my own research in this area, it probably does not surprise many of you that I find the RBC paradigm a useful and valuable platform on which to build our macroeconomic models.3 One goal of real business cycle theory is to study the predictions of dynamic general equilibrium models, in which optimizing and forward-looking consumers, workers, employers, and investors are endowed with rational expectations. A shortcoming many see in the simple real business cycle model is its difficulty in internally generating persistent changes in output and employment from a transitory or temporary external shock to, say, productivity.4 The recognition of this problem has inspired variations on the simple model, of which the New Keynesian revival is an example.
The approach taken in these models is to incorporate a structure of real and nominal frictions into the real business cycle framework. These frictions are placed in DSGE models, in part, to make real economic activity respond to anticipated and unanticipated changes in monetary policy, at least, in the short to medium run. The real frictions that drive internal propagation of monetary policy often include habit formation in consumption, that is, how past consumption influences current consumption; the costs of capital used in production; and the resources expended by adding new investment to the existing stock of capital. New Keynesian DSGE models also include the costs faced by monopolistic firms and households when setting their prices and nominal wages. A nominal friction often assumed in Keynesian DSGE models is that firms and households have to wait a fixed interval of time before they can reset their prices and wages in a forward-looking, optimal manner. A rule of the game in these models is that the interactions of these nominal frictions with real frictions give rise to persistent monetary nonneutralities over the business cycle.5 It is this monetary transmission mechanism that makes the New Keynesian DSGE models attractive to central banks.
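The fixed-interval price-setting friction just described is usually formalized as Calvo pricing, whose log-linearized implication is the New Keynesian Phillips curve. The statement below is the standard textbook result, not something taken from the speech; the symbols are the conventional ones.

```latex
% Calvo staggered pricing: each period a firm may reset its price only
% with probability $1-\theta$; otherwise its old price stays in place.
% Log-linearizing the firms' optimal reset condition yields the
% New Keynesian Phillips curve:
\[
  \pi_t \;=\; \beta\,\mathbb{E}_t\,\pi_{t+1} \;+\; \kappa\,x_t,
  \qquad
  \kappa \;=\; \frac{(1-\theta)(1-\beta\theta)}{\theta}\,\lambda,
\]
% where $\pi_t$ is inflation, $x_t$ the output gap, $\beta$ the discount
% factor, and $\lambda$ the elasticity of real marginal cost with respect
% to the gap. As $\theta \to 1$ (stickier prices), the slope $\kappa$
% falls, and systematic monetary policy has larger and more persistent
% real effects -- the monetary nonneutrality the text refers to.
```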
An assumption of these models is that the structure of these real and nominal frictions, which transmit changes in monetary policy to the real economy, closely approximates the true underlying rigidities of the actual economy and is not affected by changes in monetary policy. This assumption implies that the frictions faced by consumers, workers, employers, and investors cannot be eliminated at any price they might be willing to pay. Although the actors in actual economies probably recognize the incentives they have to innovate — think of the strategy of continuous online pricing for many goods and services — or to seek insurance to minimize the costs of the frictions, these actions and markets are ruled out by the “rules of the game” of New Keynesian DSGE modeling.
Another important rule of the game prescribes that monetary policy is represented by an interest rate or Taylor-type reaction function that policymakers are committed to follow and that everyone believes will, in fact, be followed. This ingredient of New Keynesian DSGE models most often commits a central bank to increase its policy rate when inflation or output rises above the target set by the central bank. And this commitment is assumed to be fully credible according to the rules of the game of New Keynesian DSGE models. Policy changes are then evaluated as deviations from the invariant policy rule to which policymakers are credibly committed.
The Lucas Critique Revisited with Respect to New Keynesian DSGE Models
In my view, the current rules of the game of New Keynesian DSGE models run afoul of the Lucas critique — a seminal work for my generation of macroeconomists and for each generation since.6 The Lucas critique teaches us that to do policy analysis correctly, we must understand the relationship between economic outcomes and the beliefs of economic agents about the policy regime. Equally important is the Lucas critique’s warning against using models whose structure changes with the alternative government policies under consideration.7 Policy changes are almost never once and for all. So, many economists would argue that an economic model that maps states of the world to outcomes but that does not model how policy shifts across alternative regimes would fail the Lucas critique because it would not be policy invariant.8 Instead, economists could better judge the effects of competing policy options by building models that account for the way in which policymakers switch between alternative policy regimes as economic circumstances change.9
For example, I have always been uncomfortable with the New Keynesian model’s assumption that wage and price setters have market power but, at the same time, are unable or unwilling to change prices in response to anticipated and systematic shifts in monetary policy. This suggests that the deep structure of nominal frictions in New Keynesian DSGE models should do more than measure the length of time that firms and households wait for a chance to reset their prices and wages.10 Moreover, it raises questions about the mechanism by which monetary policy shocks are transmitted to the real economy in these models.
I might also note here that the evidence from micro data on price behavior is not particularly consistent with the implications of the usual staggered price-setting assumptions in these models.11 When the real and nominal frictions of New Keynesian models do not reflect the incentives faced by economic actors in actual economies, these models violate the Lucas critique’s policy invariance dictum, and thus, the policy advice these models offer must be interpreted with caution.
From a policy perspective, the assumption that a central bank can always and everywhere credibly commit to its policy rule is, I believe, also questionable. While it is desirable for policymakers to do so — and in practice, I seek ways to make policy more systematic and more credible — commitment is a luxury few central bankers ever actually have, and fewer still faithfully follow.
During the 1980s and 1990s, it was quite common to hear in workshops and seminars the criticism that a model didn’t satisfy the Lucas critique. I thought this was often a cheap shot because almost no model satisfactorily dealt with the issue. And during a period when the policy regime was apparently fairly stable — which many argued it mostly was during those years — the failure to satisfy the Lucas critique seemed somewhat less troublesome. However, in my view, throughout the crisis of the last few years and its aftermath, the Lucas critique has become decidedly more relevant. Policy actions have become increasingly discretionary. Moreover, the financial crisis and associated policy responses have left many central banks operating with their policy rate near the zero lower bound; this means that they are no longer following a systematic rule, if they ever were. Given that central bankers are, in fact, acting in a discretionary manner, whether it is because they are at the zero bound or because they cannot or will not commit, how are we to interpret policy advice coming from models that assume full commitment to a systematic rule? I think this point is driven home by noting that a number of central banks have been openly discussing different regimes, from price-level targeting to nominal GDP targeting. In such an environment where policymakers actively debate alternative regimes, how confident can we be about the policy advice that follows from models in which that is never contemplated?
Some Directions for Furthering the Research Agenda
While I have been pointing out some limitations of DSGE models, I would like to end my remarks with six suggestions I believe would be fruitful for the research agenda.
First, I believe we should work to give the real and nominal frictions that underpin the monetary propagation mechanism of New Keynesian DSGE models deeper and more empirically supported structural foundations. There is already much work being done on this in the areas of search models applied to labor markets and studies of the behavior of prices at the firm level. Many of you at this conference have made significant contributions to this literature.
Second, on the policy dimension, the impact of the zero lower bound on central bank policy rates remains, as a central banker once said, a conundrum. The zero lower bound introduces nonlinearity into the analysis of monetary policy that macroeconomists and policymakers still do not fully understand. New Keynesian models have made some progress in solving this problem,12 but a complete understanding of the zero bound conundrum involves recasting a New Keynesian DSGE model to show how it can provide an economically meaningful story of the set of shocks, financial markets, and frictions that explain the financial crisis, the resulting recession, and the weak recovery that has followed. This might be asking a lot, but a good challenge usually produces extraordinary research.
Third, we must make progress in our analysis of credibility and commitment. The New Keynesian framework mostly assumes that policymakers are fully credible in their commitment to a specified policy rule. If that is not the case in practice, how do policymakers assess the policy advice these models deliver? Policy at the zero lower bound is a leading example of this issue. According to the New Keynesian model, zero lower bound policies rely on policymakers guiding the public’s expectations of when an initial interest rate increase will occur in the future. If the credibility of this forward guidance is questioned, evaluation of the zero lower bound policy has to account for the public's beliefs that commitment to this policy is incomplete. I have found that policymakers like to presume that their policy actions are completely credible and then engage in decisions accordingly. Yet if that presumption is wrong, those policies will not have the desired or predicted outcomes. Is there a way to design and estimate policy responses in such a world? Can reputational models be adapted for this purpose?
Fourth, and related, macroeconomists need to consider how to integrate the institutional design of central banks into our macroeconomic models. Different designs permit different degrees of discretion for a central bank. For example, responsibility for setting monetary policy is often delegated by an elected legislature to an independent central bank. However, the mandates given to central banks differ across countries. The Fed is often said to have a dual mandate; some banks have a hierarchical mandate; and others have a single mandate. Yet economists endow their New Keynesian DSGE models with strikingly uniform Taylor-type rules, always assuming complete credibility. Policy analysis might be improved by considering the institutional design of central banks and how it relates to the ability to commit and the specification of the Taylor-type rules that go into New Keynesian models. Central banks with different levels of discretion will respond differently to the same set of shocks.
Let me offer a slightly different take on this issue. Policymakers are not Ramsey social planners. They are individuals who respond to incentives like every other actor in the economy. Those incentives are often shaped by the nature of the institutions in which they operate. Yet the models we use often ignore both the institutional environment and the rational behavior of policymakers. The models often ask policymakers to undertake actions that run counter to the incentives they face. How should economists then think about the policy advice their models offer and the outcomes they should expect? How should we think about the design of our institutions? This is not an unexplored arena, but if we are to take the policy guidance from our models seriously, we must think harder about such issues in the context of our models.
This leads to my fifth suggestion. Monetary theory has given a great deal of thought to rules and credibility in the design of monetary policy, but the recent crisis suggests that we need to think more about the design of lender-of-last-resort policy and the institutional mechanism for its execution. Whether to act as the lender of last resort is discretionary, but does it have to be so? Are there ways to make it more systematic ex ante? If so, how?
My sixth and final thought concerns moral hazard, which is addressed in only a handful of models. Moral hazard looms large when one thinks about lender-of-last-resort activities. But it is also a factor when monetary policy uses discretion to deviate from its policy rule. If the central bank has credibility that it will return to the rule once it has deviated, this may not be much of a problem. On the other hand, a central bank with less credibility, or no credibility, may run the risk of inducing excessive risk-taking. An example of this might be the so-called “Greenspan put,” in which the markets perceived that when asset prices fell, the Fed would respond by reducing interest rates. Do monetary policy actions that appear to react to the stock market induce moral hazard and excessive risk-taking? Does having lender-of-last-resort powers influence the central bank’s monetary policy decisions, especially at moments when it is not clear whether the economy is in the midst of a financial crisis? Does the combination of lender-of-last-resort responsibilities with discretionary monetary policy create moral hazard perils for a central bank, encouraging it to take riskier actions? I do not know the answer to these questions, but addressing them and the other challenges I have mentioned with New Keynesian DSGE models should prove useful for evaluating the merits of different institutional designs for central banks.
The financial crisis and recession have raised new challenges for policymakers and researchers. The degree to which policy actions, for better or worse, have become increasingly discretionary should give us pause as we try to evaluate policy choices in the context of the workhorse New Keynesian framework, especially given its assumption of credibly committed policymakers. Indeed, the Lucas critique would seem to take on new relevance in this post-crisis world. Central banks need to ask if discretionary policies can create incentives that fundamentally change the actions and expectations of consumers, workers, firms, and investors. Characterizing policy in this way also raises issues of whether the institutional design of central banks matters for evaluating monetary policy. I hope my comments today encourage you, as well as the wider community of macroeconomists, to pursue these research questions that are relevant to our efforts to improve our policy choices.
Crisis, what crisis? Arrogance and self-satisfaction among macroeconomists, by Simon Wren-Lewis: My recent post on economics teaching has clearly upset a number of bloggers. There I argued that the recent crisis has not led to a fundamental rethink of macroeconomics. Mainstream macroeconomics has not decided that the Great Recession implies that some chunk of what we used to teach is clearly wrong and should be jettisoned as a result. To some that seems self-satisfied, arrogant and profoundly wrong. ...
Let me be absolutely clear that I am not saying that macroeconomics has nothing to learn from the financial crisis. What I am suggesting is that when those lessons have been learnt, the basics of the macroeconomics we teach will still be there. For example, it may be that we need to endogenise the difference between the interest rate set by monetary policy and the interest rate actually paid by firms and consumers, relating it to asset prices that move with the cycle. But if that is the case, this will build on our current theories of the business cycle. Concepts like aggregate demand, and within the mainstream, the natural rate, will not disappear. We clearly need to take default risk more seriously, and this may lead to more use of models with multiple equilibria (as suggested by Chatelain and Ralf, for example). However, this must surely use the intertemporal optimising framework that is the heart of modern macro.
Why do I want to say this? Because what we already have in macro remains important, valid and useful. What I see happening today is a struggle between those who want to use what we have, and those that want to deny its applicability to the current crisis. What we already have was used (imperfectly, of course) when the financial crisis hit, and analysis clearly suggests this helped mitigate the recession. Since 2010 these positive responses have been reversed, with policymakers around the world using ideas that contradict basic macro theory, like expansionary austerity. In addition, monetary policy makers appear to be misunderstanding ideas that are part of that theory, like credibility. In this context, saying that macro is all wrong and we need to start again is not helpful.
I also think there is a danger in the idea that the financial crisis might have been avoided if only we had better technical tools at our disposal. (I should add that this is not a mistake most heterodox economists would make.) ... The financial crisis itself is not a deeply mysterious event. Look now at the data on leverage that we had at the time, but too few people looked at before the crisis, and the immediate reaction has to be that this cannot go on. So the interesting question for me is how those that did look at this data managed to convince themselves that, to use the title from Reinhart and Rogoff’s book, this time was different.
One answer was that they were convinced by economic theory that turned out to be wrong. But it was not traditional macro theory – it was theories from financial economics. And I’m sure many financial economists would argue that those theories were misapplied. Like confusing new techniques for handling idiosyncratic risk with the problem of systemic risk, for example. Believing that evidence of arbitrage also meant that fundamentals were correctly perceived. In retrospect, we can see why those ideas were wrong using the economics toolkit we already have. So why was that not recognised at the time? I think the key to answering this does not lie in any exciting new technique from physics or elsewhere, but in political science.
To understand why regulators and others missed the crisis, I think we need to recognise the political environment at the time, which includes the influence of the financial sector itself. And I fear that the academic sector was not exactly innocent in this either. A simplistic take on economic theory (mostly micro theory rather than macro) became an excuse for rent seeking. The really big question of the day is not what is wrong with macro, but why has the financial sector grown so rapidly over the last decade or so. Did innovation and deregulation in that sector add to social welfare, or make it easier for that sector to extract surplus from the rest of the economy? And why are there so few economists trying to answer that question?
I have so many posts on the state of modern macro that it's hard to know where to begin, but here's a pretty good summary of my views on this particular topic:
I agree that the current macroeconomic models are unsatisfactory. The question is whether they can be fixed, or if it will be necessary to abandon them altogether. I am okay with seeing if they can be fixed before moving on. It's a step that's necessary in any case. People will resist moving on until they know this framework is a dead end, so the sooner we come to a conclusion about that, the better.
As just one example, modern macroeconomic models do not generally connect the real and the financial sectors. That is, in standard versions of the modern model, linkages between the disintegration of financial intermediation and the real economy are missing. Since these linkages provide an important transmission mechanism whereby shocks in the financial sector can affect the real economy, and these are absent from models such as Eggertsson and Woodford, how much credence should I give the results? Even the financial accelerator models (which were largely abandoned because they did not appear to be empirically powerful, and hence were not part of the standard model) do not fully link these sectors in a satisfactory way, yet these connections are crucial in understanding why the crash caused such large economic effects, and how policy can be used to offset them. [e.g. see Woodford's comments, "recent events have made it clear that financial issues need to be integrated much more thoroughly into the basic framework for macroeconomic analysis with which students are provided."]
There are many technical difficulties with connecting the real and the financial sectors. Again, to highlight just one aspect of a much, much larger list of issues that will need to be addressed, modern models assume a representative agent. This assumption overcomes difficult problems associated with aggregating individual agents into macroeconomic aggregates. When this assumption is dropped it becomes very difficult to maintain adequate microeconomic foundations for macroeconomic models (setting aside the debate over the importance of doing this). But representative (single) agent models don't work very well as models of financial markets. Identical agents with identical information and identical outlooks have no motivation to trade financial assets (I sell because I think the price is going down, you buy because you think it's going up; with identical forecasts, the motivation to trade disappears). There needs to be some type of heterogeneity in the model, even if just over information sets, and that causes the technical difficulties associated with aggregation. However, with that said, there have already been important inroads into constructing these models (e.g. see Rajiv Sethi's discussion of John Geanakoplos' Leverage Cycles). So while I'm pessimistic, it's possible this and other problems will be overcome.
But there's no reason to wait until we know for sure if the current framework can be salvaged before starting the attempt to build a better model within an entirely different framework. Both can go on at the same time. What I hope will happen is that some macroeconomists will show more humility than they've shown to date. That they will finally accept that the present model has large shortcomings that will need to be overcome before it will be as useful as we'd like. I hope that they will admit that it's not at all clear that we can fix the model's problems, and realize that some people have good reason to investigate alternatives to the standard model. The advancement of economics is best served when alternatives are developed and issued as challenges to the dominant theoretical framework, and there's no reason to deride those who choose to do this important work.
So, in answer to those who objected to my defending modern macro, you are partly right. I do think the tools and techniques macroeconomists use have value, and that the standard macro model in use today represents progress. But I also think the standard macro model used for policy analysis, the New Keynesian model, is unsatisfactory in many ways and I'm not sure it can be fixed. Maybe it can, but that's not at all clear to me. In any case, in my opinion the people who have strong, knee-jerk reactions whenever someone challenges the standard model in use today are the ones standing in the way of progress. It's fine to respond academically, a contest between the old and the new is exactly what we need to have, but the debate needs to be over ideas rather than an attack on the people issuing the challenges.
This post of an email from Mark Gertler in July 2009 argues that modern macro has been mis-characterized:
The current crisis has naturally led to scrutiny of the economics profession. The intensity of this scrutiny ratcheted up a notch with the Economist’s interesting cover story this week on the state of academic economics.
I think some of the criticism has been fair. The Great Moderation gave many in the profession the false sense that we had handled the problem of the business cycle as well as we could. Traditional applied macroeconomic research on booms and busts and macroeconomic policy fell into something of a second class status within the field in favor of more exotic topics.
At the same time, from the discussion thus far, I don’t think the public is getting the full picture of what has been going on in the profession. From my vantage, there has been lots of high quality “middle ground” modern macroeconomic research that has been relevant to understanding and addressing the current crisis.
Here I think, though, that both the mainstream media and the blogosphere have been confusing a failure to anticipate the crisis with a failure to have the research available to comprehend it. Predicting the crisis would have required foreseeing the risks posed by the shadow banking system, which were missed not only by academic economists, but by just about everyone else on the planet (including the ratings agencies!).
But once the crisis hit, broadly speaking, policy-makers at the Federal Reserve made use of academic research on financial crises to help diagnose the situation and design the policy response. Research on monetary and fiscal policy when the nominal interest rate is at the zero lower bound has also been relevant. Quantitative macro models that incorporate financial factors, which existed well before the crisis, are rapidly being updated in light of new insights from the unfolding of recent events. Work on fiscal policy, which admittedly had been somewhat dormant, is now proceeding at a rapid pace.
Bottom line: As happened in both the wake of the Great Depression and the Great Stagflation, economic research is responding. In this case, the time lag will be much shorter given the existing base of work to build on. Revealed preference confirms that we still have something useful to offer: Demand for our services by the ultimate consumers of modern applied macro research – policy makers and staff at central banks – seems to be higher than ever.
Henry and Lucy Moses Professor of Economics
New York University
[I ... also posted a link to his Mini-Course, "Incorporating Financial Factors Within Macroeconomic Modelling and Policy Analysis"... This course looks at recent work on integrating financial factors into macro modeling, and is a partial rebuttal to the assertion above that New Keynesian models do not have mechanisms built into them that can explain the financial crisis. ...]
Again, it wasn't the tools and techniques we use; we were asking the wrong questions. As I've argued many times, we were trying to explain normal times, the Great Moderation. Many (e.g. Lucas) thought the problem of depressions due to, say, a breakdown in the financial sector had been solved, so why waste time on those questions? Stabilization policy was passé, and we should focus on growth instead. So, I would agree with Simon Wren-Lewis that "we need to recognise the political environment at the time." But as I argued in The Economist, we also have to think about the sociology within the profession that worked against the pursuit of these ideas.
Perhaps Ricardo Caballero says it better, so let me turn it over to him. From a post in late 2010:
Caballero says "we should be in “broad-exploration” mode." I can hardly disagree since that's what I meant when I said "While I think we should see if the current models and tools can be amended appropriately to capture financial crises such as the one we just had, I am not as sure as [Bernanke] is that this will be successful and I'd like to see [more] openness within the profession to a simultaneous investigation of alternatives."
Here's a bit more from the introduction to the paper:

The recent financial crisis has damaged the reputation of macroeconomics, largely for its inability to predict the impending financial and economic crisis. To be honest, this inability to predict does not concern me much. It is almost tautological that severe crises are essentially unpredictable, for otherwise they would not cause such a high degree of distress...

What does concern me of my discipline, however, is that its current core—by which I mainly mean the so-called dynamic stochastic general equilibrium approach—has become so mesmerized with its own internal logic that it has begun to confuse the precision it has achieved about its own world with the precision that it has about the real one. ...

To be fair to our field, an enormous amount of work at the intersection of macroeconomics and corporate finance has been chasing many of the issues that played a central role during the current crisis, including liquidity evaporation, collateral shortages, bubbles, crises, panics, fire sales, risk-shifting, contagion, and the like.1 However, much of this literature belongs to the periphery of macroeconomics rather than to its core. Is the solution then to replace the current core for the periphery? I am tempted—but I think this would address only some of our problems. The dynamic stochastic general equilibrium strategy is so attractive, and even plain addictive, because it allows one to generate impulse responses that can be fully described in terms of seemingly scientific statements. The model is an irresistible snake-charmer. In contrast, the periphery is not nearly as ambitious, and it provides mostly qualitative insights. So we are left with the tension between a type of answer to which we aspire but that has limited connection with reality (the core) and more sensible but incomplete answers (the periphery).

This distinction between core and periphery is not a matter of freshwater versus saltwater economics.
Both the real business cycle approach and its New Keynesian counterpart belong to the core. ...I cannot be sure that shifting resources from the current core to the periphery and focusing on the effects of (very) limited knowledge on our modeling strategy and on the actions of the economic agents we are supposed to model is the best next step. However, I am almost certain that if the goal of macroeconomics is to provide formal frameworks to address real economic problems rather than purely literature-driven ones, we better start trying something new rather soon. The alternative of segmenting, with academic macroeconomics playing its internal games and leaving the real world problems mostly to informal commentators and "policy" discussions, is not very attractive either, for the latter often suffer from an even deeper pretense-of-knowledge syndrome than do academic macroeconomists. ...
My main message is that yes, we need to push the DSGE structure as far as we can and see if it can be satisfactorily amended. Ask the right questions, and use the tools and techniques associated with modern macro to build the right models. But it's not at all clear that the DSGE methodology is up to the task, so let's not close our eyes to -- or, worse, actively block -- the search for alternative theoretical structures.
Physicists Can Learn from Economists, by Mark Thoma: After attending last year’s Economics Nobel Laureates Meeting in Lindau, Germany, I was very critical of what I heard from the laureates at the meeting. The conference is intended to bring graduate students together with the Nobel Prize winners to learn about fruitful areas for future research. Yet, with all the challenges the Great Recession posed for macroeconomic models, very little of the conference was devoted to anything related to the Great Recession. And when it did come up, the comments were “all over the map.” Some comments, such as Ed Prescott's, were particularly appalling, as they made very obvious political statements in the guise of economic analysis. I felt bad for the students who had come to the conference hoping to gain insight about where macroeconomics was headed in the future.
I am back at the meetings this year, but the topic is physics, not economics, and it’s pretty clear that most physicists think they have nothing to learn from lowly economists. That’s true even when they are working on problems in economics and finance.
But they do have something to learn. ...
I think I've made this point repeatedly, though I tend to use the term ideological instead of political, but just in case the message hasn't gotten through:
Macroeconomics and the Centrist Dodge, by Paul Krugman: Simon Wren-Lewis says something quite similar to my own view about the trouble with macroeconomics: it’s mostly political. And although Wren-Lewis bends over backwards to avoid saying it too bluntly, most – not all, but most – of the problem comes from the right. ...
By now, the centrist dodge ought to be familiar. A Very Serious, chin-stroking pundit argues that what we really need is a political leader willing to concede that while the economy needs short-run stimulus, we also need to address long-term deficits, and that addressing those long-term deficits will require both spending cuts and revenue increases. And then the pundit asserts that both parties are to blame for the absence of such leaders. What he absolutely won’t do is endanger his centrist credentials by admitting that the position he’s just outlined is exactly, exactly, the position of Barack Obama.
The macroeconomics equivalent looks like this: a concerned writer or speaker on economics bemoans the state of the field and argues that what we really need are macroeconomists who are willing to approach the subject with an open mind and change their views if the evidence doesn’t support their model. He or she concludes by scolding the macroeconomics profession in general, which is a nice safe thing to do – but requires deliberately ignoring the real nature of the problem.
For the fact is that it’s not hard to find open-minded macroeconomists willing to respond to the evidence. These days, they’re called Keynesians and/or saltwater macroeconomists. ...
Would Keynesians have been willing to change their views drastically if the experience of the global financial crisis had warranted such a change? I’d like to think so – but we’ll never know for sure, because the basic Keynesian view has in fact worked very well in the crisis.
But then there’s the other side – freshwater, equilibrium, more or less classical macro.
Recent events have been one empirical debacle after another for that view of the world – on interest rates, on inflation, on the effects of fiscal contraction. But the truth is that freshwater macro has been failing empirical tests for decades. Everywhere you turn there are anomalies that should have had that side of the profession questioning its premises, from the absence of the technology shocks that were supposed to drive business cycles, to the evident effectiveness of monetary policy, to the near-perfect correlation of nominal and real exchange rates.
But rather than questioning its premises, that side of the field essentially turned its back on evidence, calibrating its models rather than testing them, and refusing even to teach alternative views.
So there’s the trouble with macro: it’s basically political, and it’s mainly – not entirely, but mainly – coming from one side. Yet this truth is precisely what the critics won’t acknowledge, because that would endanger their comfortable position of scolding everyone equally. It is, in short, the centrist dodge carried over to conflict within economics.
Do we need better macroeconomics? Indeed we do. But we also need better critics, who are prepared to take the risk of actually taking sides for good economics and against dogmatism.
Before adding a few comments, I want to be careful to distinguish the "Keynesianism" discussed above from the New Keynesian model. I'll end up rejecting the standard NK model, but in doing so I am not rejecting Keynesian concepts. As Krugman summarizes, these are things like "the concept of the liquidity trap..., acceptance ... that wages are downwardly rigid – and hence that the natural rate hypothesis breaks down at low inflation."
Let me start by noting that one of the best examples of a macroeconomic model being rejected that I know of is the New Classical model and its prediction that only unanticipated money matters for real variables such as employment and GDP. At first, Robert Barro and others thought the empirical evidence favored this model, but over time it became clear that both anticipated and unanticipated money matters. That is, the prediction was wrong and the model was rejected (it had other problems as well, e.g. explaining both the magnitude and duration of business cycles).
However, the response has been interesting, and it proceeds along the political lines discussed above. Some economists just can't accept that money might matter, and therefore that the government (through the Fed) has an important role to play in managing the economy. And unfortunately, they have acted more like lawyers than scientists in their attempts to discredit New Keynesian and other models that have this implication. After all, markets work, and they work through movements in prices, so a sticky price NK model must be wrong. QED.
Now, it turns out that the New Keynesian model probably is wrong, or at least incomplete, but that's a view based upon evidence rather than ideology. Prior to the crisis, I was a fan of the NK model. Despite what those who couldn't let go of the "markets must work" point of view argued, I believed this model was better than any other model we had at explaining macroeconomic data. But while the NK model did an adequate job of explaining aggregate fluctuations and how monetary policy affects the economy in normal times with mild business cycle fluctuations, i.e. from the mid 1980s until recently, it did a downright lousy job of explaining the Great Recession. When it got pushed into new territory by the Great Recession, the Calvo-type price stickiness driving fluctuations in the NK model had little to say about the problems we were having and how to fix them.
Thus, from my point of view the Great Recession rejected the standard version of the NK model. Perhaps the model can be fixed by tacking on a financial sector and allowing financial intermediation breakdowns to impact the real economy -- there are models along these lines that people are working to improve -- we will have to see about that. A more general NK model that has one type of fluctuation in normal times -- the standard price stickiness effects -- and occasional large fluctuations from endogenous credit market breakdowns might do the trick (there were models of this type prior to the recession, but they weren't the standard in the profession, and they weren't well-integrated into the general NK structure). So we may be able to find a more general version of the model that can capture both normal and abnormal times. But, then again, we may not and, as I've said many times, we need to encourage the exploration of alternative theoretical structures.
But no matter what happens, some economists just won't accept a model that implies the government can do good through either monetary or fiscal policy, and they work very hard to construct alternatives that don't allow for this. There is less resistance to monetary policy -- the evidence is hard to deny -- so some of these economists will admit that monetary policy can affect the economy positively (so long as the Fed is an independent technocratic body). But fiscal policy is resisted no matter the theoretical and empirical evidence. They have their ideological/political views, and any model inconsistent with them must be wrong.
Update: Noah Smith responds to Paul Krugman here.
Bryan Caplan is tired of being sneered at by "high-status academic economists":
The Curious Ethos of the Academic/Appointee, by Bryan Caplan: High-status academic economists often look down on economists who engage in blogging and punditry. Their view: If you can't "definitively prove" your claims, you should remain silent.
At the same time, though, high-status academic economists often receive top political appointments. Part of their job is to stand behind the administration's party line. They don't merely make claims they can't definitively prove; to keep their positions, appointees have to make claims they don't even believe! Yet high-status academic economists are proud to accept these jobs - and their colleagues admire them for doing so. ...
Noah Smith has something to say about "definitive proof":
"Science" without falsification is no science, by Noah Smith: Simon Wren-Lewis notes that although plenty of new macroeconomics has been added in response to the recent crisis/depression, nothing has been thrown out...
Four years after a huge deflationary shock with no apparent shock to technology, asset-pricing papers and labor search papers and international finance papers and even some business-cycle papers continue to use models in which business cycles are driven by technology shocks. No theory seems to have been thrown out. And these are young economists writing these papers, so it's not a generational effect. ...
If smart people don't agree, it may be because they are waiting for new evidence or because they don't understand each other's math. But if enough time passes and people are still having the same arguments they had a hundred years ago - as is exactly the case in macro today - then we have to conclude that very little is being accomplished in the field. The creation of new theories does not represent scientific progress until it is matched by the rejection of failed alternative theories.
The root problem here is that macroeconomics seems to have no commonly agreed-upon criteria for falsification of hypotheses. Time-series data - in other words, watching history go by and trying to pick out recurring patterns - does not seem to be persuasive enough to kill any existing theory. Nobody seems to believe in cross-country regressions. And there are basically no macro experiments. ...
So as things stand, macro is mostly a "science" without falsification. In other words, it is barely a science at all. Microeconomists know this. The educated public knows this. And that is why the prestige of the macro field is falling. The solution is for macroeconomists to A) admit their ignorance more often (see this Mankiw article and this Cochrane article for good examples of how to do this), and B) search for better ways to falsify macro theories in a convincing way.
I have a slightly different take on this. From a column last summer:
What Caused the Financial Crisis? Don’t Ask An Economist, by Mark Thoma: What caused the financial crisis that is still reverberating through the global economy? Last week’s 4th Nobel Laureate Meeting in Lindau, Germany – a meeting that brings Nobel laureates in economics together with several hundred young economists from all over the world – illustrates how little agreement there is on the answer to this important question.
Surprisingly, the financial crisis did not receive much attention at the conference. Many of the sessions on macroeconomics and finance didn’t mention it at all, and when it was finally discussed, the reasons cited for the financial meltdown were all over the map.
It was the banks, the Fed, too much regulation, too little regulation, Fannie and Freddie, moral hazard from too-big-to-fail banks, bad and intentionally misleading accounting, irrational exuberance, faulty models, and the ratings agencies. In addition, factors I view as important contributors to the crisis, such as the conditions that allowed troublesome runs on the shadow banking system after regulators let Lehman fail, were hardly mentioned.
Macroeconomic models have not fared well in recent years – the models didn’t predict the financial crisis and gave little guidance to policymakers, and I was anxious to hear the laureates discuss what macroeconomists need to do to fix them. So I found the lack of consensus on what caused the crisis distressing. If the very best economists in the profession cannot come to anything close to agreement about why the crisis happened almost four years after the recession began, how can we possibly address the problems? ...
How can some of the best economists in the profession come to such different conclusions? A big part of the problem is that macroeconomists have not settled on a single model of the economy, and the various models often deliver very different, contradictory advice on how to solve economic problems. The basic problem is that economics is not an experimental science. We use historical data rather than experimental data, and it’s possible to construct more than one model that explains the historical data equally well. Time and more data may allow us to settle on a particular model someday – as new data arrives it may favor one model over the other – but as long as this problem is present, macroeconomists will continue to hold opposing views and give conflicting advice.
This problem is not just of concern to macroeconomists; it has contributed to the dysfunction we are seeing in Washington as well. When Republicans need to find support for policies such as deregulation, they can enlist prominent economists – Nobel laureates perhaps – to back them up. Similarly, when Democrats need support for proposals to increase regulation, they can also count noted economists in their camp. If economists were largely unified, it would be harder for differences in Congress to persist, but unfortunately such unanimity is not generally present.
This divide in the profession also increases the possibility that the public will be sold false or misleading ideas intended to promote an ideological or political agenda. If the experts disagree, how is the public supposed to know what to believe? They often don’t have the expertise to analyze policy initiatives on their own, so they rely on experts to help them. But when the experts disagree at such a fundamental level, the public can no longer trust what it hears, and that leaves it vulnerable to people peddling all sorts of crazy ideas.
When the recession began, I had high hopes that it would help us to sort between competing macroeconomic models. As noted above, it's difficult to choose one model over another because the models do equally well at explaining the past. But this recession is so unlike any event for which there is existing data that it pushes the models into new territory that tests their explanatory power (macroeconomic data does not exist prior to 1947 in most cases, so it does not include the Great Depression). But, disappointingly, even though I believe the data point clearly toward models that emphasize the demand side rather than the supply side as the source of our problems, the crisis has not propelled us toward a particular class of models as would be expected in a data-driven, scientific discipline. Instead, the two sides have dug in their heels and the differences – many of which have been aired in public – have become larger and more contentious than ever.
Finally, on the usefulness of microeconomic models for macroeconomists -- what is known as microfoundations -- see here: The Macroeconomic Foundations of Microeconomics.
Jeff Frankel takes up the question of inflation targeting versus nominal GDP targeting, and concludes that nominal GDP targeting has many advantages:
Nominal GDP Targeting Could Take the Place of Inflation Targeting, by Jeff Frankel: In my preceding blogpost, I argued that the developments of the last five years have sharply pointed up the limitations of Inflation Targeting (IT)... But if IT is dead, what is to take its place as an intermediate target that central banks can use to anchor expectations?
The leading candidate to take the position of preferred nominal anchor is probably Nominal GDP Targeting. It has gained popularity rather suddenly, over the last year. But the idea is not new. It had been a candidate to succeed money targeting in the 1980s, because it did not share the latter’s vulnerability to shifts in money demand. Under certain conditions, it dominates not only a money target (due to velocity shocks) but also an exchange rate target (if exchange rate shocks are large) and a price level target (if supply shocks are large). First proposed by James Meade (1978), it attracted the interest in the 1980s of such eminent economists as Jim Tobin (1983), Charlie Bean (1983), Bob Gordon (1985), Ken West (1986), Martin Feldstein & Jim Stock (1994), Bob Hall & Greg Mankiw (1994), Ben McCallum (1987, 1999), and others.
Nominal GDP targeting was not adopted by any country in the 1980s. Amazingly, the founders of the European Central Bank in the 1990s never even considered it on their list of possible anchors for euro monetary policy. ...
But now nominal GDP targeting is back, thanks to enthusiastic blogging by Scott Sumner (at Money Illusion), Lars Christensen (at Market Monetarist), David Beckworth (at Macromarket Musings), Marcus Nunes (at Historinhas) and others. Indeed, the Economist has held up the successful revival of this idea as an example of the benefits to society of the blogosphere. Economists at Goldman Sachs have also come out in favor.
Fans of nominal GDP targeting point out that it would not, like Inflation Targeting, have the problem of excessive tightening in response to adverse supply shocks. ...
In the long term, the advantage of a regime that targets nominal GDP is that it is more robust with respect to shocks than the competitors (gold standard, money target, exchange rate target, or CPI target). But why has it suddenly gained popularity at this point in history...? Nominal GDP targeting might also have another advantage in the current unfortunate economic situation that afflicts much of the world: Its proponents see it as a way of achieving a monetary expansion that is much-needed at the current juncture.
Monetary easing in advanced countries since 2008, though strong, has not been strong enough to bring unemployment down rapidly nor to restore output to potential. It is hard to get the real interest rate down when the nominal interest rate is already close to zero. This has led some, such as Olivier Blanchard and Paul Krugman, to recommend that central banks announce a higher inflation target: 4 or 5 per cent. ... But most economists, and an even higher percentage of central bankers, are loath to give up the anchoring of expected inflation at 2 per cent which they fought so long and hard to achieve in the 1980s and 1990s. Of course one could declare that the shift from a 2% target to 4% would be temporary. But it is hard to deny that this would damage the long-run credibility of the sacrosanct 2% number. An attraction of nominal GDP targeting is that one could set a target for nominal GDP that constituted a 4 or 5% increase over the coming year - which for a country teetering on the fence between recovery and recession would in effect supply as much monetary ease as a 4% inflation target - and yet one would not be giving up the hard-won emphasis on 2% inflation as the long-run anchor.
Thus nominal GDP targeting could help address our current problems as well as a durable monetary regime for the future.
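The arithmetic behind Frankel's point is simple, since nominal GDP growth is (approximately) real growth plus inflation. Here is a minimal sketch; the growth figures are illustrative assumptions, not numbers from Frankel's post:

```python
# Nominal GDP growth decomposes (approximately) into real growth plus inflation.
# All figures are in percentage points and are illustrative assumptions.

def implied_inflation(ngdp_growth_pct, real_growth_pct):
    """Inflation consistent with a nominal GDP target, given real growth."""
    return ngdp_growth_pct - real_growth_pct

# Normal times: a 5% NGDP target with 3% trend real growth implies 2% inflation,
# preserving the hard-won long-run anchor.
print(implied_inflation(5.0, 3.0))  # 2.0

# Deep slump: if real growth runs at only 1%, the same 5% target tolerates
# 4% inflation temporarily -- monetary ease comparable to a 4% inflation
# target, without abandoning 2% as the long-run number.
print(implied_inflation(5.0, 1.0))  # 4.0
```

This is why an NGDP target can deliver temporary stimulus without a formal change in the long-run inflation goal: the extra inflation tolerance shrinks automatically as real growth recovers.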
It's hard to figure out how to fix the world if you don't have a reliable model that can explain what went wrong. The optimal money rule in a model depends upon the way in which changes in monetary policy are transmitted to the real economy. Is it because of price rigidities? Wage rigidities? Information problems? Credit frictions and rationing? The best response to a negative shock to the economy varies depending upon what type of model the investigator is using.
Thus, for the moment we need robust rules. Inflation targeting works well in models with Calvo type price-rigidities, and a Taylor type rule often emerges from models in this general class, but is this the most robust rule in the face of model uncertainty? We don't know the true model of the macroeconomy, that ought to be clear at this point. Does inflation targeting work well when the underlying problem is a breakdown in financial intermediation or other big problems in the financial sector? I'm not at all convinced that it does - some of the best remedies in this case involve abandoning a strict adherence to an inflation target in the short-run.
So, in the best of all worlds I'd prefer to have a model of the economy that works, find the optimal policy rule for that model, and then execute it. In the world we live in, I want robust rules -- rules that work well in a variety of models and in the face of a variety of different types of shocks (or at least recognize that the rule has to change when the source of the problem switches from, say, price rigidities to a breakdown in financial intermediation). One message that comes out of the description of NGDP targeting above is that this approach does appear to be more robust than inflation targeting. It's not always better, in some models a standard Taylor type rule is the best that can be done. But it's becoming harder and harder to believe that the Great Recession can be adequately described by models of this type, and hence hard to believe that we are well served by policy rules that assume price rigidities are the main source of economic fluctuations.
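The robustness point can be made concrete with a stylized sketch of the two rules responding to an adverse supply shock. The Taylor-rule coefficients below are the classic Taylor (1993) values; the NGDP rule's form and its 1.5 response coefficient are illustrative assumptions, not a rule from the literature:

```python
# Stylized comparison of an inflation-centric Taylor-type rule and a simple
# NGDP-growth rule after an adverse supply shock (all rates in percent).
# Taylor coefficients follow Taylor (1993); the NGDP coefficient of 1.5
# is an illustrative assumption.

R_STAR, PI_STAR, NGDP_STAR = 2.0, 2.0, 4.0  # neutral real rate and targets

def taylor_rate(inflation, output_gap):
    """Respond to the inflation gap and the output gap separately."""
    return R_STAR + inflation + 0.5 * (inflation - PI_STAR) + 0.5 * output_gap

def ngdp_rate(inflation, real_growth):
    """Respond only to the gap in nominal GDP growth (inflation + real growth)."""
    neutral_nominal = R_STAR + PI_STAR
    return neutral_nominal + 1.5 * ((inflation + real_growth) - NGDP_STAR)

# Baseline: 2% inflation, 2% real growth, zero output gap -> both prescribe 4%.
print(taylor_rate(2.0, 0.0), ngdp_rate(2.0, 2.0))  # 4.0 4.0

# Adverse supply shock: inflation jumps to 4%, real growth falls to 0%
# (output gap -2%), so nominal GDP growth is unchanged at 4%.
print(taylor_rate(4.0, -2.0))  # 6.0 -- the Taylor rule tightens into the downturn
print(ngdp_rate(4.0, 0.0))     # 4.0 -- the NGDP rule avoids over-tightening
```

Because a supply shock moves inflation and real growth in opposite directions, nominal GDP can remain on target even when inflation is not, which is the sense in which the NGDP rule avoids the excessive tightening an inflation-centric rule prescribes.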
Like Robert Waldmann, I have always taught that the Phillips curve was initially promoted as a permanent tradeoff between inflation and unemployment. It was thought to be a menu of choices that allowed most any unemployment rate to be achieved so long as we were willing to accept the required inflation rate (a look at scatterplots from the UK and the US made it appear that the relationship was stable).
However, the story goes, Milton Friedman argued this was incorrect in his 1968 presidential address to the AEA. Estimates of the Phillips curve that produced stable looking relationships were based upon data from time periods when inflation expectations were stable and unchanging. Friedman warned that if policymakers tried to exploit this relationship and inflation expectations changed, the Phillips curve would shift in a way that left policymakers with higher inflation but without the lower unemployment they were after. There would be costs (higher inflation), but no benefits (lower unemployment). When subsequent data appeared to validate Friedman's prediction, the New Classical, rational expectations, microfoundations view of the world began to gain credibility over the old Keynesian model (though the Keynesians eventually emerged with a New Keynesian model that has microfoundations, rational expectations, etc., and overcomes some of the problems with the New Classical model).
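Friedman's argument can be sketched numerically. In the toy model below (illustrative parameters, fully adaptive expectations), inflation equals expected inflation plus a term in the unemployment gap; pegging unemployment below the natural rate yields ever-accelerating inflation rather than a stable tradeoff:

```python
# Toy expectations-augmented Phillips curve (illustrative parameters):
#   inflation = expected_inflation + a * (u_natural - u)
# with fully adaptive expectations (expected inflation = last period's inflation).

def simulate_inflation(periods, u_target=4.0, u_nat=5.0, a=1.0, expected=2.0):
    """Inflation path when policy pegs unemployment below the natural rate."""
    path = []
    for _ in range(periods):
        inflation = expected + a * (u_nat - u_target)
        expected = inflation  # expectations catch up next period
        path.append(inflation)
    return path

print(simulate_inflation(5))  # [3.0, 4.0, 5.0, 6.0, 7.0]
```

Inflation rises without limit while unemployment is held at 4%: the apparent tradeoff exists only so long as actual inflation keeps running ahead of expected inflation, which is Friedman's shifting-Phillips-curve point in miniature.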
Robert Waldmann argues that the premise of this story -- that Samuelson and Solow thought the Phillips curve represented a permanent, exploitable tradeoff between inflation and unemployment -- is wrong:
The Short and Long-Run Phillips Curves: Did Samuelson and Solow claim that the Phillips Curve was a structural relationship showing a permanent tradeoff between inflation and unemployment? James Forder says no.
Paul Krugman, John Quiggin and others (including me) have argued that the one success of the critics of old Keynesian economics is the prediction that high inflation would become persistent and lead to stagflation. The old Keynesian error was to assume that the reduced form Phillips curve was a structural equation -- an economic law not a coincidence.
Quiggin and many others including me have noted that Keynes did not make this old Keynesian error... The old Keynesian error, if it occurred, was made later. I have claimed (in a lecture to surprised students) that it was made by Samuelson and Solow. Was it?
This is an important question in the history of economic thought, because the alleged error serves as a demonstration of the necessity of basing macroeconomics on microeconomic foundations. For a decade or two (roughly 1980 through roughly 1990 something) it was widely accepted that, to avoid such errors, macroeconomists had to assume that agents have rational expectations even though we don't.
The pattern of a gross error by two economists with impressive track records and an important success based on an approach which has had difficulty forecasting or even dealing with real events ever since made me suspect that the actual claims of Samuelson and Solow have been distorted by their critics. To be frank, this guess is also based on a strong sense that the approach of Friedman and Lucas to rhetoric and debate is more brilliant than fair.
I am very lazy, so I have been planning to Google some for months. I finally did. ... I googled "samuelson solow phillips curve".
The third hit is the 2010 paper by Forder which discusses Samuelson and Solow (1960) (which I have never read). ... Forder quotes p. 189:

'What is most interesting is the strong suggestion that the relation, such as it is, has shifted upward slightly but noticeably in the forties and fifties'

So in the paper which allegedly claimed that the Phillips curve is stable, Solow and Samuelson said it had shifted up. Rather sooner than Friedman and Phelps, no?
So how has it become an accepted fact that Samuelson and Solow said the Phillips curve was stable? This fact is held to be vitally, centrally important to the debate about macroeconomic methodology, and it is obviously not a fact at all. How can it be that a claim about what was written in one short, clear paper is so central to the debate and that no one checks it?
They did caption a figure with a Phillips curve "a menu of policy choices," but (OK, this is a paraphrase, not a quote):

After this they emphasized – again – that these 'guesses' related only to the 'next few years', and suggested that a low-demand policy might either improve the tradeoff by affecting expectations, or worsen it by generating greater structural unemployment. Then, considering the even longer run, they suggest that a low-demand policy might improve the efficiency of allocation and thereby speed growth, or, rather more graphically, that the result might be that it 'produced class warfare and social conflict and depress the level of research and technical progress' with the result that the rate of growth would fall.

So, finally, after months of procrastinating, I spent a few minutes (at home, without access to JStor) checking the claim that is central to the debate on macroeconomic methodology and found a very convincing argument that it is nonsense.
If that were possible, this experience would lower my opinion of macroeconomists (as always Robert Waldmann explicitly included).
These videos are from the recent INET conference in Berlin:
Taking Stock of Complexity Economics: Which Problems Does It Illuminate?
- Thomas Homer-Dixon, Director, Waterloo Institute for Complexity and Innovation, University of Waterloo [On Farmer Video]
- Doyne Farmer, Professor at Santa Fe Institute. Presentation Video
- Ricardo Hausmann, Professor of the Practice of Economic Development, Harvard University. Presentation Video
- Mauro Gallegati, Professor of Economics, Polytechnic University of Marche, Ancona. Paper / Presentation Video
- Jean-Philippe Bouchaud, Professor of Physics, École Polytechnique. Presentation Video
- Q&A Video
Does the Effectiveness of Fiscal Stimulus Depend on the Context? Balance Sheet Overhangs, Open Economy Leakages, and Idle Resources
- Giancarlo Corsetti, Professor of Macroeconomics, University of Cambridge. Presentation Video
- Steven Fazzari, Professor of Economics, Washington University in St Louis. Paper / Presentation Video
- Atif Mian, Joe Shoong Chair in International Business, Haas School of Business, University of California at Berkeley. Presentation Video
- Q&A Video
People often object to the idea of a multiplier because it comes from the old Keynesian model. Real macroeconomists, we are told, use DSGE models. But it turns out that using a DSGE model doesn't change the answer -- the result is essentially the same:
A case for balanced-budget stimulus, by Pontus Rendahl, Vox EU: ...there is little, if any, support in the current macroeconomic literature for the view that expansionary fiscal policy must come at the price of ramping up debt. In fact,... a ‘balanced-budget stimulus’ can set the economy on a steeper recovery path...
[W]hile Ricardian equivalence might have put a nail in the coffin of the Keynesian multiplier, it has certainly not pre-empted the underlying idea: that an increase in government spending may provoke a kickback in output many times the amount initially spent. Indeed, a body of recent research suggests that the fiscal multiplier may be very large, independently of the foresightedness of consumers (Christiano et al 2011, Eggertson 2010). And in a recent study of mine (Rendahl 2012), I identify three crucial conditions under which the fiscal multiplier can easily exceed 1 irrespective of the mode of financing. These conditions, I argue, are met in the current economic situation.
Condition 1. The economy is in a liquidity trap … When interest rates are near, or at, zero, cash and bonds are considered perfect substitutes. ...
Under these peculiar circumstances the laws of macroeconomics change. A dollar spent by the government is no longer a dollar less spent elsewhere. Instead, it’s a dollar less kept in the mattress. And the logic underpinning Say’s law – the idea that the supply of one commodity must add to the immediate demand for another – is broken. ...
Condition 2. … with high unemployment …
So while a dollar spent by the government is not a dollar less spent elsewhere, it is not immediate, nor obvious, whether this implies that government spending will raise output. The second criterion therefore concerns the degree of slack in the economy.
If unemployment is close to, or at, its natural rate, an increase in spending is unlikely to translate to a substantial rise in output. Labor is costly and firms may find it difficult to recruit the workforce needed to expand production. An increase in public demand may just raise prices and therefore offset any spending plans by the private sector.
But at a high rate of unemployment, the story is likely to be different. The large pool of idle workers facilitates recruitment, and firms may cheaply expand business. An increase in public demand may plausibly give rise to an immediate increase in production, with negligible effects on prices. Crowding-out is, under these circumstances, not an imminent threat.
Combining the ideas emerging from Conditions 1 and 2 implies that the fiscal multiplier – irrespective of the source of financing – may be close to 1 (cf Haavelmo 1945).
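The Haavelmo (1945) reference is the classic balanced-budget multiplier result. In the textbook Keynesian cross (a stylized illustration, not Rendahl's model), the spending and tax multipliers sum to one regardless of the marginal propensity to consume:

```python
# Textbook Keynesian-cross illustration of the Haavelmo result: with
# consumption C = c*(Y - T), the spending multiplier is 1/(1-c) and the
# tax multiplier is -c/(1-c), so a tax-financed rise in G moves output
# one-for-one whatever the marginal propensity to consume c.

def balanced_budget_multiplier(c):
    """c = marginal propensity to consume, 0 < c < 1."""
    spending_mult = 1.0 / (1.0 - c)
    tax_mult = -c / (1.0 - c)
    return spending_mult + tax_mult

for mpc in (0.5, 0.8, 0.9):
    print(balanced_budget_multiplier(mpc))  # 1.0 in each case (up to rounding)
```

The extra dollar of output comes from the government spending the full dollar it taxes away, while households would have spent only a fraction c of it.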
Condition 3. … which is persistent
But if unemployment is persistent, these ideas take yet another turn. A tax-financed rise in government spending raises output, and lowers the unemployment rate both in the present and in the future. As a consequence, the increase in public demand steepens the entire path of recovery, and the future appears less disconcerting. With Ricardian or forward-looking consumers, a brighter outlook provokes a rise in contemporaneous private demand, and output takes yet another leap. Thus, with persistent unemployment, a tax-financed increase in government purchases sets off a snowballing motion in which spending begets spending.
Where does this process stop? In a stylised framework in which there are no capacity constraints and unemployment displays (pure) hysteresis, I show that the fiscal multiplier is equal to the inverse of the elasticity of intertemporal substitution, a parameter commonly estimated to be around 0.5 or lower. Under such conditions, the fiscal multiplier is therefore likely to lie around 2 or thereabout.
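As a quick sanity check on that arithmetic, here is a minimal sketch; the function name and the alternative EIS values are mine, while the 1/EIS formula and the 0.5 estimate come from the text above:

```python
def implied_multiplier(eis):
    """Fiscal multiplier in the stylised hysteresis case: the inverse of
    the elasticity of intertemporal substitution (EIS)."""
    if eis <= 0:
        raise ValueError("EIS must be positive")
    return 1.0 / eis

# An EIS of 0.5 implies a multiplier of 2; lower EIS estimates imply more.
for eis in (0.5, 0.4, 0.25):
    print(eis, implied_multiplier(eis))
```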
To provide more solid grounds for these arguments, I construct a simple DSGE model with a frictional labour market.1 A crisis is triggered by an unanticipated (and pessimistic) news shock regarding future labour productivity. As forward-looking agents desire to smooth consumption over time, such a shock encourages them to save rather than to spend, and the economy falls into a liquidity trap. Mirroring the aforementioned virtuous cycle, a vicious cycle emerges in which thrift reinforces thrift, and unemployment rates are sent soaring. ...
There are three important messages [from the work]:
- First, for positive or small negative values of the news shock, the multiplier is zero. The reason is straightforward: With only moderately pessimistic news, the nominal interest rate aptly adjusts to avert a possible liquidity trap, and a dollar spent by the government is simply a dollar less spent by someone else.
- Second, however, once the news is ominous enough, the economy falls into a liquidity trap. The multiplier takes a discrete jump up, and public spending unambiguously raises output. Yet, in a moderate crisis with an unemployment rate of 7% or less, private consumption is at least partly crowded out.
- Lastly, however, in a more severe recession with an unemployment rate of around 8% or more, the multiplier rises to, and plateaus at, around 1.5. Government spending now raises both output and private consumption, and unambiguously improves welfare...
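The three regimes in the bullets above can be summarised as a stylised step function. This is only a sketch of the qualitative shape the author describes: the 8% threshold and the 1.5 plateau come from the text, while the value of 1.0 for the intermediate "moderate crisis" regime is a purely illustrative placeholder.

```python
def stylised_multiplier(in_liquidity_trap, unemployment_rate):
    """Qualitative shape of the spending multiplier across the three regimes."""
    if not in_liquidity_trap:
        # Mildly bad news: the nominal interest rate adjusts, and a dollar
        # spent by the government is a dollar less spent by someone else.
        return 0.0
    if unemployment_rate >= 8.0:
        # Severe recession: the multiplier plateaus at around 1.5.
        return 1.5
    # Moderate crisis (unemployment around 7% or less): positive multiplier,
    # but consumption is partly crowded out. The value 1.0 is illustrative.
    return 1.0
```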
Given this evidence that theoretical models -- the DSGE models used in modern macroeconomics -- support fiscal policy, and that the implied multipliers are relatively high in severe recessions, it becomes increasingly clear that much of the opposition to fiscal policy is ideological.
There was more interest in the post on the forecasting ability of DSGE models than I expected, so let me follow up with a post that comes from the NY Fed's Liberty Street Economics blog (this is relatively technical material). The post from Marco Del Negro, Daniel Herbst, and Frank Schorfheide argues that the forecasting ability of DSGE models depends upon "what information you feed into your model: Feed in the right information, and even a dingy DSGE model may not do so poorly at forecasting the recession." However, I don't think this overturns the claim made by Volker Wieland and Maik Wolters in the Vox EU piece linked above that "Both model forecasts and professional forecasts failed to predict the financial crisis. At the current state of knowledge about macroeconomics and the limitations to use all this knowledge in simplified models, large recessions might just be difficult to forecast," but "from the first quarter of 2009 onwards the model-based forecasts perform quite well in predicting the recovery of the US economy." That is, the models do not do very well at forecasting turning points, but once the turning points are known the models do a bit better:
Forecasting the Great Recession: DSGE vs. Blue Chip, by Marco Del Negro, Daniel Herbst, and Frank Schorfheide, Liberty Street: Dynamic stochastic general equilibrium (DSGE) models have been trashed, bashed, and abused during the Great Recession and after. One of the many reasons for the bashing was the models’ alleged inability to forecast the recession itself. Oddly enough, there’s little evidence on the forecasting performance of DSGE models during this turbulent period. In the paper “DSGE Model-Based Forecasting,” prepared for Elsevier’s Handbook of Economic Forecasting, two of us (Del Negro and Schorfheide), with the help of the third (Herbst), provide some of this evidence. This post shares some of our results.
We find that it really matters what information you feed into your model: Feed in the right information, and even a dingy DSGE model may not do so poorly at forecasting the recession. We also compare how the models perform relative to the “Blue Chip Economic Consensus” forecasts. The answer is: About the same, if not better, in fall 2007, in summer 2008, before the Lehman crisis, and at the beginning of 2009–provided one incorporates up-to-date financial data into the DSGE model. (By the way, if you don’t know what a DSGE model is, check out Wikipedia or this primer.)
The chart below shows DSGE model and Blue Chip forecasts for output growth obtained at three junctures of the crisis (the dates coincide with Blue Chip forecast releases): October 10, 2007, right after the turmoil in the financial markets had begun in August of that year; July 10, 2008, not long before the default of Lehman Brothers; and January 10, 2009, at the apex of the crisis. Specifically, each panel shows the current real GDP growth vintage (solid black line), the DSGE model’s mean forecasts (red line), and bands of the forecast distribution (shaded blue areas; these are the 50, 60, 70, 80, and 90 percent bands for the forecast distribution, in decreasing shade), the Blue Chip forecasts (green diamonds), and the actual realizations according to the May 2011 vintage (dashed black line). All the numbers are in percent, quarter-over-quarter.
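For readers curious how the shaded bands in such a chart are typically constructed, here is a minimal sketch; the simulated draws are hypothetical, since the post does not publish the underlying forecast simulations:

```python
import numpy as np

def forecast_bands(draws, coverages=(0.5, 0.6, 0.7, 0.8, 0.9)):
    """Centred percentile bands of a simulated forecast distribution.

    draws: (n_draws, horizon) array of simulated forecast paths.
    Returns a dict mapping coverage -> (lower, upper) arrays, one value
    per forecast horizon.
    """
    draws = np.asarray(draws)
    bands = {}
    for c in coverages:
        lo = np.percentile(draws, 100 * (1 - c) / 2, axis=0)
        hi = np.percentile(draws, 100 * (1 + c) / 2, axis=0)
        bands[c] = (lo, hi)
    return bands
```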
When interpreting these results, bear in mind that the information available to the DSGE econometrician consists only of the data used for estimation that are available at the time of the forecasts (these data are real GDP growth; inflation; the federal funds rate; total hours worked; growth in real wages, investment, and consumption; and long-run inflation expectations–all at a quarterly frequency). This implies two things. First, the estimation is done in a “real-time” context (as opposed to using revised data–yes, the GDP growth and other data are revised all the time, so the numbers you read about in the paper often end up being quite different from the final numbers). Second, the DSGE econometrician has only lagged information on the state of the economy. For instance, on January 10, 2009, she would only have the information contained in 2008:Q3 data. The information set used by Blue Chip forecasters contains the same data, but also includes a plethora of current indicators on the quarter that just ended (namely, 2008:Q4), information from financial markets, and all the qualitative information available from the media, speeches by government officials, etc.
The chart shows the forecasts for three different DSGE specifications. We call the first one SWπ; this is essentially the popular Smets-Wouters model. The second is the Smets-Wouters model with financial frictions, as in Bernanke et al. and Christiano et al., which we call SWπ-FF. The other difference between this model and SWπ is the use of observations on the Baa-ten-year Treasury rate spread, which captures distress in financial markets.
The last specification is still the SWπ-FF, except that we use observations for the federal funds rate and spreads for the quarter that just ended and for which no National Income and Product Accounts data are yet available (for the January 10, 2009, forecasts, these would be the 2008:Q4 average fed funds rate and spreads). Of course, this information was also available to Blue Chip forecasters. We refer to this specification as SWπ-FF-Current.
The October 10, 2007, Blue Chip forecasts for output were relatively upbeat, at or above .5 percent quarter-over-quarter (that is, 2 percent annualized). The SWπ forecasts were more subdued: The model’s mean forecasts are for growth barely above zero in 2008:Q1, with some probability of negative growth in 2008–that is, a recession. The forecasts for the two SWπ-FF specifications were in line with those of the SWπ model, although a bit more subdued. SWπ-FF-Current in particular assigns a likelihood of negative growth that’s above 25 percent. These models capture the slowdown in the economy that occurred in late 2007 and early 2008, but of course miss the post-Lehman episode. The decline in real GDP that occurred in 2008:Q4 is far in the tails of the forecast distribution for the SWπ model, but less so for the two SWπ-FF specifications.
In July 2008, the Blue Chip forecast and the mean forecast for the SWπ model were roughly aligned. Both foresaw a weak economy–but not a recession–in 2008, and a rebound in 2009. The two SWπ-FF specifications were less sanguine: Their forecast for 2008 was only slightly more pessimistic than the Blue Chip’s; but, unlike the Blue Chip, these models did not foresee a strong rebound in the economy in 2009. Both models failed to grasp what was coming in 2008:Q4, but at least they put enough probability on negative outcomes that the economy’s strikingly negative growth rate in 2008:Q4 is almost within the 90 percent bands of their forecast distributions.
By January 2009, conditions had worsened dramatically: Lehman Brothers had filed for bankruptcy a few months earlier (September 15, 2008), stock prices had fallen, financial markets were in disarray, and various current indicators had provided evidence that real activity was tumbling. None of this information was available to the SWπ model, which on January 10, 2009, would have used data up to 2008:Q3. Not surprisingly, the model is, so to speak, in “la-la land” concerning the path of the economy in 2008:Q4 and after. Unlike the SWπ model, the SWπ-FF specification does not forecast a rebound, but at the same time does not foresee the steep decline in growth that occurred in 2008:Q4. The SWπ-FF uses spreads as an observable, but since the Lehman bankruptcy occurred only late in Q3, it had minor effects on the average Baa-ten-year Treasury rate spread for the quarter, and therefore the SWπ-FF has little indirect information on the turmoil in the financial markets.
The forecasts of the SWπ-FF-Current specification, which uses 2008:Q4 observations on spreads and the fed funds rate, are a completely different story. The model produces about the same forecast as the Blue Chip for 2008:Q4. Considering that the agents in the laboratory DSGE economy had not seen Federal Reserve Chairman Bernanke and Treasury Secretary Paulson on TV painting a dramatically bleak picture of the U.S. economy–which the Blue Chip forecasters likely did see–we regard this as a small achievement. But the main lesson may be that structural models can actually produce decent forecasts as long as you’re using appropriate information.
During last year's INET conference at Bretton Woods I wrote:
Re-Kindleberger: I've learned that new economic thinking means reading old books.
Okay, that's not quite fair, but one of the themes of the Institute for New Economic Thinking conference I'm at has been to reintroduce economic history into the undergraduate and graduate programs. I think that's a good idea, as I've said many times, and not just a course on the history of economic thought. There's also a lot we can learn from studying the economic history of the US and other countries. ...
Update: Brad DeLong adds:
Actually, it is not not quite fair, it is fully fair.
This year I'm trying to understand how the attempt to introduce "new economic thinking" into the profession has evolved over the last three years. One innovation this year is to invite students to the conference, and as explained here the response was much larger than expected.
I think this is a step in the right direction. Change won't come from the older, established economists who comprise most of the audience -- gray hair is in excess supply here -- change will come from the younger generation. One of them will come up with a new idea, a new model -- something that pushes the established lines in a way that creates momentum as others join in to push the model forward. I don't mean that older economists who are here won't try to change the world, or that they can't provide the spark that generates the new idea. It's also entirely possible that an established economist will have the insight needed to push the frontier.
But I believe that change will come from the young, not from the old, who mostly adhere to boundaries defined by the models they already know. You can teach an old dog new tricks, surely, but the established economists are mostly set in their ways and will continue to pursue the familiar and the safe. Starting over with a brand new research agenda when you are in your 40s or 50s is possible, I suppose, and I'm sure there are examples of this, but for most it would be too hard and too risky.
Similar risks exist for the young. Setting out in a new direction is hazardous, and if it doesn't pan out tenure will not be granted. It is much easier to contribute to existing knowledge than to create brand new knowledge, and it's much easier to publish as well. So even for the young the established path is very attractive.
That's why bringing students here is important. All the names they've heard, some that they are in awe of -- Nobel prize winners and the like -- are here, and they are sending the younger economists an important message. They are signaling that there are established economists in the very best departments, who play key editorial roles in important journals, who are receptive to good ideas. When Joe Stiglitz, Amartya Sen, James Heckman, and many more names like that stand up and endorse the push to think about economics in a new way, it could give a student with a new idea, and the understandable hesitation that comes with it, the confidence to carry it through. If it works out, there's a good chance some very well-known and respected economists will help to push the idea forward, or at the very least be open-minded, and that provides important motivation to those who might discover something new.
But we have to be careful too. If we push students to try new ideas rather than the established path, and the ideas go nowhere in the end, that could do harm to the individual's career. So what we also need to do -- and I admit that I'm not quite sure how to do this -- is to teach the students, as best we can anyway, what a good idea looks like. What makes a new idea more likely to be successful? What makes it more likely to be received by important journals? How can a student know whether to push forward or to back off?
I think the answer is mentorship of the type that exists between a Ph.D. candidate and their advisor, at least a good one. Part of that process is to help the students ask the right questions about their research, how to find the potential holes and fill them, and so on. So all of us who are pushing the profession to investigate new ideas and new directions need to be willing to talk to students about their ideas, ask them the questions they ought to ask themselves, read preliminary drafts that come by email out of the blue, and help in other ways as we can. We need to provide guidance and at the same time not inhibit the search for new and better paths forward, a somewhat delicate task.
It would be better, of course, if older, established economists did this. They have tenure, and that protects them if things don't work out in the end. They won't be given a terminal year and shoved out the door. But, again, I just don't think that's where change will come from. Instead, it's up to the young. The best we can do is to provide guidance freely, encourage the good ideas and redirect the lesser ones, provide motivation, and to the extent possible shield them from those who are only out to protect their own traditional research from new ideas that challenge their research programs.
Finally, for the conference in the future, it would help if the students were better integrated into the general conference instead of being housed in a separate location, watching a live feed of the conference, and visiting for 10 or 15 minutes with the well-known economists who are willing to come over and visit. Allowing students to attend was a last minute innovation, and I'm told this was the best they could do under the circumstances, but hopefully this will change in the future.
I think this mischaracterizes Paul Krugman's view, though he can certainly speak for himself. But the rest is interesting: Volker Wieland and Maik Wolters on the forecasting ability of modern macroeconomic models (this one should have a warning that it is "very wonkish"):
Macroeconomic model comparisons and forecast competitions, by Volker Wieland and Maik Wolters, Vox EU: The failure of economists to predict the Great Recession of 2008–09 has rightly come under attack. The areas receiving most criticism have been economic forecasting and macroeconomic modelling. Distinguished economists – among them Nobel Prize winner Paul Krugman – have blamed developments in macroeconomic modelling over the last 30 years and particularly the use of dynamic stochastic general equilibrium (DSGE) models for this failure.
Key policymakers take a more pragmatic view, namely that there is no alternative to the use of simplified models, but that the development of complementary tools to improve the robustness of policy decisions is required. For example, former ECB President Jean-Claude Trichet said in late 2010:
The key lesson I would draw from our experience is the danger of relying on a single tool, methodology or paradigm. Policymakers need to have input from various theoretical perspectives and from a range of empirical approaches... We do not need to throw out our DSGE and asset-pricing models: rather we need to develop complementary tools to improve the robustness of our overall framework (Trichet 2010).
Against this backdrop, we present a new paper (Wieland et al 2012) in which we propose a comparative approach to macroeconomic policy analysis that is open to competing modelling paradigms. We have developed a database of macroeconomic models that enables a systematic comparative approach to macroeconomic modelling with the objective of identifying policy recommendations that are robust to model uncertainty. This comparative approach enables individual researchers to conduct model comparisons easily, frequently, at low cost, and on a large scale.
The macroeconomic model database is available to download from www.macromodelbase.com and includes over 50 models. We have included models that are used at policy institutions like the IMF, the ECB, the Fed, and in academia. The database includes models of the US economy, the Eurozone, and several multi-country models. Some of the models are fairly small and focus on explaining output, inflation, and interest-rate dynamics. Many others are of medium scale and cover many key macroeconomic aggregates.
This database can be used to compare the implications of specific economic policies across models, but it can also serve as a testing ground for new models. New modelling approaches may offer more sophisticated explanations of the sources of the financial crisis and carry the promise of improved forecasting performance. This promise should be put to a test rather than presumed (see Wieland and Wolters 2011 for details).
In recent years, researchers such as Smets and Wouters (2004), Adolfson et al (2007) and Edge et al (2010) have reported on the strong forecasting performance of DSGE models. However, the existing papers are based on samples with long periods of average volatility and therefore cannot address specifically how well DSGE model-based forecasts perform during recessions and recoveries. With this in mind, we analyse the forecasting performance of models and experts around the five most recent NBER-defined recessions. Turning points pose the greatest challenge for economic forecasters, are of most importance for policymakers, and can help us to understand current limitations of economic forecasting, especially with respect to the recent financial crisis.
We use two small micro-founded New Keynesian models, two medium-size state-of-the-art New Keynesian business-cycle models – often referred to as DSGE models – and for comparison purposes an earlier-generation New Keynesian model (also with rational expectations and nominal rigidities but less strict microeconomic foundations) and a Bayesian VAR model. For each forecast we re-estimate all five models using exactly the data that were available to professional forecasters when they submitted their forecasts to the SPF. Using these historical data vintages is crucial to ensure comparability with historical forecasts by professionals. We compute successive quarter-by-quarter forecasts up to five quarters ahead for all models.
Predicting the recession of 2008–09
Figure 1 shows forecasts for annualised quarterly real output growth for the recent financial crisis. The black line shows real-time data until the forecast starting point and revised data afterwards. The grey lines show forecasts collected in the SPF and the green line shows their mean. Model forecasts are shown in red. While data for real GDP become available with a lag of one quarter, professional forecasters can use within-quarter information from data series with a higher frequency. In contrast, the models can process only quarterly data. To put the models on an equal footing in terms of information with the forecasts of experts, we condition their forecasts on the mean estimate of the current state of the economy from the SPF.
Notes: Solid black line shows annualised quarterly output growth (real-time data vintage until forecast starting point and revised data afterwards), grey lines show forecasts from the SPF, green line shows mean forecast from the SPF, red lines show model forecasts conditional on the mean nowcast from the SPF.
The forecasts shown in the left [top] graph start in the third quarter of 2008 and were computed before the collapse of Lehman Brothers. It is apparent that all professional forecasters failed to foresee the downturn. The mean SPF forecast indicates a slowdown of growth in the fourth quarter of 2008 followed by a return to higher growth in the first quarter of 2009. The model-based forecasts would not have performed any better, predicting even higher growth rates than most professional forecasters. The graph on the right [bottom] shows that in the fourth quarter of 2008, following the Lehman debacle, professional forecasters drastically revised their assessments of the current state of the economy downwards. Still, growth turned out to be much lower than even these revised estimates. Professional forecasters as well as model forecasts wrongly predicted that the trough had already been reached. While the models predict positive growth rates one quarter ahead, some of the professional forecasters were somewhat more pessimistic. The model-based predictions and the professional forecasters were, however, far from predicting a downturn as extreme as a 6% decline in output.
Given this failure to predict the recession and its length and depth, the widespread criticism of the state of economic forecasting before and during the financial crisis applies to business forecasting experts as well as modern and older macroeconomic models. Professional forecasters, who are able to use information from hundreds of data series, including information about financial market conditions and all kinds of different forecasting tools, and thus have a clear advantage over purely model-based forecasts, were not able to predict the Great Recession either. Thus, there is no reason to single out DSGE models, and favour more traditional Keynesian-style models that may still be more popular among business experts. In particular, Paul Krugman’s proposal to rely on such models for policy analysis in the financial crisis and disregard three decades of economic research is misplaced.
Is there any hope left for economic forecasting and the use of modern structural models in this endeavour?
Figure 2 shows professional and model-based forecasts starting in the first and the second quarter of 2009. Professional forecasters continued to revise their estimated nowcast downwards for the first quarter of 2009 and predict an increase of growth rates afterwards. Interestingly, from the first quarter of 2009 onwards the model-based forecasts perform quite well in predicting the recovery of the US economy. Three-quarters-ahead model-based forecasts dominate expert forecasts in several cases.
Comparing the forecasting accuracy of professional and model-based forecasts
The model forecasts are on average less accurate than the mean SPF forecasts (see Wieland and Wolters 2011 for detailed results). Of course, taking the mean of all forecasts collected in the SPF can increase the forecasting accuracy compared to individual forecasts. Looking at individual forecasts from the SPF we observe that the precision of the different model forecasts is well in line with the precision range of forecasts from professionals.
Computing the mean forecast of all models we obtain a robust forecast that is close to the accuracy of the forecast from the best model. Conditioning the model forecasts on the nowcast of professional forecasters (reported in the paper) can further increase the accuracy of model-based forecasts. Overall, model-based forecasts still exhibit somewhat greater errors than expert forecasts, but this difference is surprisingly small considering that the models only take into account few economic variables and incorporate theoretical restrictions that are essential for evaluations of the impact of alternative policies but often considered a hindrance for effective forecasting.
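The pooling logic described above (averaging the model forecasts before scoring them) can be sketched as follows; the forecast numbers are hypothetical, invented only to show the computation:

```python
import numpy as np

def rmse(forecast, actual):
    """Root mean squared forecast error."""
    f, a = np.asarray(forecast, float), np.asarray(actual, float)
    return float(np.sqrt(np.mean((f - a) ** 2)))

# Hypothetical quarterly growth forecasts from three models, plus outcomes.
model_forecasts = np.array([
    [0.5,  0.1, -0.3],   # model A (too optimistic throughout)
    [0.1, -0.3, -0.7],   # model B (too pessimistic throughout)
    [0.6,  0.2,  0.0],   # model C
])
actual = np.array([0.3, -0.1, -0.5])

pooled = model_forecasts.mean(axis=0)  # mean-of-models forecast
individual = [rmse(f, actual) for f in model_forecasts]
print("individual RMSEs:", individual)
# Pooling offsets the models' opposite biases in this made-up example.
print("mean-of-models RMSE:", rmse(pooled, actual))
```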
Both model forecasts and professional forecasts failed to predict the financial crisis. At the current state of knowledge about macroeconomics and the limitations to use all this knowledge in simplified models, large recessions might just be difficult to forecast.
By comparing the forecasts from different models we can hedge against outliers and find predictions that are robust across several models. Our macroeconomic model database provides a testing ground for macroeconomists to compare new models to a large range of existing benchmarks. We thus provide the tools for a comparison with established benchmarks and current forecasting practice as documented in the SPF. It is important to base discussions about competing modelling approaches on a solid basis. In our research we show how such a comparison of different models can be pursued.
[This is an edited version of something I've posted here in the past. I'm hoping others will be motivated to add to (or correct) this history.]
The term "New Classical economics" is often used as though it is one of the dominant models in macroeconomics, but the term has a very specific meaning and refers to a class of models that is no longer popular.
The New Classical model has four important elements: rational expectations, the natural rate hypothesis, continuous market clearing, and imperfect information (imperfect information drives cycles in these models). The imperfect information assumption was quite clever in that it allowed proponents of this model to explain correlations between money and income without acknowledging that systematic, predictable monetary or fiscal policy would have any effect at all on real output and employment (put another way, only unexpected changes in monetary policy matter; expected changes are fully neutralized by private sector responses to the policy).
The New Classical model is important for the foundation it provided for later models, particularly the movement in macroeconomics toward microeconomic foundations and the use of rational agents within macro models, but the model itself could not simultaneously explain both the duration and magnitude of actual cycles. It also had difficulty explaining some key correlations among macroeconomic variables, and it was difficult to understand why a market for the absent information did not develop if the consequences of imperfect information were as large as the New Classical model implied. At Chicago, where these models were popularized, markets pop up as needed, so the fact that there was no market to help agents avoid the confusion that drives the New Classical model was a strike against it. In addition, one of the model's key results, that only unexpected changes in money can affect real variables, did not hold up when taken to the data (though there are still a few die-hards on this). So the profession moved on.
The New Classical model had replaced the old Keynesian model after the old Keynesian model's shortcomings were blamed, at least in part, for the problems we had in the 1970s. The old Keynesian model was also abandoned for theoretical reasons that will be described in a moment.
But while the New Classical economists were having their day in the sun, the Keynesians were quietly working behind the scenes to fix the problems that caused the old Keynesian model to go out of favor (or not so quietly in a few cases). The old Keynesian model had a poor model of expectations. If expectations were considered at all, they were usually modeled as a naive adaptive process. In addition, it was not clear that the assumptions and relationships embedded within the old Keynesian model were consistent with optimizing behavior on the part of households and firms. The New Keynesian model solved this by deriving macroeconomic relationships from microeconomic optimizing behavior, and by adopting the rational expectations framework. And the New Keynesians made one other important change. In order for systematic monetary policy (e.g. following a Taylor rule) to affect real variables such as output and employment, there must be some type of friction that prevents the economy from immediately moving to its long-run equilibrium. The friction in the New Classical model is informational: agents optimize given the information that they have, but because the information is imperfect, the decisions they make take the economy away from its optimal long-run path. In the New Keynesian model the friction that gives monetary policy its power is sluggish movement of prices and wages (generally modeled through something called the Calvo pricing rule). This friction is somewhat controversial, and the precise degree of price rigidity in the economy is the subject of intense research.
Many people who use the term New Classical -- a natural counterpart to the term New Keynesian -- seem to have in mind some version of a Real Business Cycle model where prices are, in fact, assumed to be fully flexible, agents are rational, all markets clear, policy is neutral, etc. In these models, actual output is always equal to potential (so there's no need for policy to do anything but maximize the growth of potential output, hence the supply-side orientation of advocates of this approach). Potential output moves over time in response to productivity and taste shocks, i.e. supply shocks, and that is the source of business cycles in this class of models. Demand shocks, which drive business cycles in New Keynesian models (as well as New Classical and Old Keynesian models), have little or no effect on real output and employment.
I was recently labeled as a "neoclassical" economist, so let me end by making it clear that not all of us believe that assuming fully flexible prices and continuous market clearing is the proper way to model the economy. Prior to the crisis I was an advocate of sticky price/sticky wage New Keynesian models, and quite resistant to pure Real Business Cycle approaches. But I am less of a fan of the New Keynesian model than I once was. I still think it's a good model to explain mild fluctuations of the type we had during the Great Moderation, and I still think the tools and techniques macroeconomists use, what is collectively labeled DSGE models, are the right way to go (though I would still like to see competing models challenge this view). But to be useful in a crisis like we just had the models have to be amended to better connect the real and financial sectors -- the connection between breakdowns in financial intermediation and the real economy needs to be improved -- and people are working hard to solve these problems. Will they succeed? I certainly hope so.
In a tweet yesterday, one that Steve Keen posts here, I say (after noting that New Keynesian models are DSGE):
...RBC models can surely be categorized as DSGE
Keen then says:
Thoma believe[s] that DSGE models exclusively refer to NK models?
Uh, no. Just read what I said. I can't even call this a nice try. He also says:
I won’t accept Thoma’s excuse for your [Krugman's] behaviour—and nor do some of his own followers
First, it wasn't about Krugman at all. It was about Keen saying something wrong. Second, what do my followers really say (only one actually replied)? That Keen is wrong:
I agree with you. Two very different things
Hence your point with Keen was a good one. I am constantly telling my students to be precise in the use of their "language"
If the only way to win an argument is to misrepresent people to this extent, it isn't a win. To use Keen's term, it's a gigantic "FAIL".
Now let me back up. My point was fairly simple: I said that I didn't consider New Classical models (i.e. information confusion models) DSGE models. Keen claimed they were while trying to defend himself against a charge made by Krugman, and I disagreed. To me, DSGE refers to something different (mainly, but not exclusively, the classes of NK and RBC models). But I also said that I could see how someone could make this argument, especially since NC models provided the intellectual foundation for modern macro models (RE, microfoundations, and equilibrium analysis mainly). Nevertheless, I don't think the term DSGE applies to NC models.
More to the point, however, Keen showed a lack of familiarity with modern models and I am still not sure that he knows the difference between NK, NC, RBC, and NM models, NC and RBC in particular (New Keynesian, New Classical, Real Business Cycle, and New Monetarist). The discussion by Keen just before his figure 3 -- the part that quotes Wikipedia for the authoritative answer as to whether NC models are DSGE -- doesn't even mention NC models. And Keen seems to imply that RBC=NC. That is, he thinks the statement in the Wikipedia quote that "Real business cycle (RBC) theory builds on the neoclassical growth model" means that the RBC model is the same as the NC model. That's wrong (the NC model generates non-neutralities and business cycles through information confusion; the RBC model is driven by productivity and preference shocks, and information confusion is not present in these models -- also, the NC model has faded since its heyday a couple of decades or so ago and it's not a model that macroeconomists generally use today).
As I said in a series of tweets that set this off, I think it's possible to debate this. I don't think DSGE applies to NC, but there's at least an argument to be made. But a rational, reasoned argument is not the response I got (except for quotes from Wikipedia that don't actually support the argument). Instead, Keen misrepresented what I said to try to win the argument. I doubt it was intentional, I'll give the benefit of the doubt -- it's more likely that the finer points were not understood.
Mostly I just want to correct the record, not start a big fight. I don't like having people read that I think DSGE only applies to NK models when that is clearly not what I said. But let me at least try to end on a constructive note by posing a question: What defines a DSGE model? Is the NC model a DSGE model?
Update: Menzie Chinn emails what I think is the correct distinction. DSGE is a technique that can be applied to models of various types:
I have been out on the road and not following closely debates, but I am a bit mystified by the argument over DSGE definitions.
I think of New Keynesian, New Classical, and Real Business Cycle models as approaches. I think of DSGE primarily as a numerical methodology often involving calibration, but not always, but is solved out somehow, that implements one of those approaches.
One can have a New Keynesian model that is just for explication purposes – I think of Blanchard Kiyotaki. Or one can write out a dynamic stochastic general equilibrium model that incorporates New Keynesian attributes (monopolistic pricing, sticky prices, intertemporal optimization) or RBC attributes (flex price, big technology shocks with AR1s in the error terms of the shock processes). Wouldn’t that be a clearer way of differentiating?
Yes, but again, it would be very unusual to bring these techniques to the old-fashioned New Classical models. Thus, calling the NC model a DSGE model is quite a stretch.
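Menzie's point that DSGE is primarily a numerical methodology -- solve a model's decision rules, calibrate the parameters, then simulate -- can be illustrated with a deliberately stripped-down sketch. The coefficients below are hypothetical placeholders, not a solved model: in practice the policy-rule coefficients `lam_k` and `lam_a` would come out of an actual solution method (e.g. Blanchard-Kahn) applied to a specific NK or RBC specification.

```python
import numpy as np

# Hypothetical calibration for a log-linearized RBC-style model.
# All values are illustrative placeholders, not estimates.
rho, sigma = 0.95, 0.007   # persistence and std. dev. of the technology shock
alpha = 0.33               # capital share in production
lam_k, lam_a = 0.96, 0.08  # assumed decision-rule coefficients for capital

def simulate(T=200, seed=0):
    """Simulate log deviations from steady state for T periods."""
    rng = np.random.default_rng(seed)
    a = np.zeros(T)  # log technology
    k = np.zeros(T)  # log capital
    y = np.zeros(T)  # log output
    for t in range(1, T):
        a[t] = rho * a[t - 1] + sigma * rng.standard_normal()  # AR(1) shock process
        k[t] = lam_k * k[t - 1] + lam_a * a[t - 1]             # assumed policy rule
        y[t] = alpha * k[t] + a[t]                             # linearized production
    return a, k, y

a, k, y = simulate()
print(y.std())  # model-implied output volatility under this calibration
```

The point of the sketch is only that the "DSGE" label attaches to this simulate-and-compare-moments workflow, which can be wrapped around NK attributes (sticky prices) or RBC attributes (flex prices, technology shocks) interchangeably -- which is why, as Menzie suggests, the approach and the methodology are better kept as separate labels.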
Update: See Describing DSGEs too.
When Narayana Kocherlakota gave this speech based on this paper, a paper that uses a very simple model that is essentially an IS curve analysis, the economists who believe strongly in the science of monetary policy were appalled. How could Narayana have crossed over to the dark side?
I defended him, and it leads me into a broader discussion of the problems of doing what I've called "real-time economic analysis." Let me start with something I wrote about this awhile back:
Economic research is largely backward looking. After the fact – when all of the data has been collected and the revisions to the data are complete – economists examine data on, say, a financial crisis, and then figure out what caused the economy to become so sick. Once the cause has been determined, which may involve the construction of new theoretical frameworks, they tell us how to avoid it happening again, i.e. the particular set of policies that would have prevented or attenuated the damage.
But the internet and blogs are changing what we do, and to some extent we now act like emergency room physicians rather than pathologists who have the time to carefully examine data from tests, etc., determine what went wrong, and then recommend how to avoid problems in the future. When the financial crisis hit so unexpectedly, it was like a patient showed up at the emergency room very sick and in need of immediate diagnosis and care. We had to reach into our bag of macroeconomic models, choose the one that was correct for this question, and then use it to both diagnose the problems and prescribe policies to fix them. There was no time for a careful retrospective analysis that patiently determined the cause and then went to work on the potential policy responses.
That turned out to be much harder than expected. Our models and cures are not designed for that type of use. What data should we look at to make an immediate diagnosis? What tests should we conduct to give us data on what is wrong with the economy? If we aren’t sure what the cause is but immediate action is needed to save the economy from getting very sick, what is the equivalent of using broad spectrum antibiotics and other drugs to attack unknown problems? The development of blogs puts economists in real-time contact with the public, press, and policymakers, and when a crisis hits, traffic spikes as people come looking for answers.
Blogs are a start to solving the problem of real-time analysis, but we need to do a much better job than we are doing now at providing immediate answers when they are needed. If Lehman is failing and the financial sector is going down with it, or if Europe is in trouble, we need to know what to do right now. It won’t help to figure that out months from now and then publish the findings in a journal article. That means the discipline has to adjust from being backward looking pathologists with plenty of time to determine causes and cures to an emergency room mode where we can offer immediate advice. Blogs are an integral part of that process.
Policymakers at the Federal Reserve face this problem continuously. They must confront changes in the data that aren't always well understood in near real time, and make policy decisions every few weeks. If pre-existing models apply to the problem at hand, great, it can be used to guide policy decisions. But what should policymakers do when they are faced with an important decision about how to react to a large shock, and they reach into their black bag of models and none of them seem to fit?
One approach is what Paul Krugman does so well, something Narayana Kocherlakota seems to also be doing. Reach for simple models that get to the heart of the problem and hence offer guidance about what to do next. These models are not intended to explain the world generally, they are not "science" in that respect, they are intended to shine a light and provide guidance on a very narrow issue. It takes considerable skill to do this since, as I argued yesterday, it requires the practitioner to thoroughly understand the pitfalls of the simple approach, the ways in which it could go wrong.
So I think Narayana and others are correct to reach for simple models for guidance when they are faced with a decision that existing models do not address very well and there's not time to build a full-blown model of the problem.
My call to those who object that this approach is not "science," those who look down their noses at people like Krugman and Kocherlakota when they adopt this approach, is this. What is the scientific way to diagnose the economy in real time, and confront unknown or uncertain pathologies? As I noted in another essay that discusses this problem, doctors have tests that can be done very quickly to provide a diagnosis, and they can then use broad-spectrum drugs and other approaches to try to heal the patient when the tests point to unknown causes.
What tests should we do that are quick and informative? There are lots of data, but what should we be examining to try to diagnose problems effectively before they get really bad? If we detect a problem, and don't fully understand it, what's the most robust way to attack it? What policies tend to work on a broad variety of underlying causes? Are there tests that can guide us to the correct robust policy?
My reaction when the crisis hit, and ever since, was to recommend a "portfolio of policies." People who say only monetary policy will work, or only fiscal policy will work, blah, blah, blah are talking with more confidence than was justified by the models they are using. I decided early on that I really didn't know for sure which macroeconomic model was best. I had my preferences, strong preferences, but I couldn't say for sure that the model I preferred was correct. And it didn't really apply very well to the financial crisis in any case.
So, I thought, why not do what a doctor would do and give a broad spectrum drug that tends to work no matter the cause. There is the danger of side effects. If we aren't sure which policy will work and we give full doses of both monetary and fiscal policy, only to have them both work, the side effect of inflation could occur as the economy heals. But to me the side effect was far less worrisome than the disease itself, and in any case the side effect could be controlled by backing off the dosage once the patient was up and about once again. But what are the optimal weights for monetary and fiscal policy in such a situation? What else ought to be in the portfolio of policies (e.g. policies that can help even if the problem is structural rather than cyclical)? What guidance can we give policymakers?
Those who believe in the science of monetary policy can sneer at the Krugman/Kocherlakota approach all they want, but there's a real (time) problem to be solved here and we could use their help. As I said above this is an area where the Fed has considerable experience, real-time analysis is a large part of what they do, and my push for Federal Reserve banks to interact more through blogs is partly for this reason. Hearing how Federal Reserve policymakers approach these problems would be useful.
But it would also be useful if the profession more generally would get aboard and help us understand how to better solve the difficult questions that arise when decisions must be made based upon only a partial understanding of the problem that is affecting the economy. In the long-run it's still important to build new, full blown models that can explain the problem and provide guidance. Macroeconomists are certainly doing that presently as they try to provide better models of how a breakdown in financial intermediation can impact the real economy than we had before, and so on. But work on how to better conduct real-time analysis is not getting as much attention, and that's something that needs to change.
knzn explains why he is a Keynesian:
Bullish It, by knzn: ...Smith’s blog leads me to think about the issue of macroeconomics as a field. It seems (especially from the comment thread) that the Old Keynesians and the New Monetarists are at each other’s throats (but, interestingly, the newly christened Market Monetarists – who have some claim to being the legitimate intellectual heirs of the Old Monetarists – basically seem to be on the same side as the Old Keynesians on the major issues here; and the New Keynesians can break for either side depending on whether they’re more Keynesian or more New). Obviously I’m more sympathetic to the Old Keynesians than the New Monetarists, otherwise maybe my pseudonym would be “dsge” instead of “knzn.”
Here’s my take: to begin with, economics is basically bulls**t. I mean, it’s necessary bulls**t, sometimes even useful bulls**t, but I’m extremely skeptical of people who think economics is a science or that it could be a science. We have to make policy decisions (and investment decisions and personal consumption decisions etc.), and we have to have some basis for making them. We could just use intuition, and we often do, but it’s helpful to use logical thought and empirical data also, and systematic study using fields like economics can help us to clarify our intuition, our logical arguments, and our interpretation of the empirical data. The same way that bulls**t discussions that don’t make any pretense at being science can help.
Economics is bulls**t because it relies on the premise that human beings behave in a systematic way, and they don’t. Once you have done enough research to convince yourself that they behave in a certain way, they will change and start behaving in another way. Particularly if they read your research and realize that you’re trying to manipulate them by expecting them to continue behaving the way they have. But even if they don’t read your research, they may change the way they behave just because the zeitgeist changes – cultural sunspots, if you will.
The last paragraph may vaguely remind you of the Lucas critique. Lucas basically said that macroeconomics (as it was being practiced at the time) was bulls**t, but he held out the hope that it could receive micro-foundations that wouldn’t be bulls**t. The problem with Lucas’ argument, though, is that microeconomics is also bulls**t. And Noah Smith, writing some 36 years after the Lucas critique and observing its unwholesome results, takes it one step further by saying, if I may paraphrase, “Yes, the microeconomics upon which modern macro has now been founded is indeed bulls**t, but if we do the micro right, then we can come up with non-bulls**t macro.”
Yeah, I doubt it. Maybe we can come up with slightly better macro than what we’ve got now, but the underlying micro is never going to be right. Experimental results involving human subjects are inevitably subject to the micro version of the Lucas critique: once the results become well-known, they become part of a new environment that determines a new set of behavior. And the zeitgeist will screw with them also. And so on. And in any case, even if the results were robust, I’m skeptical that we can really build them into a macro model or that it would be worth the trouble even if we could. Economics will always be bulls**t.
Now there’s a case for doing rigorous bulls**t, at least as a potentially useful exercise. That’s what I think DSGE modeling is: it’s a potentially useful exercise in rigorous bulls**t. And I don’t begrudge the work of people like Steve Williamson: I think there's some rigorous bulls**t there that may be worth talking about. But in general, when it comes to bulls**t, there is not a monotonic relationship between rigor and usefulness. And to put all your eggs in the rigorous bulls**t basket – not only that, but in one particular type of rigorous bulls**t basket, because rigor does not live by rational equilibrium alone – is something that not even Pudd’nhead Wilson could advocate.
So I’m going to stick with sloppy Old Keynesian models as my main mode of macroeconomic analysis. They’re bulls**t. They’re not rigorous bulls**t. But as bulls**t goes, they’re pretty useful. A lot more useful than unaided intuition. And they’re easy enough to understand that we can have a reasonable idea of where their unrealistic assumptions are likely to lead us astray. Of course all economic models have unrealistic assumptions, but hopefully our intuition allows us to correct for that condition when applying the models to the real world. If the model is too complicated for the typical economist to understand how the assumptions generate the conclusions, then the unrealism becomes a real problem.
When you need an answer fast to a question that the newer models don't address sufficiently -- and there are many important questions that fall into this category -- and when you don't have time to build a new model before answering, a situation policymakers face constantly, then the Old Keynesian IS-LM/MP model can fill the void. It is very easy to use for most questions, in part because it has been explored so thoroughly over the decades. I suspect knzn faces this situation often in his job in finance, i.e. he needs an answer today, wants a model for guidance, and doesn't have time to build a full-blown DSGE model, simulate it, etc., so the IS-LM/MP model can fill the void.
But if this approach is adopted, I think it's important not to forget the lessons of the more modern models. For example, the old and new IS curves differ by how they handle expectations of the future. The new model accounts for this, the old models don't. If changes in expectations about the future are arguably unimportant, and other important differences in the models are similarly unimportant, then the old IS-LM/MP model can provide a good approximation. But when these expectations are important, using the old models can cause you to miss important feedback effects from the expected future to the actual present.
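To illustrate the difference, compare the two IS curves in their standard textbook forms (written here in log-linearized notation; this is the generic comparison, not a formula from any particular post quoted above). The old IS curve relates current output only to the current real interest rate, while the new IS curve is forward-looking:

```latex
\text{Old IS:}\quad y_t = \bar{y} - b\, r_t
\qquad\qquad
\text{New IS:}\quad x_t = E_t x_{t+1} - \frac{1}{\sigma}\left(i_t - E_t \pi_{t+1} - r_t^{n}\right)
```

where $y_t$ is output, $r_t$ the real interest rate, $x_t$ the output gap, $i_t$ the nominal rate, $r_t^{n}$ the natural rate, and $\sigma$ the household's intertemporal elasticity parameter. Expected future output $E_t x_{t+1}$ and expected inflation $E_t \pi_{t+1}$ enter the new curve directly -- exactly the feedback channel from the expected future to the actual present that the old curve omits, and the reason the old model can mislead when those expectations are moving.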
The best of both worlds is, I think, better than either alone. The art is knowing what is "best" in each of the two models.
This is a bit on the wonkish side, but since I've talked a lot about the difficulties that heterogeneous agents pose in macroeconomics, particularly for aggregation, I thought I should note this review of models with heterogeneous agents:
Macroeconomics with Heterogeneity: A Practical Guide, by Fatih Guvenen, Economic Quarterly, FRB Richmond: This article reviews macroeconomic models with heterogeneous households. A key question for the relevance of these models concerns the degree to which markets are complete. This is because the existence of complete markets imposes restrictions on (i) how much heterogeneity matters for aggregate phenomena and (ii) the types of cross-sectional distributions that can be obtained. The degree of market incompleteness, in turn, depends on two factors: (i) the richness of insurance opportunities provided by the economic environment and (ii) the nature and magnitude of idiosyncratic risks to be insured. First, I review a broad collection of empirical evidence—from econometric tests of "full insurance," to quantitative and empirical analyses of the permanent income ("self-insurance") model that examine how it fits the facts about life-cycle allocations, to studies that try to directly measure where economies place between these two benchmarks ("partial insurance"). The empirical evidence I survey reveals significant uncertainty in the profession regarding the magnitudes of idiosyncratic risks, as well as whether or not these risks have increased since the 1970s. An important difficulty stems from the fact that inequality often arises from a mixture of idiosyncratic risk and fixed (or predictable) heterogeneity, making the two challenging to disentangle. Second, I discuss applications of incomplete markets models to trends in wealth, consumption, and earnings inequality both over the life cycle and over time, where this challenge is evident. Third, I discuss "approximate" aggregation—the finding that some incomplete markets models generate aggregate implications very similar to representative-agent models. What approximate aggregation does and does not imply is illustrated through several examples. 
Finally, I discuss some computational issues relevant for solving and calibrating such models and I provide a simple yet fully parallelizable global optimization algorithm that can be used to calibrate heterogeneous agent models. View Full Article.
I am here today:
Structural and Cyclical Elements in Macroeconomics
Federal Reserve Bank of San Francisco
Janet Yellen Conference Center, First Floor
March 16, 2012
Morning Session Chair: John Fernald, Federal Reserve Bank of San Francisco
8:10 A.M. Continental Breakfast
8:50 A.M. Welcoming Remarks: John Williams, Federal Reserve Bank of San Francisco
9:00 A.M. Jinzu Chen, International Monetary Fund, Prakash Kannan, International Monetary Fund, Prakash Loungani, International Monetary Fund, Bharat Trehan, Federal Reserve Bank of San Francisco, New Evidence on Cyclical and Structural Sources of Unemployment (PDF - 462KB)
Discussants: Steven Davis, University of Chicago Booth School of Business, Valerie Ramey, University of California, San Diego
10:20 A.M. Break
10:40 A.M. Robert Hall, Stanford University Quantifying the Forces Leading to the Collapse of GDP after the Financial Crisis (PDF - 826KB) Discussants: Antonella Trigari, Università Bocconi, Roger Farmer, University of California, Los Angeles
12:00 P.M. Lunch – Market Street Dining Room, Fourth Floor
Afternoon Session Chair: Eric Swanson, Federal Reserve Bank of San Francisco
1:15 P.M. Charles Fleischman, Federal Reserve Board, John Roberts, Federal Reserve Board, From Many Series, One Cycle: Improved Estimates of the Business Cycle from a Multivariate Unobserved Components Model (PDF - 302KB)
Discussants: Carlos Carvalho, Pontificia Universidade Católica, Rio de Janeiro, Ricardo Reis, Columbia University
2:35 P.M. Break
2:50 P.M. Christopher Carroll, Johns Hopkins University, Jiri Slacalek, European Central Bank, Martin Sommer, International Monetary Fund, Dissecting Saving Dynamics: Measuring Credit, Wealth, and Precautionary Effects (PDF - 1.18MB)
Discussants: Karen Dynan, Brookings Institution, Gauti Eggertsson, Federal Reserve Bank of New York
4:10 P.M. Break
4:25 P.M. Andreas Fuster, Harvard University, Benjamin Hebert, Harvard University, David Laibson, Harvard University Natural Expectations, Macroeconomic Dynamics, and Asset Pricing (PDF - 663KB)
Discussants: Yuriy Gorodnichenko, University of California, Berkeley, Stefan Nagel, Stanford Graduate School of Business
5:45 P.M. Reception – West Market Street Lounge, Fourth Floor
6:30 P.M. Dinner – Market Street Dining Room, Fourth Floor, Introduction: John Williams, Federal Reserve Bank of San Francisco, Speaker: Peter Diamond, Massachusetts Institute of Technology Unemployment and Debt