Category Archive for: Macroeconomics

Tuesday, April 08, 2014

A Model of Secular Stagnation

Gauti Eggertsson and Neil Mehrotra have an interesting new paper:

A Model of Secular Stagnation, by Gauti Eggertsson and Neil Mehrotra: 1 Introduction During the closing phase of the Great Depression in 1938, the President of the American Economic Association, Alvin Hansen, delivered a disturbing message in his Presidential Address to the Association (see Hansen (1939)). He suggested that the Great Depression might just be the start of a new era of ongoing unemployment and economic stagnation without any natural force towards full employment. This idea was termed the “secular stagnation” hypothesis. One of the main driving forces of secular stagnation, according to Hansen, was a decline in the population birth rate and an oversupply of savings that was suppressing aggregate demand. Soon after Hansen’s address, the Second World War led to a massive increase in government spending, effectively ending any concern of insufficient demand. Moreover, the baby boom following WWII drastically changed the population dynamics in the US, thus effectively erasing the problem of excess savings of an aging population that was of principal importance in his secular stagnation hypothesis.
Recently Hansen’s secular stagnation hypothesis has gained increased attention. One obvious motivation is the Japanese malaise that has by now lasted two decades and has many of the same symptoms as the U.S. Great Depression - namely dwindling population growth, a nominal interest rate at zero, and subpar GDP growth. Another reason for renewed interest is that even if the financial panic of 2008 was contained, growth remains weak in the United States and unemployment high. Most prominently, Lawrence Summers raised the prospect that the crisis of 2008 may have ushered in the beginning of secular stagnation in the United States in much the same way as suggested by Alvin Hansen in 1938. Summers suggests that this episode of low demand may even have started well before 2008 but was masked by the housing bubble before the onset of the crisis of 2008. In Summers’ words, we may have found ourselves in a situation in which the natural rate of interest - the short-term real interest rate consistent with full employment - is permanently negative (see Summers (2013)). And this, according to Summers, has profound implications for the conduct of monetary, fiscal and financial stability policy today.
Despite the prominence of Summers’ discussion of the secular stagnation hypothesis and a flurry of commentary that followed it (see e.g. Krugman (2013), Taylor (2014), DeLong (2014) for a few examples), there has not, to the best of our knowledge, been any attempt to formally model this idea, i.e., to write down an explicit model in which unemployment is high for an indefinite amount of time due to a permanent drop in the natural rate of interest. The goal of this paper is to fill this gap. ...[read more]...
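As a back-of-the-envelope illustration of why a permanently negative natural rate is such a bind (my arithmetic, not the paper's): with the nominal rate floored at zero, the realized real rate cannot fall below minus the inflation rate.

```latex
r = i - \pi^e \;\ge\; -\pi^e \quad \text{when } i \ge 0.
% Example: if the full-employment (natural) real rate is r* = -3% while
% expected inflation is 2%, the zero bound keeps r >= -2% > r*, so the
% real rate consistent with full employment is simply unattainable.
```

Raising inflation relaxes that floor, which is part of why a permanent increase in inflation appears among the successful policies they list.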

In the abstract, they note the policy prescriptions for secular stagnation:

In contrast to earlier work on deleveraging, our model does not feature a strong self-correcting force back to full employment in the long-run, absent policy actions. Successful policy actions include, among others, a permanent increase in inflation and a permanent increase in government spending. We also establish conditions under which an income redistribution can increase demand. Policies such as committing to keep nominal interest rates low or temporary government spending, however, are less powerful than in models with temporary slumps. Our model sheds light on the long persistence of the Japanese crisis, the Great Depression, and the slow recovery out of the Great Recession.

Friday, March 21, 2014

'Labor Markets Don't Clear: Let's Stop Pretending They Do'

Roger Farmer:

Labor Markets Don't Clear: Let's Stop Pretending They Do: Beginning with the work of Robert Lucas and Leonard Rapping in 1969, macroeconomists have modeled the labor market as if the wage always adjusts to equate the demand and supply of labor.

I don't think that's a very good approach. It's time to drop the assumption that the demand equals the supply of labor.
Why would you want to delete the labor market clearing equation from an otherwise standard model? Because setting the demand equal to the supply of labor is a terrible way of understanding business cycles. ...
Why is this a big deal? Because 90% of the macro seminars I attend, at conferences and universities around the world, still assume that the labor market is an auction where anyone can work as many hours as they want at the going wage. Why do we let our students keep doing this?

Saturday, February 15, 2014

'Microfoundations and Mephistopheles'

Paul Krugman continues the discussion on "whether New Keynesians made a Faustian bargain":

Microfoundations and Mephistopheles (Wonkish): Simon Wren-Lewis asks whether New Keynesians made a Faustian bargain by accepting the New Classical diktat that models must be grounded in intertemporal optimization — whether they purchased academic respectability at the expense of losing their ability to grapple usefully with the real world.
Wren-Lewis’s answer is no, because New Keynesians were only doing what they would have wanted to do even if there hadn’t been a de facto blockade of the journals against anything without rational-actor microfoundations. He has a point: long before anyone imagined doing anything like real business cycle theory, there had been a steady trend in macro toward grounding ideas in more or less rational behavior. The life-cycle model of consumption, for example, was clearly a step away from the Keynesian ad hoc consumption function toward modeling consumption choices as the result of rational, forward-looking behavior.
But I think we need to be careful about defining what, exactly, the bargain was. I would agree that being willing to use models with hyperrational, forward-looking agents was a natural step even for Keynesians. The Faustian bargain, however, was the willingness to accept the proposition that only models that were microfounded in that particular sense would be considered acceptable. ...
So it was the acceptance of the unique virtue of one concept of microfoundations that constituted the Faustian bargain. And one thing you should always know, when making deals with the devil, is that the devil cheats. New Keynesians thought that they had won some acceptance from the freshwater guys by adopting their methods; but when push came to shove, it turned out that there wasn’t any real dialogue, and never had been.

My view is that micro-founded models are useful for answering some questions, but other types of models are best for other questions. There is no one model that is best in every situation; the model that should be used depends upon the question being asked. I've made this point many times, most recently in this column, and also in this post from September 2011 that repeats arguments from September 2009:

New Old Keynesians?: Tyler Cowen uses the term "New Old Keynesian" to describe "Paul Krugman, Brad DeLong, Justin Wolfers and others." I don't know if I am part of the "and others" or not, but in any case I resist being assigned a particular label.

Why? Because I believe the model we use depends upon the questions we ask (this is a point emphasized by Peter Diamond at the recent Nobel Meetings in Lindau, Germany, and echoed by other speakers who followed him). If I want to know how monetary authorities should respond to relatively mild shocks in the presence of price rigidities, the standard New Keynesian model is a good choice. But if I want to understand the implications of a breakdown in financial intermediation and the possible policy responses to it, those models aren't very informative. They weren't built to answer this question (some variations do get at this, but not in a fully satisfactory way).

Here's a discussion of this point from a post written two years ago:

There is no grand, unifying theoretical structure in economics. We do not have one model that rules them all. Instead, what we have are models that are good at answering some questions - the ones they were built to answer - and not so good at answering others.

If I want to think about inflation in the very long run, the classical model and the quantity theory are a very good guide. But the model is not very good at looking at the short-run. For questions about how output and other variables move over the business cycle and for advice on what to do about it, I find the Keynesian model in its modern form (i.e. the New Keynesian model) to be much more informative than other models that are presently available.
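(To spell out the quantity-theory logic, which is standard and not specific to any one paper: start from the identity MV = PY and take growth rates.)

```latex
MV = PY \;\Rightarrow\; \%\Delta M + \%\Delta V = \pi + \%\Delta Y
\;\Rightarrow\; \pi \approx \%\Delta M - \%\Delta Y \quad \text{if } \%\Delta V \approx 0,
```

so with velocity roughly stable over long horizons, long-run inflation is pinned down by money growth in excess of output growth, while the identity says nothing about short-run fluctuations.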

But the New Keynesian model has its limits. It was built to capture "ordinary" business cycles driven by price sluggishness of the sort that can be captured by the Calvo model of price rigidity. The standard versions of this model do not explain how financial collapses of the type we just witnessed come about, hence they have little to say about what to do about them (which makes me suspicious of the results touted by people using multipliers derived from DSGE models based upon ordinary price rigidities). For these types of disturbances, we need some other type of model, but it is not clear what model is needed. There is no generally accepted model of financial catastrophe that captures the variety of financial market failures we have seen in the past.

But what model do we use? Do we go back to old Keynes, or to the 1978 model that Robert Gordon likes? Do we take some of the variations of the New Keynesian model that include effects such as financial accelerators and try to enhance those? Is that the right direction to proceed? Are the Austrians right? Do we focus on Minsky? Or do we need a model that we haven't discovered yet?

We don't know, and until we do, I will continue to use the model I think gives the best answer to the question being asked. The reason that many of us looked backward for a model to help us understand the present crisis is that none of the current models were capable of explaining what we were going through. The models were largely constructed to analyze policy in the context of a Great Moderation, i.e. within a fairly stable environment. They had little to say about financial meltdown. My first reaction was to ask if the New Keynesian model had any derivative forms that would allow us to gain insight into the crisis and what to do about it and, while there were some attempts in that direction, the work was somewhat isolated and had not gone through the type of thorough analysis needed to develop robust policy prescriptions. There was something to learn from these models, but they really weren't up to the task of delivering specific answers. That may come, but we aren't there yet.

So, if nothing in the present is adequate, you begin to look to the past. The Keynesian model was constructed to look at exactly the kinds of questions we needed to answer, and as long as you are aware of the limitations of this framework - the ones that modern theory has discovered - it does provide you with a means of thinking about how economies operate when they are running at less than full employment. This model had already worried about fiscal policy at the zero interest rate bound, it had already thought about Say's law, the paradox of thrift, monetary versus fiscal policy, changing interest and investment elasticities in a crisis, etc., etc., etc. We were in the middle of a crisis and didn't have time to wait for new theory to be developed, we needed answers, answers that the elegant models that had been constructed over the last few decades simply could not provide. The Keynesian model did provide answers. We knew the answers had limitations - we were aware of the theoretical developments in modern macro and what they implied about the old Keynesian model - but it also provided guidance at a time when guidance was needed, and it did so within a theoretical structure that was built to be useful at times like we were facing. I wish we had better answers, but we didn't, so we did the best we could. And the best we could involved at least asking what the Keynesian model would tell us, and then asking if that advice has any relevance today. Sometimes it didn't, but that was no reason to ignore the answers when it did.

[So, depending on the question being asked, I am a New Keynesian, an Old Keynesian, a Classicist, etc.]

Friday, February 14, 2014

'Are New Keynesian DSGE Models a Faustian Bargain?'

Simon Wren-Lewis:

 Are New Keynesian DSGE models a Faustian bargain?: Some write as if this were true. The story is that after the New Classical counter revolution, Keynesian ideas could only be reintroduced into the academic mainstream by accepting a whole load of New Classical macro within DSGE models. This has turned out to be a Faustian bargain, because it has crippled the ability of New Keynesians to understand subsequent real world events. Is this how it happened? It is true that New Keynesian models are essentially RBC models plus sticky prices. But is this because New Keynesian economists were forced to accept the RBC structure, or did they voluntarily do so because they thought it was a good foundation on which to build? ...

Wednesday, February 12, 2014

'Is Increased Price Flexibility Stabilizing? Redux'

I need to read this:

Is Increased Price Flexibility Stabilizing? Redux, by Saroj Bhattarai, Gauti Eggertsson, and Raphael Schoenle, NBER Working Paper No. 19886, February 2014 [Open Link]: Abstract We study the implications of increased price flexibility on output volatility. In a simple DSGE model, we show analytically that more flexible prices always amplify output volatility for supply shocks and also amplify output volatility for demand shocks if monetary policy does not respond strongly to inflation. More flexible prices often reduce welfare, even under optimal monetary policy if full efficiency cannot be attained. We estimate a medium-scale DSGE model using post-WWII U.S. data. In a counterfactual experiment we find that if prices and wages are fully flexible, the standard deviation of annualized output growth more than doubles.

Thursday, February 06, 2014

'How the New Classicals Drank the Austrians' Milkshake'

In a tweet, Roger Farmer says "This is a very good summary of Austrian vs classical Econ":

How the New Classicals drank the Austrians' milkshake: The "Austrian School of Economics" is still a name that is lovingly invoked by goldbugs, Zero Hedgies, Ron Paulians, and various online rightists. But as a program of scholarship it seems mostly dead. There is a group of "Austrians" at George Mason and NYU trying to revive the school by evolving it in the direction of mainstream econ, and then there is the Mises Institute, which contents itself with bathing in the fading glow of the works of the Old Masters. But in the main, "Austrian economics" is an ex-thing. It seems to me that the Austrian School's demise came not because its ideas were rejected and marginalized, but because most of them were co-opted by mainstream macroeconomics. The "New Classical" research program of Robert Lucas and Ed Prescott shares just enough similarities with the Austrian school to basically steal all their thunder. The main points being...

Wednesday, January 29, 2014

'No, Micro is not the "Good" Economics'

Greg Ip at The Economist:

No, micro is not the "good" economics: If asked to compile a list of economists’ mistakes over the last decade, I would not know where to start. Somewhere near the top would be failure to predict the global financial crisis. Even higher on the list would be failure to agree, five years later, on its cause. Is this fair? Not according to Noah Smith: these, he says, were not errors of economics but of macroeconomics. Microeconomics is the good economics, where economists by and large agree, conduct controlled experiments that confirm or modify established theory and lead to all sorts of welfare-enhancing outcomes.
To which I respond with two words: minimum wage..., ask any two economists – macro, micro, whatever – whether raising the minimum wage will reduce employment for the low skilled, and odds are you will get two answers. Sometimes more. (By contrast, ask them if raising interest rates will reduce output within a year or two, and almost all – that is, excepting real-business cycle purists – will say yes.)
Are there reasons a higher minimum wage will not have the textbook effect? Of course. ... But microeconomists are kidding themselves if they think this plethora of plausible explanations makes their branch of economics any more scientific or respectable than standard macroeconomics. ...

[There's quite a bit more in the original.]

Saturday, January 25, 2014

'Is Macro Giving Economics a Bad Rap?'

Chris House defends macro:

Is Macro Giving Economics a Bad Rap?: Noah Smith really has it in for macroeconomists. He has recently written an article in The Week in which he claims that macro is one of the weaker fields in economics...

I think the opposite is true. Macro is one of the stronger fields, if not the strongest ... Macro is quite productive and overall quite healthy. There are several distinguishing features of macroeconomics which set it apart from many other areas in economics. In my assessment, along most of these dimensions, macro comes out looking quite good.

First, macroeconomists are constantly comparing models to data. ... Holding theories up to the data is a scary and humiliating step but it is a necessary step if economic science is to make progress. Judged on this basis, macro is to be commended...

Second, in macroeconomics, there is a constant push to quantify theories. That is, there is always an effort to attach meaningful parameter values to the models. You can have any theory you want but at the end of the day, you are interested not only in the idea itself, but also in the magnitude of the effects. This is again one of the ways in which macro is quite unlike other fields.

Third, when the models fail (and they always fail eventually), the response of macroeconomists isn’t to simply abandon the model, but rather to highlight the nature of the failure. ...

Lastly, unlike many other fields, macroeconomists need to have a wide array of skills and familiarity with many sub-fields of economics. As a group, macroeconomists have knowledge of a wide range of analytical techniques, probably better knowledge of history, and greater familiarity and appreciation of economic institutions than the average economist.

In his opening remarks, Noah concedes that macro is “the glamor division of econ”. He’s right. What he doesn’t tell you is that the glamour division is actually doing pretty well. ...

Saturday, January 18, 2014

'Paul Krugman & the Nature of Economics'

Chris Dillow:

Paul Krugman & the nature of economics: Paul Krugman is being accused of hypocrisy for calling for an extension of unemployment benefits when one of his textbooks says "Generous unemployment benefits can increase both structural and frictional unemployment." I think he can be rescued from this charge, if we recognize that economics is not like (some conceptions of) the natural sciences, in that its theories are not universally applicable but rather of only local and temporal validity.
What I mean is that "textbook Krugman" is right in normal times when aggregate demand is highish. In such circumstances, giving people an incentive to find work through lower unemployment benefits can reduce frictional unemployment (the coexistence of vacancies and joblessness) and so increase output and reduce inflation.
But these might well not be normal times. It could well be that demand for labour is unusually weak; low wage inflation and employment-population ratios suggest as much. In this world, the priority is not so much to reduce frictional unemployment as to reduce "Keynesian unemployment". And increased unemployment benefits - insofar as they are a fiscal expansion - might do this. When "columnist Krugman" says that "enhanced [unemployment insurance] actually creates jobs when the economy is depressed", the emphasis must be upon the last five words.
Indeed, incentivizing people to find work when it is not (so much) available might be worse than pointless. Cutting unemployment benefits might incentivize people to turn to crime rather than legitimate work.
So, it could be that "columnist Krugman" and "textbook Krugman" are both right, but they are describing different states of the world - and different facts require different models...

Thursday, January 02, 2014

'Tribalism, Biology, and Macroeconomics'

Paul Krugman:

Tribalism, Biology, and Macroeconomics: ...Pew has a new report about changing views on evolution. The big takeaway is that a plurality of self-identified Republicans now believe that no evolution whatsoever has taken place since the day of creation... The move is big: an 11-point decline since 2009. ... Democrats are slightly more likely to believe in evolution than they were four years ago.
So what happened after 2009 that might be driving Republican views? The answer is obvious, of course: the election of a Democratic president.
Wait — is the theory of evolution somehow related to Obama administration policy? Not that I’m aware of... The point, instead, is that Republicans are being driven to identify in all ways with their tribe — and the tribal belief system is dominated by anti-science fundamentalists. For some time now it has been impossible to be a good Republican while believing in the reality of climate change; now it’s impossible to be a good Republican while believing in evolution.
And of course the same thing is happening in economics. As recently as 2004, the Economic Report of the President (pdf) of a Republican administration could espouse a strongly Keynesian view..., the report — presumably written by Greg Mankiw — used the “s-word”, calling for “short-term stimulus”.
Given that intellectual framework, the reemergence of a 30s-type economic situation ... should have made many Republicans more Keynesian than before. Instead, at just the moment that demand-side economics became obviously critical, we saw Republicans — the rank and file, of course, but economists as well — declare their fealty to various forms of supply-side economics, whether Austrian or Lafferian or both. ...
And look, this has to be about tribalism. All the evidence ... has pointed in a Keynesian direction; but Keynes-hatred (and hatred of other economists whose names begin with K) has become a tribal marker, part of what you have to say to be a good Republican.

Before the Great Recession, macroeconomists seemed to be converging to a single intellectual framework. In Olivier Blanchard's famous words:

after the explosion (in both the positive and negative meaning of the word) of the field in the 1970s, there has been enormous progress and substantial convergence. For a while - too long a while - the field looked like a battlefield. Researchers split in different directions, mostly ignoring each other, or else engaging in bitter fights and controversies. Over time however, largely because facts have a way of not going away, a largely shared vision both of fluctuations and of methodology has emerged. Not everything is fine. Like all revolutions, this one has come with the destruction of some knowledge, and suffers from extremism, herding, and fashion. But none of this is deadly. The state of macro is good.

The recession revealed that the "extremism, herding, and fashion" is much worse than many of us realized, and the rifts that have reemerged are as strong as ever. What it didn't reveal is how to move beyond this problem. I thought evidence would matter more than it does, but somehow we seem to have lost the ability to distinguish between competing theoretical structures based upon econometric evidence (if we ever had it). The state of macro is not good, and the path to improvement is hard to see, but it must involve a shared agreement over the evidence-based means through which the profession on both sides of these debates can embrace or reject particular theoretical models.

Thursday, December 19, 2013

'More on the Illusion of Superiority'

Simon Wren-Lewis:

More on the illusion of superiority: For economists, and those interested in methodology. Tony Yates responds to my comment on his post on microfoundations, but really just restates the microfoundations purist position. (Others have joined in - see links below.) As Noah Smith confirms, this is the position that many macroeconomists believe in, and many are taught, so it’s really important to see why it is mistaken. There are three elements I want to focus on here: the Lucas critique, what we mean by theory, and time.
My argument can be put as follows: an ad hoc but data inspired modification to a microfounded model (what I call an eclectic model) can produce a better model than a fully microfounded model. Tony responds “If the objective is to describe the data better, perhaps also to forecast the data better, then what is wrong with this is that you can do better still, and estimate a VAR.” This idea of “describing the data better”, or forecasting, is a distraction, so let’s say I want a model that provides a better guide for policy actions. So I do not want to estimate a VAR. My argument still stands.
But what about the Lucas critique? ...[continue]...

[In Maui, will post as I can...]

Tuesday, December 17, 2013

'Four Missing Ingredients in Macroeconomic Models'

Antonio Fatas:

Four missing ingredients in macroeconomic models: It is refreshing to see top academics questioning some of the assumptions that economists have been using in their models. Krugman, Brad DeLong and many others are opening a methodological debate about what constitutes an acceptable economic model and how to validate its predictions. The role of micro foundations, the existence of a natural state towards which the economy gravitates,... are all very interesting debates that tend to be ignored (or assumed away) in academic research.

I would like to go further and add a few items to their list... In random order:

1. The business cycle is not symmetric. ... Interestingly, it was Milton Friedman who put forward the "plucking" model of business cycles as an alternative to the notion that fluctuations are symmetric. In Friedman's model, output can only be at or below its potential, or maximum, level. If we were to rely on asymmetric models of the business cycle, our views on potential output and the natural rate of unemployment would be radically different. We would not be rewriting history to claim that in 2007 GDP was above potential in most OECD economies, and we would not be arguing that the natural unemployment rate in Southern Europe is very close to its actual rate.

2. ...most academic research is produced around models where small and frequent shocks drive economic fluctuations, as opposed to large and infrequent events. The disconnect comes probably from the fact that it is so much easier to write models with small and frequent shocks than having to define a (stochastic?) process for large events. It gets even worse if one thinks that recessions are caused by the dynamics generated during expansions. Most economic models rely on unexpected events to generate crises, and not on the internal dynamics that precede them.

[A little bit of self-promotion: my paper with Ilian Mihov on the shape and length of recoveries presents some evidence in favor of these two hypotheses.]

3. There has to be more than price rigidity. ...

4. The notion that co-ordination across economic agents matters to explain the dynamics of business cycles receives very limited attention in academic research. ...

I am aware that there are plenty of papers that deal with these four issues, some of them published in the best academic journals. But most of these papers are not mainstream. Most economists are sympathetic to these assumptions but avoid writing papers using them because they are afraid they will be told that their assumptions are ad hoc and that the model does not have enough micro foundations (for the best criticism of this argument, read the latest post by Simon Wren-Lewis). Time for a change?

On the plucking model, see here and here.
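As a rough illustration of the asymmetry in Fatas's first point, here is a minimal simulation sketch of the plucking idea. The parameter values (pluck probability, size, and persistence) are illustrative assumptions, not estimates:

```python
import numpy as np

rng = np.random.default_rng(0)
T = 200  # quarters

# A smooth ceiling: potential output grows at roughly 2% per year.
potential = 100 * 1.005 ** np.arange(T)

# Rare downward "plucks": a 5% chance per quarter of a shock of 2-8% (illustrative).
plucks = np.where(rng.random(T) < 0.05, rng.uniform(2.0, 8.0, T), 0.0)

# The gap decays geometrically after each pluck, so output recovers toward
# the ceiling but never rises above it -- that is the asymmetry.
gap = np.zeros(T)
for t in range(1, T):
    gap[t] = 0.8 * gap[t - 1] + plucks[t]

output = potential * (1 - gap / 100)
assert (output <= potential).all()  # fluctuations are one-sided by construction
```

In a world like this, estimating potential as the average path of output systematically understates it, which is exactly the rewriting-history problem Fatas flags.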

Friday, December 13, 2013

Sticky Ideology

Paul Krugman:

Rudi Dornbusch and the Salvation of International Macroeconomics (Wonkish): ...Ken Rogoff had a very good paper on all this, in which he also says something about the state of affairs within the economics profession at the time:

The Chicago-Minnesota School maintained that sticky prices were nonsense and continued to advance this view for at least another fifteen years. It was the dominant view in academic macroeconomics. Certainly, there was a long period in which the assumption of sticky prices was a recipe for instant rejection at many leading journals. Despite the religious conviction among macroeconomic theorists that prices cannot be sticky, the Dornbusch model remained compelling to most practical international macroeconomists. This divergence of views led to a long rift between macroeconomics and much of mainstream international finance …

There are more than a few of us in my generation of international economists who still bear the scars of not being able to publish sticky-price papers during the years of new neoclassical repression.

Notice that this isn’t the evil Krugman talking; it’s the respectable Rogoff. Yet he too is in effect describing neoclassical macro as a sort of cult, actively suppressing alternative approaches. What he gets wrong is in the part I’ve elided with my “…”, in which he asserts that this is all behind us. As we saw when crisis struck, Chicago/Minnesota had in fact learned nothing and was pretty much unaware of the whole New Keynesian enterprise — and from what I hear about macro hiring, the suppression of ideas at odds with the cult remains in full force. ...

Wednesday, December 04, 2013

'Microfoundations': I Do Not Think That Word Means What You Think It Means

Brad DeLong responds to my column on macroeconomic models:

“Microfoundations”: I Do Not Think That Word Means What You Think It Means

The basic point is this:

...New Keynesian models with more or less arbitrary micro foundations are useful for rebutting claims that all is for the best macroeconomically in this best of all possible macroeconomic worlds. But models with micro foundations are not of use in understanding the real economy unless you have the micro foundations right. And if you have the micro foundations wrong, all you have done is impose restrictions on yourself that prevent you from accurately fitting reality.
Thus your standard New Keynesian model will use Calvo pricing and model the current inflation rate as tightly coupled to the present value of expected future output gaps. Is this a requirement anyone really wants to put on the model intended to help us understand the world that actually exists out there? ...
After all, Ptolemy had microfoundations: Mercury moved more rapidly than Saturn because the Angel of Mercury beat his wings more rapidly than the Angel of Saturn and because Mercury was lighter than Saturn…
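To spell out the Calvo point DeLong is making, here is the textbook New Keynesian Phillips curve that Calvo pricing delivers, together with its forward iteration (standard notation, not a quote from DeLong):

```latex
\pi_t = \beta\,\mathbb{E}_t \pi_{t+1} + \kappa x_t
\;\Rightarrow\;
\pi_t = \kappa \sum_{k=0}^{\infty} \beta^k \, \mathbb{E}_t x_{t+k},
```

where x_t is the output gap, beta the discount factor, and kappa a composite that depends on the Calvo price-reset probability. The forward solution is the "tight coupling" DeLong objects to: current inflation is determined entirely by the expected present value of future output gaps, with no intrinsic inertia.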

Tuesday, December 03, 2013

One Model to Rule Them All?

Latest column:

Is There One Model to Rule Them All?: The recent shake-up at the research department of the Federal Reserve Bank of Minneapolis has rekindled a discussion about the best macroeconomic model to use as a guide for policymakers. Should we use modern New Keynesian models that performed so poorly prior to and during the Great Recession? Should we return to a modernized version of the IS-LM model that was built to explain the Great Depression and answer the questions we are confronting today? Or do we need a brand new class of models altogether? ...

Sunday, December 01, 2013

God Didn’t Make Little Green Arrows

Paul Krugman notes work by my colleague George Evans relating to the recent debate over the stability of GE models:

God Didn’t Make Little Green Arrows: Actually, they’re little blue arrows here. In any case George Evans reminds me of a paper (pdf) he and co-authors published in 2008 about stability and the liquidity trap, which he later used to explain what was wrong with the Kocherlakota notion (now discarded, but still apparently defended by Williamson) that low rates cause deflation.

The issue is the stability of the deflation steady state ("on the importance of little arrows"). This is precisely the issue George studied in his 2008 European Economic Review paper with E. Guse and S. Honkapohja. The following figure from that paper has the relevant little arrows:

[Figure: phase diagram for inflation and consumption expectations under adaptive learning, from Evans, Guse, and Honkapohja (2008)]

This is the two-dimensional figure showing the phase diagram for inflation and consumption expectations under adaptive learning (in the New Keynesian model, both consumption (or output) expectations and inflation expectations are central). The intended steady state (marked by a star) is locally stable under learning, but the deflation steady state (given by the other intersection of black curves) is not locally stable, and there are nearby divergent paths with falling inflation and falling output. There is also a two-page summary in George's 2009 Annual Review of Economics paper.
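The two intersections have a simple source: combining the Fisher relation with a Taylor-type rule subject to the zero bound gives two steady states (a standard observation in this literature; the notation here is mine, not the paper's):

```latex
i = r + \pi \quad \text{(Fisher)}, \qquad
i = \max\{\,0,\; r + \pi^* + \phi(\pi - \pi^*)\,\}, \quad \phi > 1.
% Intersection 1: \pi = \pi^*, the intended steady state, locally stable under learning.
% Intersection 2: \pi = -r, the deflation steady state at i = 0, not locally stable,
% with nearby divergent paths of falling inflation and output.
```

Rational-expectations analysis alone cannot distinguish the two; it is the learning dynamics, the "little arrows," that mark the deflation steady state as fragile.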

The relevant policy issue came up in 2010 in connection with Kocherlakota's comments about interest rates, and I got George to make a video in Sept. 2010 that makes the implied monetary policy point.

I think it would be a step forward if the EER paper helped Williamson and others who have not understood the disequilibrium stability point. The full EER reference is Evans, George, Eran Guse, and Seppo Honkapohja, "Liquidity Traps, Learning and Stagnation," European Economic Review, Vol. 52, 2008, pp. 1438-1463.

Tuesday, November 05, 2013

Do People Have Rational Expectations?

New column:

Do People Have Rational Expectations?, by Mark Thoma

Not always, and economic models need to take this into account.

Saturday, October 12, 2013

'Nominal Wage Rigidity in Macro: An Example of Methodological Failure'

Simon Wren-Lewis:

Nominal wage rigidity in macro: an example of methodological failure: This post develops a point made by Bryan Caplan (HT MT). I have two stock complaints about the dominance of the microfoundations approach in macro. Neither implies that the microfoundations approach is ‘fundamentally flawed’ or should be abandoned: I still learn useful things from building DSGE models. My first complaint is that too many economists follow what I call the microfoundations purist position: if it cannot be microfounded, it should not be in your model. Perhaps a better way of putting it is that they only model what they can microfound, not what they see. This corresponds to a standard method of rejecting an innovative macro paper: the innovation is ‘ad hoc’.

My second complaint is that the microfoundations used by macroeconomists are so out of date. Behavioural economics just does not get a look in. A good and very important example comes from the reluctance of firms to cut nominal wages. There is overwhelming empirical evidence for this phenomenon (see for example here (HT Timothy Taylor) or the work of Jennifer Smith at Warwick). The behavioural reasons for this are explored in detail in this book by Truman Bewley, which Bryan Caplan discusses here. Both money illusion and the importance of workforce morale are now well accepted ideas in behavioural economics.

Yet debates among macroeconomists about whether and why wages are sticky go on. ...

While we can debate why this is at the level of general methodology, the importance of this particular example to current policy is huge. Many have argued that the failure of inflation to fall further in the recession is evidence that the output gap is not that large. As Paul Krugman in particular has repeatedly suggested, the reluctance of workers or firms to cut nominal wages may mean that inflation could be much more sticky at very low levels, so the current behavior of inflation is not inconsistent with a large output gap. ... Yet this is hardly a new discovery, so why is macro having to rediscover these basic empirical truths? ...

He goes on to give an example of why this matters (failure to incorporate downward nominal wage rigidity caused policymakers to underestimate the size of the output gap by a large margin, and that led to a suboptimal policy response).
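A minimal sketch of that mechanism, assuming notional (frictionless) wage changes are normally distributed and firms simply refuse to cut; the numbers are illustrative only:

```python
import numpy as np

rng = np.random.default_rng(1)

# Notional wage changes firms would choose absent any floor: in a deep slump
# the desired mean change might be -2%, with 3% dispersion (illustrative).
notional = rng.normal(loc=-2.0, scale=3.0, size=100_000)

# Downward nominal rigidity: cuts don't happen, so changes are floored at zero.
actual = np.maximum(notional, 0.0)

print(f"mean notional change: {notional.mean():+.2f}%")  # about -2.0%
print(f"mean actual change:   {actual.mean():+.2f}%")    # positive despite the slump
```

Measured wage (and hence price) inflation stays positive even though the notional mean is negative, so stubbornly positive inflation is consistent with a large output gap, which is how ignoring the rigidity leads to underestimating that gap.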

Time for me to catch a plane ...

Tuesday, August 27, 2013

The Real Trouble With Economics: Sociology

Paul Krugman:

The Real Trouble With Economics: I’m a bit behind the curve in commenting on the Rosenberg-Curtain piece on economics as a non-science. What do I think of their thesis?

Well, I’m sorry to say that they’ve gotten it almost all wrong. Only “almost”: they’re entirely right that economics isn’t behaving like a science, and economists – macroeconomists, anyway – definitely aren’t behaving like scientists. But they misunderstand the nature of the failure, and for that matter the nature of such successes as we’re having....

It’s true that few economists predicted the onset of crisis. Once crisis struck, however, basic macroeconomic models did a very good job in key respects — in particular, they did much better than people who relied on their intuitive feelings. The intuitionists — remember, Alan Greenspan was supposed to be famously able to sense the economy’s pulse — insisted that budget deficits would send interest rates soaring, that the expansion of the Fed’s balance sheet would be inflationary, that fiscal austerity would strengthen economies through “confidence”. Meanwhile, wonks who relied on suitably interpreted IS-LM confidently declared that all this intuition, based on experiences in a different environment, would prove wrong — and they were right. From my point of view, these past 5 years have been a triumph for and vindication of economic modeling.

Oh, and it would be a real tragedy if the takeaway from recent events becomes that you should listen to impressive-looking guys with good tailors who stroke their chins and sound wise, and ignore the nerds; the nerds have been mostly right, while the Very Serious People have been wrong every step of the way.

Yet obviously something is deeply wrong with economics. While economists using textbook macro models got things mostly and impressively right, many famous economists refused to use those models — in fact, they made it clear in discussion that they didn’t understand points that had been worked out generations ago. Moreover, it’s hard to find any economists who changed their minds when their predictions, say of sharply higher inflation, turned out wrong. ...

So, let’s grant that economics as practiced doesn’t look like a science. But that’s not because the subject is inherently unsuited to the scientific method. Sure, it’s highly imperfect — it’s a complex area, and our understanding is in its early stages. And sure, the economy itself changes over time, so that what was true 75 years ago may not be true today — although what really impresses you if you study macro, in particular, is the continuity, so that Bagehot and Wicksell and Irving Fisher and, of course, Keynes remain quite relevant today.

No, the problem lies not in the inherent unsuitability of economics for scientific thinking as in the sociology of the economics profession — a profession that somehow, at least in macro, has ceased rewarding research that produces successful predictions and rewards research that fits preconceptions and uses hard math instead.

Why has the sociology of economics gone so wrong? I’m not completely sure — and I’ll reserve my random thoughts for another occasion.

I talked about the problem with the sociology of economics a while back -- this is from a post in August 2009:

In The Economist, Robert Lucas responds to recent criticism of macroeconomics ("In Defense of the Dismal Science"). Here's my entry at Free Exchange's Robert Lucas Roundtable in response to his essay:

Lucas roundtable: Ask the right questions, by Mark Thoma: In his essay, Robert Lucas defends macroeconomics against the charge that it is "valueless, even harmful", and that the tools economists use are "spectacularly useless".

I agree that the analytical tools economists use are not the problem. We cannot fully understand how the economy works without employing models of some sort, and we cannot build coherent models without using analytic tools such as mathematics. Some of these tools are very complex, but there is nothing wrong with sophistication so long as sophistication itself does not become the main goal, and sophistication is not used as a barrier to entry into the theorist's club rather than an analytical device to understand the world.

But all the tools in the world are useless if we lack the imagination needed to build the right models. Models are built to answer specific questions. When a theorist builds a model, it is an attempt to highlight the features of the world the theorist believes are the most important for the question at hand. For example, a map is a model of the real world, and sometimes I want a road map to help me find my way to my destination, but other times I might need a map showing crop production, or a map showing underground pipes and electrical lines. It all depends on the question I want to answer. If we try to make one map that answers every possible question we could ever ask of maps, it would be so cluttered with detail it would be useless, so we necessarily abstract from real world detail in order to highlight the essential elements needed to answer the question we have posed. The same is true for macroeconomic models.

But we have to ask the right questions before we can build the right models.

The problem wasn't the tools that macroeconomists use, it was the questions that we asked. The major debates in macroeconomics had nothing to do with the possibility of bubbles causing a financial system meltdown. That's not to say that there weren't models here and there that touched upon these questions, but the main focus of macroeconomic research was elsewhere. ...

The interesting question to me, then, is why we failed to ask the right questions. For example,... why policymakers didn't take the possibility of a major meltdown seriously. Why didn't they deliver forecasts conditional on a crisis occurring? Why didn't they ask this question of the model? Why did we only get forecasts conditional on no crisis? And also, why was the main factor that allowed the crisis to spread, the interconnectedness of financial markets, missed?

It was because policymakers couldn't and didn't take seriously the possibility that a crisis and meltdown could occur. And even if they had seriously considered the possibility of a meltdown, the models most people were using were not built to be informative on this question. It simply wasn't a question that was taken seriously by the mainstream.

Why did we, for the most part, fail to ask the right questions? Was it lack of imagination, was it the sociology within the profession, the concentration of power over what research gets highlighted, the inadequacy of the tools we brought to the problem, the fact that nobody will ever be able to predict these types of events, or something else?

It wasn't the tools, and it wasn't lack of imagination. As Brad DeLong points out, the voices were there—he points to Michael Mussa for one—but those voices were not heard. Nobody listened even though some people did see it coming. So I am more inclined to cite the sociology within the profession or the concentration of power as the main factors that caused us to dismiss these voices.

And here I think that thought leaders such as Robert Lucas and others who openly ridiculed models they disagreed with have questions they should ask themselves (e.g. Mr Lucas saying "At research seminars, people don’t take Keynesian theorizing seriously anymore; the audience starts to whisper and giggle to one another", or more recently "These are kind of schlock economics"). When someone as notable and respected as Robert Lucas makes fun of an entire line of inquiry, it influences whole generations of economists away from asking certain types of questions, some of which turned out to be important. Why was it necessary for the major leaders in macroeconomics to shut down alternative lines of inquiry through ridicule and other means rather than simply citing evidence in support of their positions? What were they afraid of? The goal is to find the truth, not win fame and fortune by dominating the debate.

We need to take a close look at how the sociology of our profession led to an outcome where people were made to feel embarrassed for even asking certain types of questions. People will always be passionate in defense of their life's work, so it's not the rhetoric itself that is of concern, the problem comes when factors such as ideology or control of journals and other outlets for the dissemination of research stand in the way of promising alternative lines of inquiry.

I don't know for sure the extent to which the ability of a small number of people in the field to control the academic discourse led to a concentration of power that stood in the way of alternative lines of investigation, or the extent to which the ideology that market prices always tend to move toward their long-run equilibrium values caused us to ignore voices that foresaw the developing bubble and coming crisis. But something caused most of us to ask the wrong questions, and to dismiss the people who got it right, and I think one of our first orders of business is to understand how and why that happened.

I think the structure of journals, which concentrates power within the profession, also influences the sociology of the profession (and not in a good way).

Wednesday, August 14, 2013

'Never Channel the Ghosts of Dead Economists as a Substitute for Analysis'

 Nick Rowe checks in with David Laidler:

David Laidler goes meta on "What would Milton have said?": I tried to persuade David Laidler to join us in the econoblogosphere, especially given recent arguments about Milton Friedman. I have not yet succeeded, but David did say I could use these two paragraphs from his email:

However - re. the "what Milton would have said" debate  - When I was just getting started in the UK, I got thoroughly fed up with being told "what Maynard [Keynes] would have said" -- always apparently that the arguments of people like me were nonsense and therefore didn't have to be addressed in substance. I took a vow then never to channel the ghosts of dead economists as a substitute for analysis, and still regard it as binding!
MF was a big supporter of QE for Japan at the end of the '90s. I know that, because one of his clearest expressions of the view was in response to a question I put to him on a video link at a BofC conference. But so was Allan Meltzer at that time, and he is now (a) virulently opposed to QE for the US and (b) on the record (New York Times, Nov. 4th, 2010, "Milton Friedman vs. the Fed") as being sure that Milton would have agreed with him. In my personal view, (a) demonstrates that even as wise an economist as Meltzer can sometimes give dangerous policy advice, and (b) shows that he knows how to deploy pure speculation to make a rhetorical splash when he does so. Who could possibly know what Milton would have said? He isn't here.

David Laidler is probably the person best qualified to answer the question "What would Milton have said?", and that's his answer.

Speaking of Meltzer and substitutes for analysis, his last op-ed warns yet again about inflation. Mike Konczal responds:

Denialism and Bad Faith in Policy Arguments, by Mike Konczal: Here’s the thing about Allan Meltzer: he knows. Or at least he should know. It’s tough to remember that he knows when he writes editorials like his latest, "When Inflation Doves Cry." This is a mess of an editorial, a confused argument about why huge inflation is around the corner. “Instead of continuing along this futile path, the Fed should end its open-ended QE3 now... Those who believe that inflation will remain low should look more thoroughly and think more clearly.”
But he knows. Because here’s Meltzer in 1999 with "A Policy for Japanese Recovery": “Monetary expansion and devaluation is a much better solution. An announcement by the Bank of Japan and the government that the aim of policy is to prevent deflation and restore growth by providing enough money to raise asset prices would change beliefs and anticipations.”
He knows that there’s an actual debate, with people who are “thinking clearly,” about monetary policy at the zero lower bound as a result of Japan. He participated in it. So he must have been aware of Ben Bernanke, Paul Krugman, Milton Friedman, Michael Woodford, and Lars Svensson all also debating it at the same time. But now he’s forgotten it. In fact, his arguments for Japan are the exact opposite of what they are now for the United States. ...
The problem here isn’t that Meltzer may have changed his mind on his advice for Japan. If that’s the case, I’d love to read about what led to that change. The problem is one of denialism, where the person refuses to acknowledge the actually existing debate, and instead pantomimes a debate with a shadow. It involves the idea of a straw man, but sometimes it’s simply not engaging at all. For Meltzer, the extensive debate about monetary policy at the zero lower bound is simply excised from the conversation, and people who only read him will have no clue that it was ever there.
There’s also another dimension that I think is even more important, which is whether or not the argument, conclusions, or suggestions are in good faith. ...

Tuesday, August 13, 2013

Friedman's Legacy: The New Monetarists' View

I guess we should give the New Monetarists a chance to weigh in on Milton Friedman's legacy and influence (their name -- New Monetarists -- should give you some idea where this is headed...I cut the specific arguments short, but they can be found at the original post):

Friedman's Legacy, by Stephen Williamson, New Monetarist Economics: I'm not sure why, but there has been a lot of blogosphere writing on Milton Friedman recently... Randy Wright once convinced me that we should call ourselves New Monetarists, and we wrote a couple of papers (this one, and this one) in which we try to get a grip on what that means. As New Monetarists, we think we have something to say about Friedman.

We can find plenty of faults in Friedman's ideas, but those ideas - reflected in Friedman's theoretical and empirical work - are deeply embedded in much of what we do as economists in the 21st century. By modern standards, Friedman was a crude economic theorist, but he used the simple tools he had available to develop deep ideas that were later fleshed out in fully-articulated economic models. His empirical work was highly influential and serves as a key reference point for some sub-fields in economics. Some examples:

1. Permanent Income Theory...

2. The Friedman rule: Don't confuse this with the constant money growth rule, which comes from "The Role of Monetary Policy." The "Friedman rule" is the policy rule in the "Optimum Quantity of Money" essay. Basically, the nominal interest rate reflects a distortion. Eliminating that distortion requires reducing the nominal interest rate to zero in all states of the world, and that's what monetary policy should be aimed at doing... We can think of plenty of good reasons why optimal monetary policy could take us away from the Friedman rule in practice, but whenever someone makes an argument for some monetary policy rule, we have to first ask the question: why isn't that rule the Friedman rule? The Friedman rule is fundamental in monetary theory.

3. Monetary history: Friedman and Schwartz's "Monetary History of the United States" was monumental. ...

4. Policy rules: The rule that Friedman wanted central banks to follow was not the Friedman rule, but a constant-money-growth rule... Friedman was successful in getting the rule adopted by central banks in the 1970s and 1980s, but the rule was a practical failure, for reasons that are well-understood. But Friedman got macroeconomists and policymakers thinking about policy rules and how they work. Out of that thinking came ideas about central bank commitment, Taylor rules, inflation targeting, nominal GDP targeting, thresholds, etc., that form the basis for modern analysis of central bank policy.

5. Money and Inflation: ... Friedman played a key role in convincing economists and policymakers that central banks could, and should, control inflation. That seems as natural today as saying that rain falls from the sky, and that's part of Friedman's influence.

6. Narrow banking: I tend to think this was one of Friedman's bad ideas, but it's been very influential. Friedman advocated a 100% reserve requirement in "A Program for Monetary Stability." ...

7. Counterpoint to Keynesian economics: Some people seem to think that Friedman was actually a Keynesian at heart, but he sure got on Tobin's nerves. Criticism is important - it helps to prevent and root out lazy science. Old Keynesian economics was probably much better - e.g. there would have been no "neoclassical synthesis" - because of Friedman.

If anyone wants to argue that Friedman is now unimportant for modern economics, that's like saying Bob Dylan is unimportant for modern music. Today, Bob Dylan is quite willing to climb on a stage and perform with a world-class group of musicians - but it's truly pathetic. Nevertheless, Bob Dylan doesn't get booed off the stage today, because people recognize his importance. In the 1960s, he got people riled up, everyone paid attention, and the world is much different today than it would have been if he had not done the work he did.
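For reference on item 2 in Williamson's list, the logic of the Friedman rule in its simplest form (my notation; a standard textbook statement, not a quote from the post):

```latex
i = r + \pi^e = 0 \;\Rightarrow\; \pi^e = -r,
```

that is, eliminating the opportunity cost of holding money requires steady deflation at the real rate of interest. Note the contrast with the constant-money-growth rule of item 4, or with a Taylor rule, both of which generically keep the nominal rate above zero.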

Wednesday, August 07, 2013

(1) Numerical Methods, (2) Pigou, Samuelson, Solow, & Friedman vs Keynes and Krugman

Robert Waldmann:

...Another thing, what about numerical methods?  Macro was totally taken over by computer simulations. This liberated it (so that anything could happen) but also ruined the fun. When computers were new and scary, simulation based macro was scary and high status. When everyone can do it, setting up a model and simulating just doesn't demonstrate brains as effectively as finding one of the two or three special cases with closed form solutions and then presenting them. Also simulating unrealistic models is really pointless. People end up staring at the computer output and trying to think up stories which explain what went on in the computer. If one is reduced to that, one might as well look at real data. Models which can't be solved don't clarify thought. Since they also don't fit the data, they are really truly madly useless.

And one more:

Pigou, Samuelson, Solow, & Friedman vs Keynes and Krugman: Thoma Bait
I might as well be honest. I am posting this here rather than at rjwaldmann.blogspot.com, because I think it is the sort of thing to which Mark Thoma links and my standing among the bears is based entirely on the fact that Thoma occasionally links to me.
I think that Pigou, Samuelson, Solow and Friedman all assumed that the marginal propensity to consume out of wealth must, on average, be higher for nominal creditors than for nominal debtors. I think this is a gross error which shows how the representative consumer (invented by Samuelson) had done devastating damage already by 1960.
The topic is the Pigou effect versus the liquidity trap. ...

Guess I should send you there to read it.

Saturday, June 29, 2013

'DSGE Models and Their Use in Monetary Policy'

Mike Dotsey at the Philadelphia Fed:

DSGE Models and Their Use in Monetary Policy: The past 10 years or so have witnessed the development of a new class of models that are proving useful for monetary policy: dynamic stochastic general equilibrium (DSGE) models. The pioneering central bank, in terms of using these models in the formulation of monetary policy, is the Sveriges Riksbank, the central bank of Sweden. Following in the Riksbank’s footsteps, a number of other central banks have incorporated DSGE models into the monetary policy process, among them the European Central Bank, Norges Bank (the Norwegian central bank), and the Federal Reserve.
This article will discuss the major features of DSGE models and why these models are useful to monetary policymakers. It will indicate the general way in which they are used in conjunction with other tools commonly employed by monetary policymakers. ...

Saturday, June 22, 2013

'Debased Economics'

I need a quick post today, so I'll turn to the most natural blogger I can think of, Paul Krugman:

Debased Economics: John Boehner’s remarks on recent financial events have attracted a lot of unfavorable comment, and they should. ... I mean, he’s the Speaker of the House at a time when economic issues are paramount; shouldn’t he have basic familiarity with simple economic terms?
But the main thing is that he’s clinging to a story about monetary policy that has been refuted by experience about as thoroughly as any economic doctrine of the past century. Ever since the Fed began trying to respond to the financial crisis, we’ve had dire warnings about looming inflationary disaster. When the GOP took the House, it promptly called Bernanke in to lecture him about debasing the dollar. Yet inflation has stayed low, and the dollar has remained strong — just as Keynesians said would happen.
Yet there hasn’t been a hint of rethinking from leading Republicans; as far as anyone can tell, they still get their monetary ideas from Atlas Shrugged.
Oh, and this is another reminder to the “market monetarists”, who think that they can be good conservatives while advocating aggressive monetary expansion to fight a depressed economy: sorry, but you have no political home. In fact, not only aren’t you making any headway with the politicians, even mainstream conservative economists like Taylor and Feldstein are finding ways to advocate tighter money despite low inflation and high unemployment. And if reality hasn’t dented this dingbat orthodoxy yet, it never will.

I'll be offline the rest of today ...

Sunday, June 02, 2013

The Aggregate Supply—Aggregate Demand Model in the Econ Blogosphere

Peter Dorman would like to know if he's wrong:

Why You Don’t See the Aggregate Supply—Aggregate Demand Model in the Econ Blogosphere: Introductory textbooks are supposed to give you simplified versions of the models that professionals use in their own work. The blogosphere is a realm where people from a range of backgrounds discuss current issues often using simplified concepts so everyone can be on the same page.
But while the dominant framework used in introductory macro textbooks is aggregate supply—aggregate demand (AS-AD), it is almost never mentioned in the econ blogs. My guess is that anyone who tried to make an argument about current macropolicy using an AS-AD diagram would just invite snickers. This is not true on the micro side, where it’s perfectly normal to make an argument with a standard issue, partial equilibrium supply and demand diagram. What’s going on here?
I’ve been writing the part of my textbook where I describe what happened in macro during the period from the mid 70s to the mid 00s, and part of the story is the rise of textbook AS-AD. Here’s the line I take:
The dominant macro model, now crystallized in DSGE, is much too complex for intro students. It is based on intertemporal optimization and general equilibrium theory. There is no possible way to explain it to students in their first exposure to economics. But the mainstream has rejected the old income-expenditure models that graced intro texts in the 1970s and were, in skeleton form, the basis for the forecasting models used back in those days. So what to do?
The solution has been to use AS-AD as a placeholder. It allows instructors to talk about both prices and quantities in a rough market context. By putting Y on one axis and P on another, you can locate any macroeconomic outcome in the upper-right quadrant. It gets students “thinking like economists”.
Unfortunately the model is unsound. If you dig into it you find contradictions that can’t be papered over. One example is that the AS curve depends on the idea that input prices for firms systematically lag output prices, but do you really want to argue the theoretical and empirical case for this? Or try the AD assumption that, even as the price level and real output in the economy go up or down, the money supply remains fixed.
That’s why AS-AD is simply a placeholder. It has no intrinsic value as an economic model. No one uses it for policy purposes. It can’t be found in the econ blogs. It’s not a stripped down version of DSGE. Its only role is to occupy student brain cells until the real work of macroeconomic instruction can begin in a more advanced course.
If I’m wrong I’d like to know before I cut off all lines of retreat.

This won't fully answer the question (many DSGE adherents deny the existence of something called an AD curve), but here are a few counterexamples. One from today (here), and two from the past (via Tim Duy here and here).

Update: Paul Krugman comments here.

Wednesday, May 29, 2013

'DSGE + Financial Frictions = Macro that Works?'

This is a brief follow-up to this post from Noah Smith (see this post for the abstract to the Marco Del Negro, Marc P. Giannoni, and Frank Schorfheide paper he discusses):

DSGE + financial frictions = macro that works?: In my last post, I wrote:

So far, we don't seem to have gotten a heck of a lot of a return from the massive amount of intellectual capital that we have invested in making, exploring, and applying [DSGE] models. In principle, though, there's no reason why they can't be useful.

One of the areas I cited was forecasting. In addition to the studies I cited by Refet Gurkaynak, many people have criticized macro models for missing the big recession of 2008Q4-2009. For example, in this blog post, Volker Wieland and Maik Wolters demonstrate how DSGE models failed to forecast the big recession, even after the financial crisis itself had happened...

This would seem to be a problem.

But it's worth it to note that, since the 2008 crisis, the macro profession does not seem to have dropped DSGE like a dirty dishrag. ... Why are they not abandoning DSGE? Many "sociological" explanations are possible, of course - herd behavior, sunk cost fallacy, hysteresis and heterogeneous human capital (i.e. DSGE may be all they know how to do), and so on. But there's also another possibility, which is that maybe DSGE models, augmented by financial frictions, really do have promise as a technology.

This is the position taken by Marco Del Negro, Marc P. Giannoni, and Frank Schorfheide of the New York Fed. In a 2013 working paper, they demonstrate that a certain DSGE model was able to forecast the big post-crisis recession.

The model they use is a combination of two existing models: 1) the famous and popular Smets-Wouters (2007) New Keynesian model that I discussed in my last post, and 2) the "financial accelerator" model of Bernanke, Gertler, and Gilchrist (1999). They find that this hybrid financial New Keynesian model is able to predict the recession pretty well as of 2008Q3! Check out these graphs (red lines are 2008Q3 forecasts, dotted black lines are real events):

I don't know about you, but to me that looks pretty darn good!
I don't want to downplay or pooh-pooh this result. I want to see this checked carefully, of course, with some tables that quantify the model's forecasting performance, including its long-term forecasting performance. I will need more convincing, as will the macroeconomics profession and the world at large. And forecasting is, of course, not the only purpose of macro models. But this does look really good, and I think it supports my statement that "in principle, there is no reason why [DSGEs] can't be useful." ...
However, I do have an observation to make. The Bernanke et al. (1999) financial-accelerator model has been around for quite a while. It was certainly around well before the 2008 crisis. And we had certainly had financial crises before, as had many other countries. Why was the Bernanke model not widely used to warn of the economic dangers of a financial crisis? Why was it not universally used for forecasting? Why are we only looking carefully at financial frictions after they blew a giant gaping hole in the world economy?
It seems to me that it must have to do with the scientific culture of macroeconomics. If macro as a whole had demanded good quantitative results from its models, then people would not have been satisfied with the pre-crisis finance-less New Keynesian models, or with the RBC models before them. They would have said "This approach might work, but it's not working yet, let's keep changing things to see what does work." Of course, some people said this, but apparently not enough.
Instead, my guess is that many people in the macro field were probably content to use DSGE models for storytelling purposes, and had little hope that the models could ever really forecast the actual economy. With low expectations, people didn't push to improve the existing models as hard as they might have. But that is just my guess; I wasn't really around.
So to people who want to throw DSGE in the dustbin of history, I say: You might want to rethink that. But to people who view the del Negro paper as a vindication of modern macro theory, I say: Why didn't we do this back in 2007? And are we condemned to "always fight the last war"?

My take on why these models weren't used is a bit different.

My argument all along has been that we had the tools and models to explain what happened, but we didn't understand that this particular combination of models -- standard DSGE augmented by financial frictions -- was the important model to use. As I'll note below, part of the reason was empirical -- the evidence did matter (though it was not interpreted correctly) -- but the bigger problem was that our arrogance caused us to overlook the important questions.

There are many, many "modules" we can plug into a model to make it do various things. Need to propagate a shock, i.e. make it persist over time? Toss in an adjustment cost of some sort (there are other ways to do this as well). Do you need changes in monetary policy to affect real output? Insert a price, wage, or information friction. And so on.

Unfortunately, adding every possible complication to make one grand model that explains everything is not possible -- it's far too hard and complex. Instead, depending upon the questions we ask, we put these pieces together in particular ways to isolate the important relationships, and ignore the more trivial ones. This is the art of model building: to isolate what is important and provide insight into the question of interest.

We could have put the model described above together before the crisis, all of the pieces were there, and some people did things along these lines. But this was not the model most people used. Why? Because we didn't think the question was important. We didn't think that financial frictions were an important feature of modern business cycles because technology and deregulation had mostly solved this problem. If the banking system couldn't collapse, why build and emphasize models that say it will? (The empirical evidence for the financial frictions channel was a bit wobbly, and that was also part of the reason these models were not emphasized. But that evidence was based upon normal times, not deep recessions, and it didn't tell us as much as we thought about the usefulness of models that incorporate financial frictions.)

Ex-post, it's easy to look back and say aha -- this was the model that would have worked. Ex-ante, the problem is much harder. Will the next big recession be driven by a financial collapse? If so, then a model like this might be useful. But what if the shock comes from some other source? Is that shock in the model? When the time comes, will we be asking the right questions, and hence building models that can help to answer them, or will we be focused on the wrong thing -- fighting the last war? We have the tools and techniques to build all sorts of models, but they won't do us much good if we aren't asking the right questions.

How do we do that? We must have a strong sense of history, I think: at a minimum, we should be able to look back, understand how various economic downturns happened, and be sure those "modules" are in the baseline model. We also need the humility to understand that we probably haven't progressed so much that it (e.g. a financial collapse) can't happen again. History alone is not enough, of course -- new things can always happen, things where history provides little guidance -- but we should at least incorporate things we know can be problematic.

It wasn't our tools and techniques that failed us prior to the Great Recession. It was our arrogance, our belief that we had solved the problem of financial meltdowns through financial innovation, deregulation, and the like that closed our eyes to the important questions we should have been asking. We are asking them now, but is that enough? What else should we be asking?

'Inflation in the Great Recession and New Keynesian Models'

DSGE models are "surprisingly accurate":

Inflation in the Great Recession and New Keynesian Models, by Marco Del Negro, Marc P. Giannoni, and Frank Schorfheide: It has been argued that existing DSGE models cannot properly account for the evolution of key macroeconomic variables during and following the recent Great Recession, and that models in which inflation depends on economic slack cannot explain the recent muted behavior of inflation, given the sharp drop in output that occurred in 2008-09. In this paper, we use a standard DSGE model available prior to the recent crisis and estimated with data up to the third quarter of 2008 to explain the behavior of key macroeconomic variables since the crisis. We show that as soon as the financial stress jumped in the fourth quarter of 2008, the model successfully predicts a sharp contraction in economic activity along with a modest and more protracted decline in inflation. The model does so even though inflation remains very dependent on the evolution of both economic activity and monetary policy. We conclude that while the model considered does not capture all short-term fluctuations in key macroeconomic variables, it has proven surprisingly accurate during the recent crisis and the subsequent recovery. [pdf]

Saturday, May 25, 2013

'The Hangover Theory'

Robert Waldmann's comments on the response to Michael Kinsley remind me of this old article from Paul Krugman (I've posted this before, but it seems like a good time to post it again -- it was written in 1998 and it foreshadows/debunks many of the bad arguments used to justify austerity, etc.):

The Hangover Theory: A few weeks ago, a journalist devoted a substantial part of a profile of yours truly to my failure to pay due attention to the "Austrian theory" of the business cycle--a theory that I regard as being about as worthy of serious study as the phlogiston theory of fire. Oh well. But the incident set me thinking--not so much about that particular theory as about the general worldview behind it. Call it the overinvestment theory of recessions, or "liquidationism," or just call it the "hangover theory." It is the idea that slumps are the price we pay for booms, that the suffering the economy experiences during a recession is a necessary punishment for the excesses of the previous expansion.
The hangover theory is perversely seductive--not because it offers an easy way out, but because it doesn't. It turns the wiggles on our charts into a morality play, a tale of hubris and downfall. And it offers adherents the special pleasure of dispensing painful advice with a clear conscience, secure in the belief that they are not heartless but merely practicing tough love.
Powerful as these seductions may be, they must be resisted--for the hangover theory is disastrously wrongheaded. Recessions are not necessary consequences of booms. They can and should be fought, not with austerity but with liberality--with policies that encourage people to spend more, not less. Nor is this merely an academic argument: The hangover theory can do real harm. Liquidationist views played an important role in the spread of the Great Depression--with Austrian theorists such as Friedrich von Hayek and Joseph Schumpeter strenuously arguing, in the very depths of that depression, against any attempt to restore "sham" prosperity by expanding credit and the money supply. And these same views are doing their bit to inhibit recovery in the world's depressed economies at this very moment.
The many variants of the hangover theory all go something like this: In the beginning, an investment boom gets out of hand. Maybe excessive money creation or reckless bank lending drives it, maybe it is simply a matter of irrational exuberance on the part of entrepreneurs. Whatever the reason, all that investment leads to the creation of too much capacity--of factories that cannot find markets, of office buildings that cannot find tenants. Since construction projects take time to complete, however, the boom can proceed for a while before its unsoundness becomes apparent. Eventually, however, reality strikes--investors go bust and investment spending collapses. The result is a slump whose depth is in proportion to the previous excesses. Moreover, that slump is part of the necessary healing process: The excess capacity gets worked off, prices and wages fall from their excessive boom levels, and only then is the economy ready to recover.
Except for that last bit about the virtues of recessions, this is not a bad story about investment cycles. Anyone who has watched the ups and downs of, say, Boston's real estate market over the past 20 years can tell you that episodes in which overoptimism and overbuilding are followed by a bleary-eyed morning after are very much a part of real life. But let's ask a seemingly silly question: Why should the ups and downs of investment demand lead to ups and downs in the economy as a whole? Don't say that it's obvious--although investment cycles clearly are associated with economywide recessions and recoveries in practice, a theory is supposed to explain observed correlations, not just assume them. And in fact the key to the Keynesian revolution in economic thought--a revolution that made hangover theory in general and Austrian theory in particular as obsolete as epicycles--was John Maynard Keynes' realization that the crucial question was not why investment demand sometimes declines, but why such declines cause the whole economy to slump.
Here's the problem: As a matter of simple arithmetic, total spending in the economy is necessarily equal to total income (every sale is also a purchase, and vice versa). So if people decide to spend less on investment goods, doesn't that mean that they must be deciding to spend more on consumption goods--implying that an investment slump should always be accompanied by a corresponding consumption boom? And if so why should there be a rise in unemployment?
Most modern hangover theorists probably don't even realize this is a problem for their story. Nor did those supposedly deep Austrian theorists answer the riddle. The best that von Hayek or Schumpeter could come up with was the vague suggestion that unemployment was a frictional problem created as the economy transferred workers from a bloated investment goods sector back to the production of consumer goods. (Hence their opposition to any attempt to increase demand: This would leave "part of the work of depression undone," since mass unemployment was part of the process of "adapting the structure of production.") But in that case, why doesn't the investment boom--which presumably requires a transfer of workers in the opposite direction--also generate mass unemployment? And anyway, this story bears little resemblance to what actually happens in a recession, when every industry--not just the investment sector--normally contracts.
As is so often the case in economics (or for that matter in any intellectual endeavor), the explanation of how recessions can happen, though arrived at only after an epic intellectual journey, turns out to be extremely simple. A recession happens when, for whatever reason, a large part of the private sector tries to increase its cash reserves at the same time. Yet, for all its simplicity, the insight that a slump is about an excess demand for money makes nonsense of the whole hangover theory. For if the problem is that collectively people want to hold more money than there is in circulation, why not simply increase the supply of money? You may tell me that it's not that simple, that during the previous boom businessmen made bad investments and banks made bad loans. Well, fine. Junk the bad investments and write off the bad loans. Why should this require that perfectly good productive capacity be left idle?
The hangover theory, then, turns out to be intellectually incoherent; nobody has managed to explain why bad investments in the past require the unemployment of good workers in the present. Yet the theory has powerful emotional appeal. Usually that appeal is strongest for conservatives, who can't stand the thought that positive action by governments (let alone--horrors!--printing money) can ever be a good idea. Some libertarians extol the Austrian theory, not because they have really thought that theory through, but because they feel the need for some prestigious alternative to the perceived statist implications of Keynesianism. And some people probably are attracted to Austrianism because they imagine that it devalues the intellectual pretensions of economics professors. But moderates and liberals are not immune to the theory's seductive charms--especially when it gives them a chance to lecture others on their failings.
Few Western commentators have resisted the temptation to turn Asia's economic woes into an occasion for moralizing on the region's past sins. How many articles have you read blaming Japan's current malaise on the excesses of the "bubble economy" of the 1980s--even though that bubble burst almost a decade ago? How many editorials have you seen warning that credit expansion in Korea or Malaysia is a terrible idea, because after all it was excessive credit expansion that created the problem in the first place?
And the Asians--the Japanese in particular--take such strictures seriously. One often hears that Japan is adrift because its politicians refuse to make hard choices, to take on vested interests. The truth is that the Japanese have been remarkably willing to make hard choices, such as raising taxes sharply in 1997. Indeed, they are in trouble partly because they insist on making hard choices, when what the economy really needs is to take the easy way out. The Great Depression happened largely because policy-makers imagined that austerity was the way to fight a recession; the not-so-great depression that has enveloped much of Asia has been worsened by the same instinct. Keynes had it right: Often, if not always, "it is ideas, not vested interests, that are dangerous for good or evil."

Thursday, May 16, 2013

New Research in Economics: Robust Stability of Monetary Policy Rules under Adaptive Learning

I have had several responses to my offer to post write-ups of new research that I'll be posting over the next few days (thanks!), but I thought I'd start with a forthcoming paper from a former graduate student here at the University of Oregon, Eric Gaus:

Robust Stability of Monetary Policy Rules under Adaptive Learning, by Eric Gaus, forthcoming, Southern Economic Journal: Adaptive learning has been used to assess the viability of a variety of monetary policy rules. If agents using simple econometric forecasts "learn" the rational expectations solution of a theoretical model, then researchers conclude the monetary policy rule is a viable alternative. For example, Duffy and Xiao (2007) find that if monetary policy makers minimize a loss function of inflation, interest rates, and the output gap, then agents in a simple three-equation model of the macroeconomy learn the rational expectations solution. On the other hand, Evans and Honkapohja (2009) demonstrate that this may not always be the case. The key difference between the two papers is an assumption about what information the agents of the model have access to. Duffy and Xiao (2007) assume that monetary policy makers have access to contemporaneous variables, that is, they adjust interest rates to current inflation and output. Evans and Honkapohja (2009) instead assume that agents can only form expectations of contemporaneous variables. Another difference between these two papers is that in Duffy and Xiao (2007) agents use all the past data they have access to, whereas in Evans and Honkapohja (2009) agents use a fixed window of data.
This paper examines several different monetary policy rules under a learning mechanism that changes how much data agents are using. It turns out that as long as the monetary policy makers are able to see contemporaneous endogenous variables (output and inflation) then the Duffy and Xiao (2007) results hold. However, if agents and policy makers use expectations of current variables then many of the policy rules are not "robustly stable" in the terminology of Evans and Honkapohja (2009).
A final result in the paper is that the switching learning mechanism can create unpredictable temporary deviations from rational expectations. This is a rather startling result since the source of the deviations is completely endogenous. The deviations appear in a model where there are no structural breaks or multiple equilibria or even an intention of generating such deviations. This result suggests that policymakers should be concerned with the potential that expectations, and expectations alone, can create exotic behavior that temporarily strays from the rational expectations equilibrium (REE).
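The mechanics behind results like these are easier to see with a toy example. Here is a minimal sketch -- my illustration, not the model in the paper -- contrasting agents who learn from all past data with agents who use a fixed window, in a simple self-referential economy where outcomes depend on expectations; every parameter value is invented:

```python
# A minimal sketch (not the paper's model) of adaptive learning in a toy
# self-referential economy: x_t = mu + alpha * E[x_t] + eps_t, alpha < 1.
# Under rational expectations, E[x] = mu / (1 - alpha).
# Agents estimate E[x] either from all past data (decreasing gain, as in
# Duffy-Xiao) or from a fixed rolling window (as in Evans-Honkapohja).
import numpy as np

rng = np.random.default_rng(0)
mu, alpha, sigma, T, window = 2.0, 0.5, 1.0, 2000, 50
ree = mu / (1 - alpha)  # rational expectations equilibrium value

def simulate(fixed_window: bool) -> np.ndarray:
    beliefs = np.empty(T)
    history = [0.0]  # arbitrary initial observation
    for t in range(T):
        data = history[-window:] if fixed_window else history
        expectation = float(np.mean(data))   # least-squares estimate of E[x]
        x = mu + alpha * expectation + sigma * rng.normal()
        history.append(x)
        beliefs[t] = expectation
    return beliefs

all_past = simulate(fixed_window=False)
rolling = simulate(fixed_window=True)
print(f"REE value:                 {ree:.3f}")
print(f"all-past-data belief, end: {all_past[-1]:.3f}")  # settles near REE
print(f"fixed-window belief, end:  {rolling[-1]:.3f}")   # keeps wandering
```

The decreasing-gain (all past data) beliefs settle down near the rational expectations value, while the fixed-window beliefs keep fluctuating with the most recent draws -- the flavor of the instability results described above.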

Wednesday, May 08, 2013

What is Wrong (and Right) in Economics?

Dani Rodrik:

What is wrong (and right) in economics?, by Dani Rodrik: The World Economics Association recently interviewed me on the state of economics, inquiring about my views on pluralism in the profession. You can find the result on the WEA's newsletter here (the interview starts on page 9). I reproduce it below. ...

Tuesday, May 07, 2013

Seven Myths about Keynesian Economics

The recent blow-up surrounding Niall Ferguson's comments on Keynes' concern for long-run issues prompted my latest column:

Seven Myths about Keynesian Economics

The claim that Keynesians are indifferent to the long-run is one of many myths about Keynesian economics.

Saturday, May 04, 2013

'Keynes, Keynesians, the Long Run, and Fiscal Policy'

Paul Krugman on how to tell when someone is "pretending to be an authority on economics":

Keynes, Keynesians, the Long Run, and Fiscal Policy: One dead giveaway that someone pretending to be an authority on economics is in fact faking it is misuse of the famous Keynes line about the long run. Here’s the actual quote:

But this long run is a misleading guide to current affairs. In the long run we are all dead. Economists set themselves too easy, too useless a task if in tempestuous seasons they can only tell us that when the storm is long past the ocean is flat again.

As I’ve written before, Keynes’s point here is that economic models are incomplete, suspect, and not much use if they can’t explain what happens year to year, but can only tell you where things will supposedly end up after a lot of time has passed. It’s an appeal for better analysis, not for ignoring the future; and anyone who tries to make it into some kind of moral indictment of Keynesian thought has forfeited any right to be taken seriously. ...

I thought the target of these remarks had forfeited any right to be taken seriously long ago (except, of course and unfortunately, by Very Serious People). [Krugman goes on to tackle several other topics.]

'Microfounded Social Welfare Functions'

This is very wonkish, but it's also very important. The issue is whether DSGE models used for policy analysis can properly capture the relative costs of deviations of inflation and output from target. Simon Wren-Lewis argues -- and I very much agree -- that the standard models are not a very good guide to policy because they vastly overstate the cost of inflation relative to the cost of output (and employment) fluctuations (see the original for the full argument and links to source material):

Microfounded Social Welfare Functions, by Simon Wren-Lewis: More on Beauty and Truth for economists

... Woodford’s derivation of social welfare functions from the representative agent’s utility ... can tell us some things that are interesting. But can it provide us with a realistic (as opposed to model consistent) social welfare function that should guide many monetary and fiscal policy decisions? Absolutely not. As I noted in that recent post, these derived social welfare functions typically tell you that deviations of inflation from target are much more important than output gaps - ten or twenty times more important. If this were really the case, and given the uncertainties surrounding measurement of the output gap, it would be tempting to make central banks pure (not flexible) inflation targeters - what Mervyn King calls inflation nutters.

Where does this result come from? ... Many DSGE models use sticky prices and not sticky wages, so labour markets clear. They tend, partly as a result, to assume labour supply is elastic. Gaps between the marginal product of labor and the marginal rate of substitution between consumption and leisure become small. Canzoneri and coauthors show here how sticky wages and more inelastic labour supply will increase the cost of output fluctuations... Canzoneri et al argue that labour supply inelasticity is more consistent with micro evidence.

Just as important, I would suggest, is heterogeneity. The labour supply of many agents is largely unaffected by recessions, while others lose their jobs and become unemployed. Now this will matter in ways that models in principle can quantify. Large losses for a few are more costly than the same aggregate loss equally spread. Yet I believe even this would not come near to describing the unhappiness the unemployed actually feel (see Chris Dillow here). For many there is a psychological/social cost to unemployment that our standard models just do not capture. Other evidence tends to corroborate this happiness data.

So there are two general points here. First, simplifications made to ensure DSGE analysis remains tractable tend to diminish the importance of output gap fluctuations. Second, the simple microfoundations we use are not very good at capturing how people feel about being unemployed. What this implies is that conclusions about inflation/output trade-offs, or the cost of business cycles, derived from microfounded social welfare functions in DSGE models will be highly suspect, and almost certainly biased.

Now I do not want to use this as a stick to beat up DSGE models, because often there is a simple and straightforward solution. Just recalculate any results using an alternative social welfare function where the cost of output gaps is equal to the cost of inflation. For many questions addressed by these models the results will be robust, which is worth knowing. If they are not, that is worth knowing too. So it’s a virtually costless thing to do, with clear benefits.
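To make that robustness check concrete, here is a minimal sketch, with invented numbers, of how the ranking of two hypothetical policies can flip when the relative weight on the output gap moves from a typical microfounded value to equal weighting. Nothing below comes from the post itself except the idea of recalculating under both weights:

```python
# A minimal sketch of the robustness check suggested above: evaluate the
# same simulated outcomes under a microfounded-style welfare function
# (inflation weighted ~20x the output gap) and under an "ad hoc" one with
# equal weights. All numbers are invented for illustration only.
import numpy as np

def loss(pi, y, lam):
    """Quadratic loss: mean of pi_t^2 + lam * y_t^2."""
    return np.mean(pi**2 + lam * y**2)

rng = np.random.default_rng(1)
T = 10_000
# Hypothetical policy A: near-zero inflation, volatile output gap.
pi_a, y_a = 0.2 * rng.normal(size=T), 3.0 * rng.normal(size=T)
# Hypothetical policy B: moderate inflation volatility, stable output.
pi_b, y_b = 1.0 * rng.normal(size=T), 1.0 * rng.normal(size=T)

for lam, label in [(0.05, "microfounded-style (lam=0.05)"),
                   (1.0,  "equal weights      (lam=1.00)")]:
    a, b = loss(pi_a, y_a, lam), loss(pi_b, y_b, lam)
    better = "A" if a < b else "B"
    print(f"{label}: loss A={a:.2f}, loss B={b:.2f} -> prefer {better}")
# With lam=0.05 the "inflation nutter" policy A wins; with lam=1 it loses.
```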

Yet it is rarely done. I suspect the reason why is that a referee would say ‘but that ad hoc (aka more realistic) social welfare function is inconsistent with the rest of your model. Your complete model becomes internally inconsistent, and therefore no longer properly microfounded.’ This is so wrong. It is modelling what we can microfound, rather than modelling what we can see. Let me quote Caballero...

“[This suggests a discipline that] has become so mesmerized with its own internal logic that it has begun to confuse the precision it has achieved about its own world with the precision that it has about the real one.”

As I have argued before (post here, article here), those using microfoundations should be pragmatic about the need to sometimes depart from those microfoundations when there are clear reasons for doing so. (For an example of this pragmatic approach to social welfare functions in the context of US monetary policy, see this paper by Chen, Kirsanova and Leith.) The microfoundation purist position is a snake charmer, and has to be faced down.



Friday, May 03, 2013

Romer and Stiglitz on the State of Macroeconomics

Two essays on the state of macroeconomics:

First, David Romer argues our recent troubles are an extreme version of an ongoing problem:

... As I will describe, my reading of the evidence is that the events of the past few years are not an aberration, but just the most extreme manifestation of a broader pattern. And the relatively modest changes of the type discussed at the conference, and that in some cases policymakers are putting into place, are helpful but unlikely to be enough to prevent future financial shocks from inflicting large economic harms.
Thus, I believe we should be asking whether there are deeper reforms that might have a large effect on the size of the shocks emanating from the financial sector, or on the ability of the economy to withstand those shocks. But there has been relatively little serious consideration of ideas for such reforms, not just at this conference but in the broader academic and policy communities. ...

He goes on to describe some changes he'd like to see, for example:

I was disappointed to see little consideration of much larger financial reforms. Let me give four examples of possible types of larger reforms:

  • There were occasional mentions of very large capital requirements. For example, Allan Meltzer noted that at one time 25 percent capital was common for banks. Should we be moving to such a system?
  • Amir Sufi and Adair Turner talked about the features of debt contracts that make them inherently prone to instability. Should we be working aggressively to promote more indexation of debt contracts, more equity-like contracts, and so on?
  • We can see the costs that the modern financial system has imposed on the real economy. It is not immediately clear that the benefits of the financial innovations of recent decades have been on a scale that warrants those costs. Thus, might a much simpler, 1960s- or 1970s-style financial system be better than what we have now?
  • The fact that shocks emanating from the financial system sometimes impose large costs on the rest of the economy implies that there are negative externalities to some types of financial activities or financial structures, which suggests the possibility of Pigovian taxes.

So, should there be substantial taxes on certain aspects of the financial system? If so, what should be taxed – debt, leverage, size, other indicators of systemic risk, a combination, or something else altogether?

Larger-scale solutions on the macroeconomic side ...

After a long discussion, he concludes with:

After five years of catastrophic macroeconomic performance, “first steps and early lessons” – to quote the conference title – is not what we should be aiming for. Rather, we should be looking for solutions to the ongoing current crisis and strong measures to minimize the chances of anything similar happening again. I worry that the reforms we are focusing on are too small to do that, and that what is needed is a more fundamental rethinking of the design of our financial system and of our frameworks for macroeconomic policy.

Second, Joe Stiglitz:

In analyzing the most recent financial crisis, we can benefit somewhat from the misfortune of recent decades. The approximately 100 crises that have occurred during the last 30 years—as liberalization policies became dominant—have given us a wealth of experience and mountains of data. If we look over a 150 year period, we have an even richer data set.
With a century and a half of clear, detailed information on crisis after crisis, the burning question is not "How did this happen?" but "How did we ignore that long history, and think that we had solved the problems of the business cycle?" Believing that we had made big economic fluctuations a thing of the past took a remarkable amount of hubris....

In his lengthy essay, he goes on to discuss:

Markets are not stable, efficient, or self-correcting

  • The models that focused on exogenous shocks simply misled us—the majority of the really big shocks come from within the economy.
  • Economies are not self-correcting.

More than deleveraging, more than a balance sheet crisis: the need for structural transformation

  • The fact that things have often gone badly in the aftermath of a financial crisis doesn’t mean they must go badly.

Reforms that are, at best, half-way measures

  • The reforms undertaken so far have only tinkered at the edges.
  • The crisis has brought home the importance of financial regulation for macroeconomic stability.

Deficiencies in reforms and in modeling

  • The importance of credit
    • A focus on the provision of credit has neither been at the center of policy discourse nor of the standard macro-models.
    • There is also a lack of understanding of different kinds of finance.
  • Stability
  • Distribution

Policy Frameworks

  • Flawed models not only lead to flawed policies, but also to flawed policy frameworks.
  • Should monetary policy focus just on short term interest rates?
  • Price versus quantitative interventions
  • Tinbergen

Stiglitz ends with:

Take this chance to revolutionize flawed models
It should be clear that we could have done much more to prevent this crisis and to mitigate its effects. It should be clear too that we can do much more to prevent the next one. Still, through this conference and others like it, we are at least beginning to clearly identify the really big market failures, the big macroeconomic externalities, and the best policy interventions for achieving high growth, greater stability, and a better distribution of income.
To succeed, we must constantly remind ourselves that markets on their own are not going to solve these problems, and neither will a single intervention like short-term interest rates. Those facts have been proven time and again over the last century and a half.
And as daunting as the economic problems we now face are, acknowledging this will allow us to take advantage of the one big opportunity this period of economic trauma has afforded: namely, the chance to revolutionize our flawed models, and perhaps even exit from an interminable cycle of crises.

Tuesday, April 23, 2013

A New and Improved Macroeconomics

New column:

A New and Improved Macroeconomics, by Mark Thoma: Macroeconomics has not fared well in recent years. The failure of standard macroeconomic models during the financial crisis provided the first big blow to the profession, and the recent discovery of the errors and questionable assumptions in the work of Reinhart and Rogoff further undermined the faith that people have in our models and empirical methods.
What will it take for the macroeconomics profession to do better? ...

Wednesday, April 17, 2013

Empirical Methods and Progress in Macroeconomics

The blow-up over the Reinhart-Rogoff results reminds me of a point I’ve been meaning to make about our ability to use empirical methods to make progress in macroeconomics. This isn't about the computational mistakes that Reinhart and Rogoff made, though those are certainly important, especially in small samples; it's about the quantity and quality of the data we use to draw important conclusions in macroeconomics.

Everybody has been highly critical of theoretical macroeconomic models, DSGE models in particular, and for good reason. But the imaginative construction of theoretical models is not the biggest problem in macro – we can build reasonable models to explain just about anything. The biggest problem in macroeconomics is the inability of econometricians of all flavors (classical, Bayesian) to definitively choose one model over another, i.e. to sort between these imaginative constructions. We like to think of ourselves as scientists, but if data can’t settle our theoretical disputes – and it doesn’t appear that it can – then our claim to scientific validity has little or no merit.

There are many reasons for this. For example, the use of historical rather than “all else equal” laboratory/experimental data makes it difficult to figure out if a particular relationship we find in the data reveals an important truth rather than a chance run that mimics a causal relationship. If we could do repeated experiments, or compare data across countries (or other jurisdictions) without worrying about the “all else equal” assumption, we could perhaps sort this out. But, unfortunately, there are too many institutional differences and common shocks across countries to reliably treat each country as an independent, all else equal experiment. Without repeated experiments – with just one set of historical data for the US to rely upon – it is extraordinarily difficult to tell the difference between a spurious correlation and a true, noteworthy relationship in the data.
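To see how easily a single historical sample can mislead, here is a minimal sketch of the standard spurious-regression point, with simulated data: two completely independent random walks routinely look strongly correlated in a macro-sized sample.

```python
# A minimal sketch of the spurious-correlation problem: two completely
# independent random walks often appear strongly "related" in a single
# sample of macro-typical length.
import numpy as np

rng = np.random.default_rng(5)
n, reps = 200, 2000                    # ~50 years of quarterly data
big = 0
for _ in range(reps):
    x = rng.normal(size=n).cumsum()    # independent random walk 1
    y = rng.normal(size=n).cumsum()    # independent random walk 2
    if abs(np.corrcoef(x, y)[0, 1]) > 0.5:
        big += 1
print(f"|correlation| > 0.5 in {100 * big / reps:.0f}% of samples")
# A large fraction of draws show a "strong" relationship that isn't there.
```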

Even so, if we had a very, very long time-series for a single country, and if certain regularity conditions persisted over time (e.g. no structural change), we might be able to answer important theoretical and policy questions (if the same policy is tried again and again over time within a country, we can sort out the random and the systematic effects). Unfortunately, the time period covered by a typical data set in macroeconomics is relatively short (so that very few useful policy experiments are contained in the available data, e.g. there are very few data points telling us how the economy reacts to fiscal policy in deep recessions).

There is another problem with using historical as opposed to experimental data: testing theoretical models against data the researcher already knows about when the model is built. In this regard, when I was a new assistant professor, Milton Friedman presented some work at a conference that impressed me quite a bit. He resurrected a theoretical paper he had written 25 years earlier (his plucking model of aggregate fluctuations), and tested it against the data that had accumulated in the time since he had published the work. It’s not really fair to test a theory against historical macroeconomic data – we all know what the data say, and it would be foolish to build a model that is inconsistent with the historical data it was built to explain. Of course the model will fit the data; who would be impressed by that? But a test against data that the investigator could not have known about when the theory was formulated is a different story – those tests are meaningful (Friedman’s model passed the test using only the newer data).

As a young time-series econometrician struggling with data/degrees-of-freedom issues I found this encouraging. So what if in 1986 – when I finished graduate school – there were only 28 years of quarterly observations for macro variables (112 total observations; reliable data on money, which I almost always needed, doesn’t begin until 1959). By, say, the end of 2012 there would be almost double that amount (216 versus 112!!!). Asymptotic (plim-type) results here we come! (Switching to monthly data doesn’t help much, since it’s the span of the data – the distance between the beginning and the end of the sample – rather than the frequency at which the data are sampled that determines many of the “large-sample” results.)
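The span-versus-frequency point is easy to check by simulation. The sketch below uses an illustrative persistence parameter, not any particular macro series: it estimates the long-run mean of a persistent monthly process sampled quarterly and monthly over the same 28-year span, and quarterly over a 54-year span.

```python
# A minimal sketch of the span-vs-frequency point: for a persistent series,
# sampling the same 28-year span monthly instead of quarterly barely
# sharpens the estimate of the long-run mean, while extending the span to
# 54 years does. The persistence parameter is illustrative.
import numpy as np

rng = np.random.default_rng(2)
rho, reps = 0.97, 2000   # monthly AR(1) persistence, Monte Carlo draws

def sd_of_mean_estimate(years: int, monthly: bool) -> float:
    n = years * 12
    e = rng.normal(size=(reps, n))
    x = np.empty((reps, n))
    x[:, 0] = e[:, 0] / np.sqrt(1 - rho**2)   # start at stationary variance
    for t in range(1, n):
        x[:, t] = rho * x[:, t - 1] + e[:, t]
    sample = x if monthly else x[:, ::3]      # quarterly = every third month
    return sample.mean(axis=1).std()          # spread of the mean estimates

for years, monthly, label in [(28, False, "28 years, quarterly"),
                              (28, True,  "28 years, monthly  "),
                              (54, False, "54 years, quarterly")]:
    print(f"{label}: std. dev. of estimated mean = "
          f"{sd_of_mean_estimate(years, monthly):.3f}")
```

Monthly sampling over the same span barely changes the standard deviation of the estimate; adding years does.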

By today, I thought, I would have almost double the data I had back then, and that would improve the precision of tests quite a bit. I could also do what Friedman did: take really important older papers that give us results “everyone knows” and see if they hold up when tested against newer data.

It didn’t work out that way. There was a big change in the Fed’s operating procedure in the early 1980s, and because of this structural break 1984 is now a common starting point for empirical investigations (start dates can be anywhere in the 1979-84 range, though later dates are more common). Data before this period are discarded.

So, here we are 25 years or so later, and macroeconomists don’t have any more data at our disposal than we did when I was in graduate school. And if the structure of the economy keeps changing – as it will – the same will probably be true 25 years from now. We will either have to model the structural change explicitly (which isn’t easy, and attempts to model structural breaks often induce as much uncertainty as clarity), or continually discard historical data as time goes on (maybe big data, digital technology, theoretical advances, etc. will help?).

The point is that for a variety of reasons – the lack of experimental data, small data sets, and important structural change foremost among them – empirical macroeconomics is not able to definitively say which competing model of the economy best explains the data. There are some questions we’ve been able to address successfully with empirical methods, e.g., there has been a big change in views about the effectiveness of monetary policy over the last few decades driven by empirical work. But for the most part empirical macro has not been able to settle important policy questions. The debate over government spending multipliers is a good example. Theoretically the multiplier can take a range of values from small to large, and even though most theoretical models in use today say that the multiplier is large in deep recessions, ultimately this is an empirical issue. I think the preponderance of the empirical evidence shows that multipliers are, in fact, relatively large in deep recessions – but you can find whatever result you like and none of the results are sufficiently definitive to make this a fully settled issue.

I used to think that the accumulation of data, along with ever-improving empirical techniques, would eventually allow us to answer important theoretical and policy questions. I haven’t completely lost faith, but it’s hard to be satisfied with our progress to date. It’s even more disappointing to see researchers overlooking these well-known, obvious problems – for example, the lack of precision and the sensitivity to data errors that come with relying on just a few observations – to oversell their results.

Saturday, March 30, 2013

'The Price Is Wrong'

As noted below, it is a slow day, but this is well worth reading (there's quite a bit more in the original post):

The Price Is Wrong, by Paul Krugman: It’s a slow morning on the economic news front, as we wait for various euro shoes to drop, so I thought I’d share a meditation I’ve been having on the diagnosis and misdiagnosis of the Lesser Depression. ...
So, start with our big problem, which is mass unemployment. Basic supply and demand analysis says that ... prices are supposed to rise or fall to clear markets. So what’s with this apparent massive and persistent excess supply of labor? In general, market disequilibrium is a sign of prices out of whack... The big divide comes over the question of which price is wrong.
As I see it, the whole structural/classical/Austrian/supply-side/whatever side of this debate basically believes that the problem lies in the labor market. ... For some reason, they would argue, wages are too high... Some of them accept the notion that it’s because of downward nominal wage rigidity; more, I think, believe that workers are being encouraged to hold out for unsustainable wages by moocher-friendly programs like food stamps, unemployment benefits, disability insurance, and whatever.
As regular readers know, I find this prima facie absurd — it’s essentially the claim that soup kitchens caused the Great Depression. ...
So what’s the alternative view? It’s basically the notion that the interest rate is wrong — that given the overhang of debt and other factors depressing private demand, real interest rates would have to be deeply negative to match desired saving with desired investment at full employment. And real rates can’t go that negative because expected inflation is low and nominal rates can’t go below zero: we’re in a liquidity trap. ...
There are strong policy implications of these two views. If you think the problem is that wages are too high, your solution is that we need to be meaner to workers — cut off their unemployment insurance, make them hungry by cutting off food stamps, so they have no alternative but to do whatever it takes to get jobs, and wages fall. If you think the problem is the zero lower bound on interest rates, you think that this kind of solution wouldn’t just be cruel, it would make the economy worse, both because cutting workers’ incomes would reduce demand and because deflation would increase the burden of debt.
What my side of the debate would call for, instead, is a reduction in the real interest rate, if possible, by raising expected inflation; and failing that, more government spending to increase demand and put idle resources to work. ...
So yes, the price is wrong — but it’s a terrible, disastrous mistake to focus on the wrong wrong price.

Why should workers bear the burden of a recession they had nothing to do with causing? We should do our best to protect vulnerable workers and their families, and if that comes at the expense of those who were responsible for the boom and bust, I can live with it. (And no, the cause wasn't poor people trying to buy houses -- people on the right who are afraid they will be asked to pay for their poor choices, or who want to pursue an anti-government, don't-help-the-unfortunate-with-my-hard-earned-investment-income agenda, have tried to make this claim, and they are still at it, but it is "prima facie absurd".)

Friday, March 08, 2013

Measuring the Effect of the Zero Lower Bound on Medium- and Longer-Term Interest Rates

Watching John Williams give this paper:

Measuring the Effect of the Zero Lower Bound on Medium- and Longer-Term Interest Rates, by Eric T. Swanson and John C. Williams, Federal Reserve Bank of San Francisco, January 2013: Abstract The federal funds rate has been at the zero lower bound for over four years, since December 2008. According to many macroeconomic models, this should have greatly reduced the effectiveness of monetary policy and increased the efficacy of fiscal policy. However, standard macroeconomic theory also implies that private-sector decisions depend on the entire path of expected future short-term interest rates, not just the current level of the overnight rate. Thus, interest rates with a year or more to maturity are arguably more relevant for the economy, and it is unclear to what extent those yields have been constrained. In this paper, we measure the effects of the zero lower bound on interest rates of any maturity by estimating the time-varying high-frequency sensitivity of those interest rates to macroeconomic announcements relative to a benchmark period in which the zero bound was not a concern. We find that yields on Treasury securities with a year or more to maturity were surprisingly responsive to news throughout 2008–10, suggesting that monetary and fiscal policy were likely to have been about as effective as usual during this period. Only beginning in late 2011 does the sensitivity of these yields to news fall closer to zero. We offer two explanations for our findings: First, until late 2011, market participants expected the funds rate to lift off from zero within about four quarters, minimizing the effects of the zero bound on medium- and longer-term yields. Second, the Fed’s unconventional policy actions seem to have helped offset the effects of the zero bound on medium- and longer-term rates.
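For readers who want the estimation idea in miniature, here is a sketch of a rolling regression of daily yield changes on announcement surprises, with simulated data. This is not the authors' code, data, or sample -- just the general shape of the exercise:

```python
# A minimal sketch of the estimation idea in the abstract: regress daily
# yield changes on macro announcement surprises in rolling windows, and
# watch the sensitivity coefficient shrink when the zero bound binds.
# All data here are simulated; nothing comes from the paper itself.
import numpy as np

rng = np.random.default_rng(3)
T = 1500                       # announcement days
surprise = rng.normal(size=T)  # standardized data surprises
# True sensitivity: normal (1.0) early, attenuated (0.2) in the last third,
# mimicking a period in which the zero bound constrains yields.
beta_true = np.where(np.arange(T) < 1000, 1.0, 0.2)
dyield = beta_true * surprise + 0.5 * rng.normal(size=T)

window = 250
for start in range(0, T - window + 1, 250):
    s = slice(start, start + window)
    # OLS slope within the window (no intercept; surprises are mean zero)
    beta_hat = (surprise[s] @ dyield[s]) / (surprise[s] @ surprise[s])
    print(f"days {start:4d}-{start + window - 1:4d}: "
          f"estimated sensitivity = {beta_hat:.2f}")
```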

Tuesday, March 05, 2013

'Are Sticky Prices Costly? Evidence From The Stock Market'

There has been a debate in macroeconomics over whether sticky prices -- the key feature of New Keynesian models -- are actually as sticky as assumed, and how large the costs associated with price stickiness actually are. This paper finds "evidence that sticky prices are indeed costly":

Are Sticky Prices Costly? Evidence From The Stock Market, by Yuriy Gorodnichenko and Michael Weber, NBER Working Paper No. 18860, February 2013 [open link]: We propose a simple framework to assess the costs of nominal price adjustment using stock market returns. We document that, after monetary policy announcements, the conditional volatility rises more for firms with stickier prices than for firms with more flexible prices. This differential reaction is economically large as well as strikingly robust to a broad array of checks. These results suggest that menu costs---broadly defined to include physical costs of price adjustment, informational frictions, etc.---are an important factor for nominal price rigidity. We also show that our empirical results are qualitatively and, under plausible calibrations, quantitatively consistent with New Keynesian macroeconomic models where firms have heterogeneous price stickiness. Since our approach is valid for a wide variety of theoretical models and frictions preventing firms from price adjustment, we provide "model-free" evidence that sticky prices are indeed costly.
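Here is the paper's headline comparison in miniature, using simulated returns; the firm groups, magnitudes, and announcement dates are all invented:

```python
# A minimal sketch of the comparison described above, with simulated data:
# on monetary policy announcement days, return volatility rises more for
# firms with stickier prices. Everything here is illustrative.
import numpy as np

rng = np.random.default_rng(4)
days = 2500
fomc = rng.random(days) < 0.03            # ~8 announcement days a year
vol = {"flexible": (1.00, 1.10),          # (normal-day, FOMC-day) return sd
       "sticky":   (1.00, 1.60)}          # stickier prices -> bigger jump

for group, (sd_normal, sd_fomc) in vol.items():
    sd = np.where(fomc, sd_fomc, sd_normal)
    ret = sd * rng.normal(size=days)
    ratio = ret[fomc].std() / ret[~fomc].std()
    print(f"{group:8s}: FOMC-day / normal-day volatility = {ratio:.2f}")
```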

Tuesday, February 19, 2013

Big Data?

Paul Krugman:

Data, Stimulus, and Human Nature, by Paul Krugman: David Brooks writes about the limitations of Big Data, and makes some good points. But he goes astray, I think, when he touches on a subject near and dear to my own data-driven heart:

For example, we’ve had huge debates over the best economic stimulus, with mountains of data, and as far as I know not a single major player in this debate has been persuaded by data to switch sides.

Actually, he’s not quite right there, as I’ll explain in a minute. But it’s certainly true that neither stimulus advocates nor hard-line stimulus opponents have changed their positions. The question is, does this say something about the limits of data — or is it just a commentary on human nature, especially in a highly politicized environment?

For the truth is that there were some clear and very different predictions from each side of the debate... On these predictions, the data have spoken clearly; the problem is that people don’t want to hear..., and the fact that they don’t happen has nothing to do with the limitations of data. ...

That said, if you look at players in the macro debate who would not face huge personal and/or political penalties for admitting that they were wrong, you actually do see data having a considerable impact. Most notably, the IMF has responded to the actual experience of austerity by conceding that it was probably underestimating fiscal multipliers by a factor of about 3.

So yes, it has been disappointing to see so many people sticking to their positions on fiscal policy despite overwhelming evidence that those positions are wrong. But the fault lies not in our data, but in ourselves.

I'll just add that when it comes to the debate over the multiplier and the macroeconomic data used to try to settle the question, the term "Big Data" doesn't really apply. If we actually had "Big Data," we might be able to get somewhere, but as it stands -- with so little data and so few relevant historical episodes with similar policies -- precise answers are difficult to ascertain. And it's even worse than that. Let me point to something David Card said in an interview I posted yesterday:

I think many people are concerned that much of the research they see is biased and has a specific agenda in mind. Some of that concern arises because of the open-ended nature of economic research. To get results, people often have to make assumptions or tweak the data a little bit here or there, and if somebody has an agenda, they can inevitably push the results in one direction or another. Given that, I think that people have a legitimate concern about researchers who are essentially conducting advocacy work.

If we had the "Big Data" we need to answer these questions, this would be less of a problem. But with quarterly data from 1960 (when money data starts, you can go back to 1947 otherwise), or since 1982 (to avoid big structural changes and changes in Fed operating procedures), or even monthly data (if you don't need variables like GDP), there isn't as much precision as needed to resolve these questions (50 years of quarterly data is only 200 observations). There is also a lot of freedom to steer the results in a particular direction and we have to rely upon the integrity of researchers to avoid pushing a particular agenda. Most play it straight up, the answers are however they come out, but there are enough voices with agendas -- particularly, though not excusively, from think tanks, etc. -- to cloud the issues and make it difficult for the public to separate the honest work from the agenda based, one-sided, sometimes dishonest presentations. And there are also the issues noted above about people sticking to their positions, in their view honestly even if it is the result of data-mining, changing assumptions until the results come out "right," etc., because the data doesn't provide enough clarity to force them to give up their beliefs (in which they've invested considerable effort).

So I wish we had "Big Data," and not just a longer time-series of macro data, it would also be useful to re-run the economy hundreds or thousands of times, and evaluate monetary and fiscal policies across these experiments. With just one run of the economy, you can't always be sure that the uptick you see in historical data after, say, a tax cut is from the treatment, or just randomness (or driven by something else). With many, many runs of the economy that can be sorted out (cross-country comparisons can help, but the all else equal part is never satisfied making the comparisons suspect).

Despite a few research attempts such as the Billion Prices Project, "Little Data" -- with all the problems that come with it -- is a better description of empirical macroeconomics.

Monday, February 11, 2013

Phelps on Rational Expectations

Ed Phelps does not like rational expectations:

Expecting the Unexpected: An Interview With Edmund Phelps, by Caroline Baum, Commentary, Bloomberg: ...I talked with [Edmund Phelps] ... about his views on rational expectations...
Q: So how did adaptive expectations morph into rational expectations?
A: The "scientists" from Chicago and MIT came along to say, we have a well-established theory of how prices and wages work. Before, we used a rule of thumb to explain or predict expectations: Such a rule is picked out of the air. They said, let's be scientific. In their mind, the scientific way is to suppose price and wage setters form their expectations with every bit as much understanding of markets as the expert economist seeking to model, or predict, their behavior. ...
Q: And what's the consequence of this putsch?
A: Craziness for one thing. You’re not supposed to ask what to do if one economist has one model of the market and another economist a different model. The people in the market cannot follow both economists at the same time. One, if not both, of the economists must be wrong. ... Roman Frydman has made his career uncovering the impossibility of rational expectations in several contexts. ...
When I was getting into economics in the 1950s, we understood there could be times when a craze would drive stock prices very high. Or the reverse... But now that way of thinking is regarded by the rational expectations advocates as unscientific.
By the early 2000s, Chicago and MIT were saying we've licked inflation and put an end to unhealthy fluctuations -- only the healthy “vibrations” in rational expectations models remained. Prices are scientifically determined, they said. Expectations are right and therefore can't cause any mischief.
At a celebration in Boston for Paul Samuelson in 2004 or so, I had to listen to Ben Bernanke and Olivier Blanchard ... crowing that they had conquered the business cycle of old by introducing predictability in monetary policy making, which made it possible for the public to stop generating baseless swings in their expectations and adopt rational expectations...
Q: And how has that worked out?
A: Not well! ...
[There's more in the full interview.]

Friday, January 25, 2013

'Misinterpreting the History of Macroeconomic Thought'

Simon Wren-Lewis argues that the "crisis view" of change in macroeconomic theory is too simple:

Misinterpreting the history of macroeconomic thought, mainly macro: An attractive way to give a broad sweep over the history of macroeconomic ideas is to talk about a series of reactions to crises (see Matthew Klein and Noah Smith). However it is too simple, and misleads as a result. The Great Depression led to Keynesian economics. So far so good. The inflation of the 1970s led to? Monetarism - well, maybe in terms of a few brief policy experiments in the early 1980s, but Monetarist-Keynesian debates were going strong before the 1970s. The New Classical revolution? Well, rational expectations can be helpful in adapting the Phillips curve to explain what happened in the 1970s, but I’m not sure that was the main reason why the idea was so rapidly adopted. The New Classical revolution was much more than rational expectations.

The attempt gets really off beam if we try and suggest that the rise of RBC models was a response to the inflation of the 1970s. I guess you could argue that the policy failures of the 1970s were an example of the Lucas critique, and that to avoid similar mistakes macroeconomists needed to develop microfounded models. But if explaining the last crisis really was the prime motivation, would you develop models in which there was no Phillips curve, and which made no attempt to explain the inflation of the 1970s (or indeed, the previous crisis - the Great Depression)?

What the ‘macroeconomic ideas develop as a response to crises’ story leaves out is the rest of economics, and ideology. The Keynesian revolution (by which I mean macroeconomics after the second world war) can be seen as a methodological revolution. Models were informed by theory, but their equations were built to explain the data. Time series econometrics played an essential role. However this appeared to be different from how other areas of the discipline worked. In these other areas of economics, explaining behavior in terms of optimization by individual agents was all important. This created a tension, and a major divide within economics as a whole. Macro appeared quite different from micro.

A particular manifestation of this was the constant question: where is the source of the market failure that gives rise to the business cycle? Most macroeconomists replied sticky prices, but this prompted the follow-up question: why do rational firms or workers choose not to change their prices? The way most macroeconomists at the time chose to answer this was that expectations were slow to adjust. It was a disastrous choice, but I suspect one that had very little to do with the nature of Keynesian theory, and rather more to do with the analytical convenience of adaptive expectations. Anyhow, that is another story.

The New Classical revolution was in part a response to that tension. In methodological terms it was a counter revolution, trying to take macroeconomics away from the econometricians, and bring it back to something microeconomists could understand. Of course it could point to policy in the 1970s as justification, but I doubt that was the driving force. I also think it is difficult to fully understand the New Classical revolution, and the development of RBC models, without adding in some ideology. 

Does this have anything to tell us about how macroeconomics will respond to the Great Recession? I think it does. If you bought the ‘responding to the last crisis’ narrative, you would expect to see some sea change, akin to Keynesian economics or the New Classical revolution. I suspect you would be disappointed. While I see plenty of financial frictions being added to DSGE models, I do not see any significant body of macroeconomists wanting to ply their trade in a radically different way. If this crisis is going to generate a new revolution in macroeconomics, where are the revolutionaries? However, if you read the history of macro thought the way I do, then macro crises are neither necessary nor sufficient for revolutions in macro thought. Perhaps there was only one real revolution, and we have been adjusting to the tensions that created ever since.  

Let me follow up on the ideological point with an example. Prior to the New Classical revolution in the 1970s (which, contra some recent descriptions, is different from DSGE models), the people who do not believe that government intervention can do good had a problem. It was very clear in the data that there was a positive correlation between changes in the money supply and changes in employment and real income. Further, though this is harder to establish, the relationship appeared causal. Money caused income, and this suggested that government could use monetary policy to stabilize the economy.

The (neo)classical model, with its vertical AS curve, could not explain the positive money-income correlation in the data. In the typical classical formulation, so long as prices are perfectly flexible and all markets clear at all points in time, the economy is always in long-run equilibrium. Thus, in these models the prediction is a zero correlation between money and income. But it wasn't zero.

However, a very clever idea from Robert Lucas in the 1970s allowed this correlation to be explained without admitting government can do good, i.e. without admitting that government can stabilize the economy using monetary policy. This is the ideological part -- a way to explain the data without acknowledging a role for government at the same time. I can't say that Lucas approached the problem in this way, i.e. that he started out with the ideological goal of explaining the money-income correlation without allowing a role for government. Maybe it arose in a flash of brilliance completely unconnected to ideological concerns. But I find it hard to explain why this model came about in the form it did without ideology, and the view of government the New Classical model supported surely didn't hurt its acceptance at places like the University of Chicago (as it existed then).
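To see how the trick works, here is a toy simulation -- my own sketch, not Lucas's actual model, with illustrative parameter values -- in which output responds only to money surprises, yet money and output still end up positively correlated in the simulated data:

```python
# Output moves only with money *surprises*; systematic (anticipated) policy
# does nothing, yet the raw money-output correlation is clearly positive.
import numpy as np

rng = np.random.default_rng(0)
T, rho = 10_000, 0.8                  # illustrative persistence of money growth

surprise = rng.normal(size=T)         # unexpected money
m = np.zeros(T)                       # money growth
for t in range(1, T):
    m[t] = rho * m[t - 1] + surprise[t]   # anticipated part + surprise

y = surprise + 0.5 * rng.normal(size=T)   # output responds only to surprises

anticipated = np.concatenate(([0.0], rho * m[:-1]))
print(f"corr(money, output)       = {np.corrcoef(m, y)[0, 1]:+.2f}")  # positive
print(f"corr(anticipated, output) = {np.corrcoef(anticipated, y)[0, 1]:+.2f}")  # ~ 0
```

The raw correlation looks like "money causes income," but any systematic stabilization rule here is useless by construction -- exactly the property that made the model so attractive to opponents of activist policy.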

Wednesday, January 09, 2013

'During Periods of Extreme Growth and Decline, Human Behavior is Not the Same'

From an interview of MIT's Andrew Lo:

Q: Many people believe that the financial crisis revealed major shortcomings in the discipline of economics, and one of the goals of your book is to consider what economic theory tells us about the links between finance and the rest of the economy. Do you feel that economists understand enough about the nature of financial instability or liquidity crises?
A: I think that the financial crisis was an important wake-up call to all economists that we need to change the way we approach our discipline. While economics has made great strides in modeling liquidity risk, financial contagion, and market bubbles and crashes, we haven't done a very good job of integrating these models into broader macroeconomic policy tools. That's the focus of a lot of recent activity in macro and financial economics and the hope is that we'll be able to do better in the near future.
Q: Let me continue briefly on this thread. One topic that has been particularly controversial concerns the efficient-market hypothesis (EMH). Burton Malkiel discusses the issue in his chapter in Rethinking the Financial Crisis, but I wanted to ask your opinion of this idea that EMH fed a hands-off regulatory approach that ignored concerns about faulty asset pricing.
A: There's no doubt that EMH and its macroeconomic cousin, Rational Expectations, played a significant role in how regulators approached their responsibilities. However, we should keep in mind that market efficiency isn't wrong; it's just incomplete. Market participants do behave rationally under normal economic conditions, hence the current regulatory framework does serve a useful purpose during these periods. But during periods of extreme growth and decline, human behavior is not the same, and much of economic theory and regulatory policy does not yet reflect this new perspective of "Adaptive Markets."

Monday, January 07, 2013

The Reason We Lose at Games: Implications for Financial Markets

Something to think about:

The reason we lose at games, EurekAlert: Writing in PNAS, a University of Manchester physicist has discovered that some games are simply impossible to fully learn, or too complex for the human mind to understand.
Dr Tobias Galla from The University of Manchester and Professor Doyne Farmer from Oxford University and the Santa Fe Institute ran thousands of simulations of two-player games to see how human behavior affects players' decision-making.
In simple games with a small number of moves, such as Noughts and Crosses, the optimal strategy is easy to guess, and the game quickly becomes uninteresting.
However, when games become more complex and there are a lot of moves, such as in chess, the board game Go, or complex card games, the academics argue that players' actions become less rational and that it is hard to find optimal strategies.
This research could also have implications for the financial markets. Many economists base financial predictions of the stock market on equilibrium theory – assuming that traders are infinitely intelligent and rational.
This, the academics argue, is rarely the case and could lead to predictions of how markets react being wildly inaccurate.
Much of traditional game theory, the basis for strategic decision-making, is based on the equilibrium point – players or workers having a deep and perfect knowledge of what they are doing and of what their opponents are doing.
Dr Galla, from the School of Physics and Astronomy, said: "Equilibrium is not always the right thing you should look for in a game."
"In many situations, people do not play equilibrium strategies, instead what they do can look like random or chaotic for a variety of reasons, so it is not always appropriate to base predictions on the equilibrium model."
"With trading on the stock market, for example, you can have thousands of different stock to choose from, and people do not always behave rationally in these situations or they do not have sufficient information to act rationally. This can have a profound effect on how the markets react."
"It could be that we need to drop these conventional game theories and instead use new approaches to predict how people might behave."
Together with a Manchester-based PhD student, the pair are looking to expand their study to multi-player games and to cases in which the game itself changes with time, which would be a closer analogy of how financial markets operate.
Preliminary results suggest that as the number of players increases, the chances that equilibrium is reached decrease. Thus for complicated games with many players, such as financial markets, equilibrium is even less likely to be the full story.
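For a sense of the kind of experiment involved, here is a stylized stand-in -- my own sketch; the update rule and every parameter value are illustrative, not the paper's specification -- in which two players reinforce moves by expected payoff in a game with random, unrelated payoffs:

```python
# Two players repeatedly adjust "attractions" to their moves in a random-payoff
# game; we track how much player 1's strategy mix keeps moving late in the run.
import numpy as np

rng = np.random.default_rng(1)
n = 50                                    # moves per player (a "complex" game)
A = rng.normal(size=(n, n))               # player 1's payoff matrix
B = rng.normal(size=(n, n))               # player 2's payoff matrix, unrelated to A

def softmax(q, intensity=5.0):
    z = np.exp(intensity * (q - q.max()))
    return z / z.sum()

qa, qb = np.zeros(n), np.zeros(n)         # attractions to each move
memory = 0.1                              # weight on new experience
pa, churn = softmax(qa), []
for t in range(2000):
    pb = softmax(qb)
    qa = (1 - memory) * qa + memory * (A @ pb)    # reinforce by expected payoff
    qb = (1 - memory) * qb + memory * (B.T @ pa)
    new_pa = softmax(qa)
    churn.append(np.abs(new_pa - pa).sum())       # movement in player 1's mix
    pa = new_pa

# If this stays well above zero late in the run, play never settles on an
# equilibrium -- the non-convergence the article describes for complex games.
print(f"average late-run churn: {np.mean(churn[-500:]):.4f}")
```

Whether dynamics like these settle down depends on the payoff structure and the learning parameters, which is precisely the authors' point about complex games.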

Paul Krugman: The Big Fail

Who should be blamed for the slow recovery?:

The Big Fail, by Paul Krugman, Commentary, NY Times: It’s that time again: the annual meeting of the American Economic Association and affiliates... And this year, as in past meetings, there is one theme dominating discussion: the ongoing economic crisis.
This isn’t how things were supposed to be. If you had polled the economists attending this meeting three years ago, most of them would surely have predicted that by now we’d be talking about how the great slump ended, not why it still continues.
So what went wrong? The answer, mainly, is the triumph of bad ideas.
It’s tempting to argue that the economic failures of recent years prove that economists don’t have the answers. But the truth is ... standard economics offered good answers, but political leaders — and all too many economists — chose to forget or ignore what they should have known. ...
A smaller financial shock, like the dot-com bust at the end of the 1990s, can be met by cutting interest rates. But the crisis of 2008 was far bigger, and even cutting rates all the way to zero wasn’t nearly enough.
At that point governments needed to step in, spending to support their economies while the private sector regained its balance. And to some extent that did happen... Budget deficits rose, but this was actually a good thing, probably the most important reason we didn’t have a full replay of the Great Depression.
But it all went wrong in 2010. The crisis in Greece was taken, wrongly, as a sign that all governments had better slash spending and deficits right away. Austerity became the order of the day...
Of the papers presented at this meeting, probably the biggest splash came from one by Olivier Blanchard and Daniel Leigh of the International Monetary Fund. ... For what the paper concludes is not just that austerity has a depressing effect on weak economies, but that the adverse effect is much stronger than previously believed. The premature turn to austerity, it turns out, was a terrible mistake. ...
The really bad news is ... European leaders ... still insist that the answer is even more pain. ... And here in America, Republicans insist that they’ll use a confrontation over the debt ceiling ... to demand spending cuts that would drive us back into recession.
The truth is that we’ve just experienced a colossal failure of economic policy — and far too many of those responsible for that failure both retain power and refuse to learn from experience.

Sunday, January 06, 2013

Is Economics Divided into Warring Ideological Camps?

I spent quite a bit of time with Noah Smith at the ASSA meetings. At one point, we were at the St. Louis Fed reception and -- since he has no fear -- I suggested that he tell Randall Wright how well New Keynesian models work, which he did. I assumed he'd get a strong taste of the divide in macroeconomics:

Is economics divided into warring ideological camps?, by Noah Smith: This week I went to the American Economic Association's annual meeting, which was held in sunny San Diego, CA. I went to quite a number of interesting sessions, mostly on behavioral economics and finance. What an exciting field!
But anyway, I also went to an interesting session called "What do economists think about major public policy issues?" There were two papers presented, both of which were extremely relevant for much of the debate going on in the econ blogosphere.
The first paper, by Roger Gordon and Gordon Dahl of UC San Diego (aside: now I want to co-author with a guy whose last name is "Noah"!), was called "Views among Economists: Professional Consensus or Point-Counterpoint?" Gordon & Dahl surveyed 41 top economists about their views on 81 policy issues, and tried to determine A) how much disagreement there was, and B) how much disagreement was due to political ideology.
They found that top economists agree about a lot of things. ... On some other issues, opinion was all over the place. Gordon and Dahl also found that the differences that did exist couldn't easily be tied to individual characteristics like gender, experience working in Washington, etc. A panel discussant, Monika Piazzesi, did some further statistical analysis to show that the surveyed economists didn't clump up into "liberal" and "conservative" clusters. 
Conclusion: Economics, at least at the elite level, isn't divided into two warring ideological camps.
That doesn't mean there is no politicization. Justin Wolfers ... ranked the 41 top economists on a liberal/conservative scale according to his own intuition, and found that the economists he intuitively felt were liberal were more likely to support fiscal stimulus, and the conservatives less. He found a few other seemingly partisan differences this way, though not many. (Of course, one has to be careful with this type of analysis; if your ideas of who's "liberal" and who's "conservative" are formed by who supports stimulus and who opposes it, then of course you're going to see this type of effect!)
And of course, it's worth noting that the survey had a small sample, and included only "top" economists at major U.S. universities. There might be "long tails" of ideological bias lower down the prestige scale.
Paul Krugman, who was on the panel, suggested that politicization is mostly confined to the macro field. But even on the question of stimulus, most of the surveyed economists (80%) agreed that Obama's 2009 stimulus boosted output and employment (though fewer agreed that this boost was worth the long-term costs). So it seems that the few top economists who a few years ago were loudly saying that stimulus couldn't possibly work - Bob Lucas, Robert Barro, Gene Fama, etc. - were just a very vocal small minority.
These results surprised me. I'm so used to seeing top macroeconomists tangling with each other... And I had often heard that the appeal of certain classes of macro models - for example, RBC - came from their conservative policy implications. 
So maybe I've been wrong all this time! Or maybe there was more politicization of macro back in the 70s and 80s? 
Or maybe there is still politicization, but the economics profession has just shifted decisively to the center-left? After all, as of 2012, the consensus favorite modeling approach among pure macro people seems to be New Keynesian models of the type preferred by Krugman, not RBC-type models of the type supported by Bob Lucas, Robert Barro, and other "new classical" economists back in the 1980s. It could be that nowadays most economists are - as one person on the panel put it - "market-hugging Democrats". (Or it could be that New Keynesian models simply won the war of ideas. Or both.)
I'm not sure, but Gordon & Dahl's paper is definitely making me question my beliefs...

Saturday, December 29, 2012

'Is Academic Macroeconomics Flourishing?'

Simon Wren-Lewis continues the conversation on the state of academic macroeconomics:

Is academic macroeconomics flourishing?, by Simon Wren-Lewis: How do you judge the health of an academic discipline? Is macroeconomics rotten or flourishing? ...[A]cademic macroeconomics appears all over the place, with strong disputes between alternative schools.
Is this because the evidence in macroeconomics is so unclear that it becomes very difficult to judge different theories? I think the inexact nature of economics is a necessary condition for the lack of an academic consensus in macro, but it is not sufficient. (Mark Thoma has a recent post on this.) Consider monetary policy. I would argue that we have made great progress in both the analysis and practice of monetary policy over the last forty years. One important reason for that progress is the existence of a group that is often neglected - macroeconomists working in central banks.
Unlike their academic counterparts, the primary goal of these economists is not to innovate, but to examine the evidence and see what ideas work. The framework that most of these economists find most helpful is the New Neoclassical Synthesis, or equivalently New Keynesian theory. As a result, it has become the dominant paradigm in analyzing monetary policy.
That does not mean that every macroeconomist looking at monetary policy has to be a New Keynesian, or that central banks ignore other approaches. It is important that this policy consensus should be continually questioned, and part of a healthy academic discipline is that the received wisdom is challenged. However, it has to be acknowledged that policymakers who look at the evidence day in and day out believe that New Keynesian theory is the most useful framework currently around. I have no problem with academics saying ‘I know this is the consensus, but I think it is wrong’. However, to say ‘the jury is still out’ on whether prices are sticky is wrong. The relevant jury came to a verdict long ago.
It is obvious that when it comes to using fiscal policy in short-term macroeconomic stabilization there can be no equivalent claim to progress or consensus. The policy debates we have today do not seem to have advanced much since Keynes was alive. From one perspective this contrast is deeply puzzling. The science of fiscal policy is not inherently more complicated. ...
What has been missing with fiscal policy has been the equivalent of central bank economists whose job depends on taking an objective view of the evidence and doing the best they can with the ideas that academic macroeconomics provides. This group does not exist because the need to use fiscal policy for short term macroeconomic stabilization is occasional either in terms of time (when the Zero Lower Bound applies) or space (countries within the Eurozone). As a result, when fiscal policy was required to perform a stabilization role, policymakers had to rely on the academic community for advice, and here macroeconomics clearly failed. Pretty well any outside observer would describe its performance as rotten.
The contrast between monetary and fiscal policy tells us that this failure is not an inevitable result of the paucity of evidence in macroeconomics. I think it has a lot more to do with the influence of ideology, and the importance of what I have called the anti-Keynesian school that is a legacy of the New Classical revolution. The reasons why these influences are particularly strong when it comes to fiscal policy are fairly straightforward.
Two issues remain unclear for me. The first is how extensive this ideological bias is. Is the over-dominance of the microfoundations approach related to the fact that different takes on the evidence have an unfortunate Keynesian bias? Second, is the degree of ideological bias in macro generic, or is it in part contingent on the particular historical circumstances of the New Classical revolution? These questions are important in thinking about how this bias can be overcome.

When people ask if evidence matters in economics, I often point to the debate over the New Classical model's prediction that only unexpected changes in monetary policy matter for economic activity. These models, with their prediction that expected changes in monetary policy are neutral, cleverly allowed New Classical economists to explain the correlations between money, output, and prices in the data without admitting that systematic policy mattered. Thus, these models supported the ideological convictions of many on the right -- government intervention can make things worse, but not better. (Unexpected policy shocks push the economy away from the optimal outcome, so the key was to minimize unexpected policy shocks. This led to things like the push for transparency so that people would anticipate, as much as possible, actual policy moves.)

At first, the evidence seemed to support these models (e.g. Barro's empirical work), but as the evidence accumulated it eventually became clear that this prediction was wrong. Mishkin provided key evidence against these models through his academic work (see, for example, his book A Rational Expectations Approach to Macroeconometrics: Testing Policy Ineffectiveness and Efficient-Markets Models), so I am not as convinced as Simon Wren-Lewis that the difference between monetary and fiscal policy is due solely to the existence of technocratic, mostly non-ideological central bank economists letting the evidence take them where it may. That certainly mattered, but it seems there was more to it than this.
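For readers who haven't seen these tests, here is a stylized two-step version in the spirit of that literature -- my own simulated illustration, not Barro's or Mishkin's data or specifications -- where the simulated economy is "Keynesian" (all money growth matters) and the test correctly rejects policy ineffectiveness:

```python
# Step 1: estimate the money rule and split money growth into anticipated and
# surprise components. Step 2: regress output on both; policy ineffectiveness
# predicts a zero coefficient on the anticipated part.
import numpy as np

rng = np.random.default_rng(7)
T, rho = 5_000, 0.8

shock = rng.normal(size=T)
m = np.zeros(T)                                   # money growth
for t in range(1, T):
    m[t] = rho * m[t - 1] + shock[t]

y = 0.8 * m + rng.normal(size=T)                  # output responds to ALL of m here

m_lag = np.concatenate(([0.0], m[:-1]))
rho_hat = np.linalg.lstsq(m_lag[:, None], m, rcond=None)[0][0]
anticipated, surprise = rho_hat * m_lag, m - rho_hat * m_lag

X = np.column_stack([np.ones(T), anticipated, surprise])
b = np.linalg.lstsq(X, y, rcond=None)[0]
print(f"anticipated money coef: {b[1]:.2f}   surprise coef: {b[2]:.2f}")
# Both come out near 0.8, so the anticipated component clearly matters --
# the pattern that told against the policy-ineffectiveness proposition.
```

In the actual debate the first-stage money rule and the output equation were far richer, but the logic of the test is the same: if only surprises matter, the anticipated component should carry a zero coefficient, and in the data it did not.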

The evidence that Mishkin and others provided was a key reason these models were rejected (it was also difficult to simultaneously explain the magnitude and duration of business cycles with unexpected monetary shocks as the sole driving force), but when it comes to fiscal policy, as noted above, evidence has not trumped ideology to the same degree. One of the reasons for this, I think, is that it's difficult to find clear fiscal policy experiments in the data to evaluate. And when we do (e.g. wars), it's difficult to know if the results will hold at other times. But I can't really disagree with the hypothesis that if an institution like the Fed existed for fiscal policy, there would be a much bigger demand for this information, and that demand would have produced a much larger supply of evidence.

But I am not so sure the difference is "central bank economists whose job depends on taking an objective view of the evidence" so much as it is that these institutions produce a demand for this type of research, and academics respond by supplying the information that central banks need. So the question for me is whether it's the lack of ideology of central bank economists (many of whom are academics), or the fact that their existence creates a large demand for this type of information. Maybe it's both.

Friday, December 28, 2012

Taylor Rules and NGDP Targets: Skeptical Davids

One of the big, current, passionate debates within monetary policy is the relative effectiveness of Taylor Rules versus nominal GDP targeting (e.g. see here). Which of the two does a better job of stabilizing the economy?

If you want to argue against nominal GDP targeting, David Altig of the Atlanta Fed has some ammunition for you. Here's his conclusion:

Nominal GDP Targeting: Still a Skeptic, macroblog: ... To summarize my concerns, the Achilles' heel of nominal GDP targeting is that it provides a poor nominal anchor in an environment in which there is great uncertainty about the path of potential real GDP. As I noted in my earlier post, there is historical justification for that concern.
Basically, anyone puzzling through how demographics are affecting labor force participation rates, how technology is changing the dynamics of job creation, or how policy might be altering labor supply should feel some humility about where potential GDP is headed. For me, a lack of confidence in the path of real GDP takes a lot of luster out of the idea of a nominal GDP target.

Taylor rule skeptics can turn to David Andolfatto of the St. Louis Fed:

On the perils of Taylor rules, macromania: In the Seven Faces of "The Peril" (2010), St. Louis Fed president Jim Bullard speculated on the prospect of the U.S. falling into a Japanese-style deflationary outcome. His analysis was built on an insight of Benhabib, Schmitt-Grohe, and Uribe (2001) in The Perils of Taylor Rules.

These authors (BSU) showed that if monetary policy is conducted according to a Taylor rule, and if there is a zero lower bound (ZLB) on the nominal interest rate, then there are generally two steady-state equilibria. In one equilibrium--the "intended" outcome--the nominal interest rate and inflation rate are on target. In the other equilibrium--the "unintended" outcome--the nominal interest rate and inflation rate are below target--the economy is in a "liquidity trap."

As BSU stress, the multiplicity of outcomes occurs even in economies where prices are perfectly flexible. All that is required are three (non-controversial) ingredients: [1] a Fisher equation; [2] a Taylor rule; and [3] a ZLB.

Back in 2010, I didn't take this argument very seriously. In part it was because the so-called "unintended" outcome was more efficient than the "intended" outcome (at least, in the version of the model with flexible prices). To put things another way, the Friedman rule turns out to be good policy in a wide class of models. I figured that other factors were probably more important for explaining the events unfolding at that time.

Well, maybe I was a bit too hasty. Let me share with you my tinkering with a simple OLG model... Unfortunately, what follows is a bit on the wonkish side...

[My comments on this topic are highlighted in the first link, i.e. the one to David Altig's post at macroblog.]
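To see the BSU multiplicity concretely, here is a minimal numeric sketch -- the parameter values are my own and purely illustrative, not anything from Andolfatto's post -- showing that the Fisher equation and a ZLB-truncated Taylor rule cross twice, once at the intended target and once in a deflation trap:

```python
# The three ingredients from the quoted passage: a Fisher equation, a Taylor
# rule, and a zero lower bound. A steady state is any inflation rate at which
# the two interest-rate schedules agree; with phi > 1 there are two of them.
import numpy as np

r, pi_star, phi = 0.02, 0.02, 1.5      # real rate, inflation target, Taylor coefficient
i_star = r + pi_star                   # nominal rate at the intended steady state

def taylor(pi):
    """Taylor rule truncated at the zero lower bound."""
    return np.maximum(0.0, i_star + phi * (pi - pi_star))

def fisher(pi):
    """Steady-state Fisher equation: nominal rate = real rate + inflation."""
    return r + pi

pi = np.linspace(-0.04, 0.06, 9999)    # grid of candidate inflation rates
gap = taylor(pi) - fisher(pi)
for k in np.where(np.diff(np.sign(gap)) != 0)[0]:
    print(f"steady state near pi = {pi[k]:+.4f}")   # ~ +0.02 (intended), -0.02 (ZLB)
```

The "unintended" steady state sits at an inflation rate of minus the real rate, with the nominal rate stuck at zero -- the liquidity trap outcome Bullard was worried about.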

Thursday, December 27, 2012

Will Macroeconomists Ever Agree?

Kevin Drum wonders if macroeconomists will ever be able to agree:

The part I can't figure out is why there's so much contention even within the field. In physics and climate science, the cranks are almost all nonspecialists with an axe to grind. Actual practitioners agree pretty broadly on at least the basics. But in macroeconomics you don't have that. There are still polar disagreements among top names on some of the most basic questions. Even given the complexity of the field, that's a bit of a mystery. It's understandable that economics is a more politicized field than physics, but in practice it seems to be almost 100 percent politicized, with the battles fought out by streams of Greek letters demonstrating, as Matt says, just about anything. I wonder if this is ever likely to change? Or will changes in the real world always outpace our ability to build consensus on how the economy actually works?

I took a shot at answering this in April 2011:

... Why can’t economists tell us what happens when government spending goes up or down, taxes change, or the Fed changes monetary policy? The stumbling block is that economics is fundamentally a non-experimental science, particularly in the realm of macroeconomics. Unlike disciplines such as physics, we can't go into the laboratory and rerun the economy again and again under different conditions to measure, say, the average effect of monetary and fiscal policy. We only have one realization of the macroeconomy to use to answer important policy questions, and that limits the precision of the answers we can give. In addition, because the data are historical rather than experimental, we cannot look at the relationships among a set of variables in isolation while holding all the other variables constant, as you might do in a lab, and this also reduces the precision of our estimates.
Because we only have a single realization of history rather than laboratory data to investigate economic issues, macroeconomic theorists have full knowledge of past data as they build their models. It would be a waste of time to build a model that doesn't fit this one realization of the macroeconomy, and fit it well, and that is precisely what has been done. Unfortunately, there are two models that fit the data, and the two models have vastly different implications for monetary and fiscal policy. ... [This leads to passionate debates about which model is best.]
But even if we had perfect models and perfect data, there would still be uncertainties and disagreements over the proper course of policy. Economists are hindered by the fact that people and institutions change over time in a way that the laws of physics do not. Thus, even if we had the ability to do controlled and careful experiments, there is no guarantee that what we learn would remain valid in the future.
Suppose that we somehow overcome every one of these problems. Even then, disagreements about economic policy would persist in the political arena. Even with full knowledge about how, say, a change in government spending financed by a tax increase will affect the economy now and in the future, ideological differences across individuals will lead to different views on the net social value of these policies. Those on the left tend to value the benefits more highly, and place less weight on the costs, than those on the right, and this leads to fundamental, insoluble differences over the course of economic policy. ...
Progress in economics may someday narrow the partisan divide over economic policy, but even perfect knowledge about the economy won’t eliminate the ideological differences that are the source of so much passion in our political discourse.

A follow-up post in February emphasizes the point that it is not at all clear that the strong divides in economics can be settled with data, but it's not completely hopeless:

...the ability to choose one model over the other is not quite as hopeless as I’ve implied. New data and recent events like the Great Recession push these models into uncharted territory and provide a way to assess which model provides better predictions. However, because of our reliance on historical data this is a slow process – we have to wait for data to accumulate – and there’s no guarantee that once we are finally able to pit one model against the other we will be able to crown a winner. Both models could fail...

I think the Great Recession has, for example, provided evidence that the NK model explains events better than its competitors, but it is far from a satisfactory construction, and it would be hard to call its forecasting and explanatory abilities a success.

Here's another post from the past (Sept. 2009) on this topic:

... There is no grand, unifying theoretical structure in economics. We do not have one model that rules them all. Instead, what we have are models that are good at answering some questions - the ones they were built to answer - and not so good at answering others.
If I want to think about inflation in the very long run, the classical model and the quantity theory are a very good guide. But the model is not very good at looking at the short run. For questions about how output and other variables move over the business cycle and for advice on what to do about it, I find the Keynesian model in its modern form (i.e. the New Keynesian model) to be much more informative than other models that are presently available (as to how far this kind of "eclecticism" will get you in academia, I'll just note that this is exactly the advice Mishkin gives in his textbook on monetary theory and policy).
But the New Keynesian model has its limits. It was built to capture "ordinary" business cycles driven by price rigidities of the sort that can be captured by the Calvo model of price rigidity. The standard versions of this model do not explain how financial collapses of the type we just witnessed come about, hence they have little to say about what to do about them (which makes me suspicious of the results touted by people using multipliers derived from DSGE models based upon ordinary price rigidities). For these types of disturbances, we need some other type of model, but it is not clear what model is needed. There is no generally accepted model of financial catastrophe that captures the variety of financial market failures we have seen in the past.
But what model do we use? Do we go back to old Keynes, or to the 1978 model that Robert Gordon likes? Do we take some of the variations of the New Keynesian model that include effects such as financial accelerators and try to enhance those - is that the right direction to proceed? Are the Austrians right? Do we focus on Minsky? Or do we need a model that we haven't discovered yet?
We don't know, and until we do, I will continue to use the model I think gives the best answer to the question being asked. The reason that many of us looked backward for a model to help us understand the present crisis is that none of the current models were capable of explaining what we were going through. The models were largely constructed to analyze policy in the context of a Great Moderation, i.e. within a fairly stable environment. They had little to say about financial meltdown. My first reaction was to ask if the New Keynesian model had any derivative forms that would allow us to gain insight into the crisis and what to do about it and, while there were some attempts in that direction, the work was somewhat isolated and had not gone through the thorough analysis that is needed to develop robust policy prescriptions. There was something to learn from these models, but they really weren't up to the task of delivering specific answers. That may come, but we aren't there yet.
So, if nothing in the present is adequate, you begin to look to the past. The Keynesian model was constructed to look at exactly the kinds of questions we needed to answer, and as long as you are aware of the limitations of this framework - the ones that modern theory has discovered - it does provide you with a means of thinking about how economies operate when they are running at less than full employment. This model had already worried about fiscal policy at the zero interest rate bound, it had already thought about Say's law, the paradox of thrift, monetary versus fiscal policy, changing interest and investment elasticities in a crisis, etc., etc., etc. We were in the middle of a crisis and didn't have time to wait for new theory to be developed; we needed answers, answers that the elegant models constructed over the last few decades simply could not provide. The Keynesian model did provide answers. We knew the answers had limitations - we were aware of the theoretical developments in modern macro and what they implied about the old Keynesian model - but it also provided guidance at a time when guidance was needed, and it did so within a theoretical structure that was built to be useful at times like we were facing. I wish we had better answers, but we didn't, so we did the best we could, and the best we could involved at least asking what the Keynesian model would tell us, and then asking if that advice had any relevance today. Sometimes it didn't, but that was no reason to ignore the answers when it did.

Part of the disagreement is over the ability of this approach -- using an older model guided by newer insights (e.g. that expectations of future output matter for the "IS curve") -- to deliver reliable answers and policy prescriptions.

More on this from another past post (March 2009):

Models are built to answer questions, and the models economists have been using do, in fact, help us find answers to some important questions. But the models were not very good (at all) at answering the questions that are important right now. They have been largely stripped of their usefulness for actual policy in a world where markets simply break down.
The reason is that in order to get to mathematical forms that can be solved, the models had to be simplified. And when they are simplified, something must be sacrificed. So what do you sacrifice? Hopefully, it is the ability to answer questions that are the least important, so the modeling choices that are made reveal what the modelers thought was most and least important.
The models we built were very useful for asking whether the federal funds rate should go up or down a quarter point when the economy was hovering in the neighborhood of full employment, or when we found ourselves in mild, "normal" recessions. The models could tell us what type of monetary policy rule is best for stabilizing the economy. But the models had almost nothing to say about a world where markets melt down, where prices depart from fundamentals, or when markets are incomplete. When this crisis hit, I looked into our tool bag of models and policy recommendations and came up empty for the most part. It was disappointing. There was really no choice but to go back to older Keynesian style models for insight.
The reason the Keynesian model is finding new life is that it was specifically built to answer the questions that are important at the moment. The theorists who built modern macro models, those largely in control of where the profession has spent its effort in recent decades, did not even envision that this could happen, let alone build it into their models. Markets work, they don't break down - so why waste time thinking about those possibilities?
So it's not the math: the modeling choices that were made, and the inevitable sacrifices of realism they entailed, reflected the importance those making the choices gave to various questions. We weren't forced to this end by the mathematics; we asked the wrong questions and built the wrong models.
New Keynesians have been trying to answer: Can we, using equilibrium models with rational agents and complete markets, add frictions - e.g., sluggish wage and price adjustment of the sort you'll see called "Calvo pricing" - in a way that allows us to approximate the actual movements in key macroeconomic variables over the last 40 or 50 years?
Real Business Cycle theorists also use equilibrium models with rational agents and complete markets, and they look at whether supply-side shocks such as shocks to productivity or labor supply can, by themselves, explain movements in the economy. They largely reject demand-side explanations for movements in macro variables.
The fight - and the main question in academia - has been about what drives macroeconomic variables in normal times: demand-side shocks (monetary policy, fiscal policy, investment, net exports) or supply-side shocks (productivity, labor supply). And it's been a fairly brutal fight at times - you've seen some of that come out during the current policy debate. That debate within the profession has dictated the research agenda.
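As an aside on the "Calvo pricing" friction mentioned above, here is a toy sketch of how it generates real effects from nominal shocks -- my own simplification, in which resetting firms jump straight to the flexible-price target rather than solving the forward-looking reset problem of the full New Keynesian model:

```python
# Each period only a fraction 1-theta of firms can change price, so the
# aggregate price level adjusts slowly and nominal demand moves output.
import numpy as np

theta = 0.75                 # probability a firm is stuck with last period's price
T = 16
m = np.full(T, 1.0)          # permanent 1% nominal demand increase at t = 0

p = np.zeros(T)              # log aggregate price level
y = np.zeros(T)              # log output gap
for t in range(T):
    p_prev = p[t - 1] if t > 0 else 0.0
    p[t] = theta * p_prev + (1 - theta) * m[t]   # only a fraction 1-theta adjust
    y[t] = m[t] - p[t]                           # unabsorbed demand shows up as output

print("output gap path:", np.round(y[:8], 3))    # real effects that fade as prices adjust
# With theta = 0 (flexible prices), p jumps to m immediately and money is neutral.
```

The point of the friction is visible in the output path: the nominal shock has real effects that die out geometrically as more firms get to reset their prices.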
What happens in non-normal times, i.e. when markets break down, or when markets are not complete, agents are not rational, etc., was far down the agenda of important questions, partly because those in control of the journals, those who largely dictated the direction of research, did not think those questions were very important (some don't even believe that policy can help the economy, so why put effort into studying it?).
I think that the current crisis has dealt a bigger blow to macroeconomic theory and modeling than many of us realize.

Here's yet another past post (August 2009) on the general topic of the usefulness of macroeconomic models, though I'm not quite as bullish on the ability of existing models to provide guidance as I was when I wrote this. The point is that although many people use forecasting ability as a metric to measure the usefulness of models (because where the economy is headed is the most important question to them), that's not the only use of these models:

Are Macroeconomic Models Useful?: There has been no shortage of effort devoted to predicting earthquakes, yet we still can't see them coming far enough in advance to move people to safety. When a big earthquake hits, it is a surprise. We may be able to look at the data after the fact and see that certain stresses were building, so it looks like we should have known an earthquake was going to occur at any moment, but these sorts of retrospective analyses have not allowed us to predict the next one. The exact timing and location is always a surprise.
Does that mean that science has failed? Should we criticize the models as useless?
No. There are two uses of models. One is to understand how the world works, another is to make predictions about the future. We may never be able to predict earthquakes far enough in advance and with enough specificity to allow us time to move to safety before they occur, but that doesn't prevent us from understanding the science underlying earthquakes. Perhaps as our understanding increases prediction will be possible, and for that reason scientists shouldn't give up trying to improve their models, but for now we simply cannot predict the arrival of earthquakes.
However, even though earthquakes cannot be predicted, at least not yet, it would be wrong to conclude that science has nothing to offer. First, understanding how earthquakes occur can help us design buildings and make other changes to limit the damage even if we don't know exactly when an earthquake will occur. Second, if an earthquake happens and, despite our best efforts to insulate against it, there are still substantial consequences, science can help us to offset and limit the damage. To name just one example, the science surrounding disease transmission helps us avoid contaminated water supplies after a disaster, something that often compounds tragedy when this science is not available. But there are lots of other things we can do as well, including using the models to determine where help is most needed.
So even if we cannot predict earthquakes, and we can't, the models are still useful for understanding how earthquakes happen. This understanding is valuable because it helps us to prepare for disasters in advance, and to determine policies that will minimize their impact after they happen.
All of this can be applied to macroeconomics. Whether or not we should have predicted the financial earthquake is a question that has been debated extensively, so I am going to set that aside. One side says financial market price changes, like earthquakes, are inherently unpredictable -- we will never predict them no matter how good our models get (the efficient markets types). The other side says the stresses that were building were obvious. Like the stresses that build when tectonic plates moving in opposite directions rub against each other, it was only a question of when, not if. (But even when increasing stress between two plates is observable, scientists cannot tell you for sure if a series of small earthquakes will relieve the stress and do little harm, or if there will be one big adjustment that relieves the stress all at once. With respect to the financial crisis, economists expected lots of little, small harm-causing adjustments; instead we got the "big one," and the "buildings and other structures" we thought could withstand the shock all came crumbling down.) ...
Whether the financial crisis should have been predicted or not, the fact that it wasn't predicted does not mean that macroeconomic models are useless any more than the failure to predict earthquakes implies that earthquake science is useless. As with earthquakes, even when prediction is not possible (or missed), the models can still help us to understand how these shocks occur. That understanding is useful for getting ready for the next shock, or even preventing it, and for minimizing the consequences of shocks that do occur. 
But we have done much better at dealing with the consequences of unexpected shocks ex-post than we have at getting ready for these a priori. Our equivalent of getting buildings ready for an earthquake before it happens is to use changes in institutions and regulations to insulate the financial sector and the larger economy from the negative consequences of financial and other shocks. Here I think economists made mistakes - our "buildings" were not strong enough to withstand the earthquake that hit. We could argue that the shock was so big that no amount of reasonable advance preparation would have stopped the "building" from collapsing, but I think it's more the case that enough time has passed since the last big financial earthquake that we forgot what we needed to do. We allowed new buildings to be constructed without the proper safeguards.
However, that doesn't mean the models themselves were useless. The models were there and could have provided guidance, but the implied "building codes" were ignored. Greenspan and others assumed no private builder would ever construct a building that couldn't withstand an earthquake; the market would force them to take this into consideration. But they were wrong about that, and even Greenspan now admits that government building codes are necessary. It wasn't the models, it was how they were used (or rather not used) that prevented us from putting safeguards into place. ...
I'd argue that our most successful use of models has been in cleaning up after shocks rather than predicting, preventing, or insulating against them through pre-crisis preparation. When despite our best effort to prevent it or to minimize its impact a priori, we get a recession anyway, we can use our models as a guide to monetary, fiscal, and other policies that help to reduce the consequences of the shock (this is the equivalent of, after a disaster hits, making sure that the water is safe to drink, people have food to eat, there is a plan for rebuilding quickly and efficiently, etc.). As noted above, we haven't done a very good job at predicting big crises, and we could have done a much better job at implementing regulatory and institutional changes that prevent or limit the impact of shocks. But we do a pretty good job of stepping in with policy actions that minimize the impact of shocks after they occur. This recession was bad, but it wasn't another Great Depression like it might have been without policy intervention.
Whether or not we will ever be able to predict recessions reliably, it's important to recognize that our models still provide considerable guidance for actions we can take before and after large shocks that minimize their impact and maybe even prevent them altogether (though we will have to do a better job of listening to what the models have to say). Prediction is important, but it's not the only use of models.

Thursday, December 20, 2012

'Missing the Point in the Economists' Debate'

More on the macro wars. This is from Beyond Mechanical Markets: Asset Price Swings, Risk, and the Role of the State, by Roman Frydman & Michael D. Goldberg:
... To be sure, the upswing in house prices in many markets around the country in the 2000s did reach levels that history and the subsequent long downswings tell us were excessive. But, as we show in Part II, such excessive fluctuations should not be interpreted to mean that asset-price swings are unrelated to fundamental factors. In fact, even if an individual is interested only in short-term returns—a feature of much trading in many markets—the use of data on fundamental factors to forecast these returns is extremely valuable. And the evidence that news concerning a wide array of fundamentals plays a key role in driving asset-price swings is overwhelming.[16]
Missing the Point in the Economists’ Debate
Economists concluded that fundamentals do not matter for asset-price movements because they could not find one overarching relationship that could account for long swings in asset prices. The constraint that economists should consider only fully predetermined accounts of outcomes has led many to presume that some or all participants are irrational, in the sense that they ignore fundamentals altogether. Their decisions are thought to be driven purely by psychological considerations.
The belief in the scientific stature of fully predetermined models, and in the adequacy of the Rational Expectations Hypothesis to portray how rational individuals think about the future, extends well beyond asset markets. Some economists go as far as to argue that the logical consistency that obtains when this hypothesis is imposed in fully predetermined models is a precondition of the ability of economic analysis to portray rationality and truth.
For example, in a well-known article published in The New York Times Magazine in September 2009, Paul Krugman (2009, p. 36) argued that Chicago-school free-market theorists “mistook beauty . . . for truth.” One of the leading Chicago economists, John Cochrane (2009, p. 4), responded that “logical consistency and plausible foundations are indeed ‘beautiful’ but to me they are also basic preconditions for ‘truth.’” Of course, what Cochrane meant by plausible foundations were fully predetermined Rational Expectations models. But, given the fundamental flaws of fully predetermined models, focusing on their logical consistency or inconsistency, let alone that of the Rational Expectations Hypothesis itself, can hardly be considered relevant to a discussion of the basic preconditions for truth in economic analysis, whatever “truth” might mean.
There is an irony in the debate between Krugman and Cochrane. Although the New Keynesian and behavioral models, which Krugman favors,[17] differ in terms of their specific assumptions, they are every bit as mechanical as those of the Chicago orthodoxy. Moreover, these approaches presume that the Rational Expectations Hypothesis provides the standard by which to define rationality and irrationality.[18]
Behavioral economics provides a case in point. After uncovering massive evidence that the contemporary economics’ standard of rationality fails to capture adequately how individuals actually make decisions, the only sensible conclusion to draw was that this standard was utterly wrong. Instead, behavioral economists, applying a variant of Brecht’s dictum, concluded that individuals are irrational.[19]
To justify that conclusion, behavioral economists and nonacademic commentators argued that the standard of rationality based on the Rational Expectations Hypothesis works—but only for truly intelligent investors. Most individuals lack the abilities needed to understand the future and correctly compute the consequences of their decisions.[20]
In fact, the Rational Expectations Hypothesis requires no assumptions about the intelligence of market participants whatsoever (for further discussion, see Chapters 3 and 4). Rather than imputing superhuman cognitive and computational abilities to individuals, the hypothesis presumes just the opposite: market participants forgo using whatever cognitive abilities they do have. The Rational Expectations Hypothesis supposes that individuals do not engage actively and creatively in revising the way they think about the future. Instead, they are presumed to adhere steadfastly to a single mechanical forecasting strategy at all times and in all circumstances. Thus, contrary to widespread belief, the Rational Expectations Hypothesis has no connection to how even minimally reasonable profit-seeking individuals forecast the future in real-world markets. When new relationships begin driving asset prices, they supposedly look the other way, and thus either abjure profit-seeking behavior altogether or forgo profit opportunities that are in plain sight.
The Distorted Language of Economic Discourse
It is often remarked that the problem with economics is its reliance on mathematical apparatus. But our criticism is not focused on economists' use of mathematics. Instead, we criticize the contemporary portrayal of the market economy as a mechanical system. Its scientific pretense, and the claim that its conclusions follow as a matter of straightforward logic, have made informed public discussion of various policy options almost impossible.
Doubters have often been made to seem as unreasonable as those who deny the theory of evolution or that the earth is round. Indeed, public debate is further distorted by the fact that economists formalize notions like “rationality” or “rational markets” in ways that have little or no connection to how non-economists understand these terms. When economists invoke rationality to present or legitimize their public-policy recommendations, non-economists interpret such statements as implying reasonable behavior by real people. In fact, as we discuss extensively in this book, economists’ formalization of rationality portrays obviously irrational behavior in the context of real-world markets.
Such inversions of meaning have had a profound impact on the development of economics itself. For example, having embraced the fully predetermined notion of rationality, behavioral economists proceeded to search for reasons, mostly in psychological research and brain studies, to explain why individual behavior is so grossly inconsistent with that notion—a notion that had no connection with reasonable real-world behavior in the first place.
Moreover, as we shall see, the idea that economists can provide an overarching account of markets, which has given rise to fully predetermined rationality, misses what markets really do. ...
Footnotes
16 See Chapters 7-9 for an extensive discussion of the role of fundamentals in driving price swings in asset markets and their interactions with psychological factors.
17 For example, in discussing the importance of the connection between the financial system and the wider economy for understanding the crisis and thinking about reform, Krugman endorses the approach taken by Bernanke and Gertler. (For an overview of these models, see Bernanke et al., 1999.) However, as pioneering as these models are in incorporating the financial sector into macroeconomics, they are fully predetermined and based on the Rational Expectations Hypothesis. As such, they suffer from the same fundamental flaws that plague other contemporary models. When used to analyze policy options, these models presume not only that the effects of contemplated policies can be fully pre-specified by a policymaker, but also that nothing else genuinely new will ever happen. Supposedly, market participants respond to policy changes according to the REH-based forecasting rules. See footnote 3 in the Introduction and Chapter 2 for further discussion.
18 The convergence in contemporary macroeconomics has become so striking that by now the leading advocates of both the “freshwater” New Classical approach and the “saltwater” New Keynesian approach, regardless of their other differences, extol the virtues of using the Rational Expectations Hypothesis in constructing contemporary models. See Prescott (2006) and Blanchard (2009). It is also widely believed that reliance on the Rational Expectations Hypothesis makes New Keynesian models particularly useful for policy analysis by central banks. See footnote 7 in this chapter and Sims (2010). For further discussion, see Frydman and Goldberg (2008).
19 Following the East German government’s brutal repression of a worker uprising in 1953, Bertolt Brecht famously remarked, “Wouldn’t it be easier to dissolve the people and elect another in their place?”
20 Even Simon (1971), a forceful early critic of economists’ notion of rationality, regarded it as an appropriate standard of decision-making, though he believed that it was unattainable for most people for various cognitive and other reasons. To underscore this view, he coined the term “bounded rationality” to refer to departures from the supposedly normative benchmark.

The introduction to this related book might also be of interest:

Rethinking Expectations: The Way Forward for Macroeconomics, Edited by Roman Frydman & Edmund S. Phelps [with entries by Philippe Aghion, Sheila Dow, George W. Evans, Roger E. A. Farmer, Roman Frydman, Michael D. Goldberg, Roger Guesnerie, Seppo Honkapohja, Katarina Juselius, Enisse Kharroubi, Blake LeBaron, Edmund S. Phelps, John B. Taylor, Michael Woodford, and Gylfi Zoega].

The introduction is here: Which Way Forward for Macroeconomics and Policy Analysis?