Category Archive for: Macroeconomics

Tuesday, August 13, 2013

Friedman's Legacy: The New Monetarist's View

I guess we should give the New Monetarists a chance to weigh in on Milton Friedman's legacy and influence (their name -- New Monetarists -- should give you some idea where this is headed...I cut the specific arguments short, but they can be found at the original post):

Friedman's Legacy, by Stephen Williamson, New Monetarist Economics: I'm not sure why, but there has been a lot of blogosphere writing on Milton Friedman recently... Randy Wright once convinced me that we should call ourselves New Monetarists, and we wrote a couple of papers (this one, and this one) in which we try to get a grip on what that means. As New Monetarists, we think we have something to say about Friedman.

We can find plenty of faults in Friedman's ideas, but those ideas - reflected in Friedman's theoretical and empirical work - are deeply embedded in much of what we do as economists in the 21st century. By modern standards, Friedman was a crude economic theorist, but he used the simple tools he had available to develop deep ideas that were later fleshed out in fully-articulated economic models. His empirical work was highly influential and serves as a key reference point for some sub-fields in economics. Some examples:

1. Permanent Income Theory...

2. The Friedman rule: Don't confuse this with the constant money growth rule, which comes from "The Role of Monetary Policy." The "Friedman rule" is the policy rule in the "Optimum Quantity of Money" essay. Basically, the nominal interest rate reflects a distortion. Eliminating that distortion requires reducing the nominal interest rate to zero in all states of the world, and that's what monetary policy should be aimed at doing... We can think of plenty of good reasons why optimal monetary policy could take us away from the Friedman rule in practice, but whenever someone makes an argument for some monetary policy rule, we have to first ask the question: why isn't that rule the Friedman rule? The Friedman rule is fundamental in monetary theory.

3. Monetary history: Friedman and Schwartz's "Monetary History of the United States" was monumental. ...

4. Policy rules: The rule that Friedman wanted central banks to follow was not the Friedman rule, but a constant-money-growth rule... Friedman was successful in getting the rule adopted by central banks in the 1970s and 1980s, but the rule was a practical failure, for reasons that are well-understood. But Friedman got macroeconomists and policymakers thinking about policy rules and how they work. Out of that thinking came ideas about central bank commitment, Taylor rules, inflation targeting, nominal GDP targeting, thresholds, etc., that form the basis for modern analysis of central bank policy.

5. Money and Inflation: ... Friedman played a key role in convincing economists and policymakers that central banks could, and should, control inflation. That seems as natural today as saying that rain falls from the sky, and that's part of Friedman's influence.

6. Narrow banking: I tend to think this was one of Friedman's bad ideas, but it's been very influential. Friedman advocated a 100% reserve requirement in "A Program for Monetary Stability." ...

7. Counterpoint to Keynesian economics: Some people seem to think that Friedman was actually a Keynesian at heart, but he sure got on Tobin's nerves. Criticism is important - it helps to prevent and root out lazy science. Old Keynesian economics was probably much better - e.g. there would have been no "neoclassical synthesis" - because of Friedman.

If anyone wants to argue that Friedman is now unimportant for modern economics, that's like saying Bob Dylan is unimportant for modern music. Today, Bob Dylan is quite willing to climb on a stage and perform with a world-class group of musicians - but it's truly pathetic. Nevertheless, Bob Dylan doesn't get booed off the stage today, because people recognize his importance. In the 1960s, he got people riled up, everyone paid attention, and the world is much different today than it would have been if he had not done the work he did.

Wednesday, August 07, 2013

(1) Numerical Methods, (2) Pigou, Samuelson, Solow, & Friedman vs Keynes and Krugman

Robert Waldmann:

...Another thing, what about numerical methods?  Macro was totally taken over by computer simulations. This liberated it (so that anything could happen) but also ruined the fun. When computers were new and scary, simulation based macro was scary and high status. When everyone can do it, setting up a model and simulating just doesn't demonstrate brains as effectively as finding one of the two or three special cases with closed form solutions and then presenting them. Also simulating unrealistic models is really pointless. People end up staring at the computer output and trying to think up stories which explain what went on in the computer. If one is reduced to that, one might as well look at real data. Models which can't be solved don't clarify thought. Since they also don't fit the data, they are really truly madly useless.

And one more:

Pigou, Samuelson, Solow, & Friedman vs Keynes and Krugman: Thoma Bait
I might as well be honest. I am posting this here rather than at rjwaldmann.blogspot.com, because I think it is the sort of thing to which Mark Thoma links and my standing among the bears is based entirely on the fact that Thoma occasionally links to me.
I think that Pigou, Samuelson, Solow and Friedman all assumed that the marginal propensity to consume out of wealth must, on average, be higher for nominal creditors than for nominal debtors. I think this is a gross error which shows how the representative consumer (invented by Samuelson) had done devastating damage already by 1960.
The topic is the Pigou effect versus the liquidity trap. ...

Guess I should send you there to read it.

Saturday, June 29, 2013

'DSGE Models and Their Use in Monetary Policy'

Mike Dotsey at the Philadelphia Fed:

DSGE Models and Their Use in Monetary Policy: The past 10 years or so have witnessed the development of a new class of models that are proving useful for monetary policy: dynamic stochastic general equilibrium (DSGE) models. The pioneering central bank, in terms of using these models in the formulation of monetary policy, is the Sveriges Riksbank, the central bank of Sweden. Following in the Riksbank’s footsteps, a number of other central banks have incorporated DSGE models into the monetary policy process, among them the European Central Bank, Norges Bank (the Norwegian central bank), and the Federal Reserve.
This article will discuss the major features of DSGE models and why these models are useful to monetary policymakers. It will indicate the general way in which they are used in conjunction with other tools commonly employed by monetary policymakers. ...

Saturday, June 22, 2013

'Debased Economics'

I need a quick post today, so I'll turn to the most natural blogger I can think of, Paul Krugman:

Debased Economics: John Boehner’s remarks on recent financial events have attracted a lot of unfavorable comment, and they should. ... I mean, he’s the Speaker of the House at a time when economic issues are paramount; shouldn’t he have basic familiarity with simple economic terms?
But the main thing is that he’s clinging to a story about monetary policy that has been refuted by experience about as thoroughly as any economic doctrine of the past century. Ever since the Fed began trying to respond to the financial crisis, we’ve had dire warnings about looming inflationary disaster. When the GOP took the House, it promptly called Bernanke in to lecture him about debasing the dollar. Yet inflation has stayed low, and the dollar has remained strong — just as Keynesians said would happen.
Yet there hasn’t been a hint of rethinking from leading Republicans; as far as anyone can tell, they still get their monetary ideas from Atlas Shrugged.
Oh, and this is another reminder to the “market monetarists”, who think that they can be good conservatives while advocating aggressive monetary expansion to fight a depressed economy: sorry, but you have no political home. In fact, not only aren’t you making any headway with the politicians, even mainstream conservative economists like Taylor and Feldstein are finding ways to advocate tighter money despite low inflation and high unemployment. And if reality hasn’t dented this dingbat orthodoxy yet, it never will.

I'll be offline the rest of today ...

Sunday, June 02, 2013

The Aggregate Supply—Aggregate Demand Model in the Econ Blogosphere

Peter Dorman would like to know if he's wrong:

Why You Don’t See the Aggregate Supply—Aggregate Demand Model in the Econ Blogosphere: Introductory textbooks are supposed to give you simplified versions of the models that professionals use in their own work. The blogosphere is a realm where people from a range of backgrounds discuss current issues often using simplified concepts so everyone can be on the same page.
But while the dominant framework used in introductory macro textbooks is aggregate supply—aggregate demand (AS-AD), it is almost never mentioned in the econ blogs. My guess is that anyone who tried to make an argument about current macropolicy using an AS-AD diagram would just invite snickers. This is not true on the micro side, where it’s perfectly normal to make an argument with a standard issue, partial equilibrium supply and demand diagram. What’s going on here?
I’ve been writing the part of my textbook where I describe what happened in macro during the period from the mid 70s to the mid 00s, and part of the story is the rise of textbook AS-AD. Here’s the line I take:
The dominant macro model, now crystallized in DSGE, is much too complex for intro students. It is based on intertemporal optimization and general equilibrium theory. There is no possible way to explain it to students in their first exposure to economics. But the mainstream has rejected the old income-expenditure models that graced intro texts in the 1970s and were, in skeleton form, the basis for the forecasting models used back in those days. So what to do?
The solution has been to use AS-AD as a placeholder. It allows instructors to talk about both prices and quantities in a rough market context. By putting Y on one axis and P on another, you can locate any macroeconomic outcome in the upper-right quadrant. It gets students “thinking like economists”.
Unfortunately the model is unsound. If you dig into it you find contradictions that can’t be papered over. One example is that the AS curve depends on the idea that input prices for firms systematically lag output prices, but do you really want to argue the theoretical and empirical case for this? Or try the AD assumption that, even as the price level and real output in the economy go up or down, the money supply remains fixed.
That’s why AS-AD is simply a placeholder. It has no intrinsic value as an economic model. No one uses it for policy purposes. It can’t be found in the econ blogs. It’s not a stripped down version of DSGE. Its only role is to occupy student brain cells until the real work of macroeconomic instruction can begin in a more advanced course.
If I’m wrong I’d like to know before I cut off all lines of retreat.

This won't fully answer the question (many DSGE adherents deny the existence of something called an AD curve), but here are a few counterexamples. One from today (here), and two from the past (via Tim Duy here and here).

Update: Paul Krugman comments here.

Wednesday, May 29, 2013

'DSGE + Financial Frictions = Macro that Works?'

This is a brief follow-up to this post from Noah Smith (see this post for the abstract to the Marco Del Negro, Marc P. Giannoni, and Frank Schorfheide paper he discusses):

DSGE + financial frictions = macro that works?: In my last post, I wrote:

So far, we don't seem to have gotten a heck of a lot of a return from the massive amount of intellectual capital that we have invested in making, exploring, and applying [DSGE] models. In principle, though, there's no reason why they can't be useful.

One of the areas I cited was forecasting. In addition to the studies I cited by Refet Gurkaynak, many people have criticized macro models for missing the big recession of 2008Q4-2009. For example, in this blog post, Volker Wieland and Maik Wolters demonstrate how DSGE models failed to forecast the big recession, even after the financial crisis itself had happened...

This would seem to be a problem.

But it's worth it to note that, since the 2008 crisis, the macro profession does not seem to have dropped DSGE like a dirty dishrag. ... Why are they not abandoning DSGE? Many "sociological" explanations are possible, of course - herd behavior, sunk cost fallacy, hysteresis and heterogeneous human capital (i.e. DSGE may be all they know how to do), and so on. But there's also another possibility, which is that maybe DSGE models, augmented by financial frictions, really do have promise as a technology.

This is the position taken by Marco Del Negro, Marc P. Giannoni, and Frank Schorfheide of the New York Fed. In a 2013 working paper, they demonstrate that a certain DSGE model was able to forecast the big post-crisis recession.

The model they use is a combination of two existing models: 1) the famous and popular Smets-Wouters (2007) New Keynesian model that I discussed in my last post, and 2) the "financial accelerator" model of Bernanke, Gertler, and Gilchrist (1999). They find that this hybrid financial New Keynesian model is able to predict the recession pretty well as of 2008Q3! Check out these graphs (red lines are 2008Q3 forecasts, dotted black lines are real events):

I don't know about you, but to me that looks pretty darn good!
I don't want to downplay or pooh-pooh this result. I want to see this checked carefully, of course, with some tables that quantify the model's forecasting performance, including its long-term forecasting performance. I will need more convincing, as will the macroeconomics profession and the world at large. And forecasting is, of course, not the only purpose of macro models. But this does look really good, and I think it supports my statement that "in principle, there is no reason why [DSGEs] can't be useful." ...
However, I do have an observation to make. The Bernanke et al. (1999) financial-accelerator model has been around for quite a while. It was certainly around well before the 2008 crisis. And we had certainly had financial crises before, as had many other countries. Why was the Bernanke model not widely used to warn of the economic dangers of a financial crisis? Why was it not universally used for forecasting? Why are we only looking carefully at financial frictions after they blew a giant gaping hole in the world economy?
It seems to me that it must have to do with the scientific culture of macroeconomics. If macro as a whole had demanded good quantitative results from its models, then people would not have been satisfied with the pre-crisis finance-less New Keynesian models, or with the RBC models before them. They would have said "This approach might work, but it's not working yet, let's keep changing things to see what does work." Of course, some people said this, but apparently not enough.
Instead, my guess is that many people in the macro field were probably content to use DSGE models for storytelling purposes, and had little hope that the models could ever really forecast the actual economy. With low expectations, people didn't push to improve the existing models as hard as they might have. But that is just my guess; I wasn't really around.
So to people who want to throw DSGE in the dustbin of history, I say: You might want to rethink that. But to people who view the del Negro paper as a vindication of modern macro theory, I say: Why didn't we do this back in 2007? And are we condemned to "always fight the last war"?

My take on why these models weren't used is a bit different.

My argument all along has been that we had the tools and models to explain what happened, but we didn't understand that this particular combination of models -- standard DSGE augmented by financial frictions -- was the important model to use. As I'll note below, part of the reason was empirical -- the evidence did matter (though it was not interpreted correctly) -- but the bigger problem was that our arrogance caused us to overlook the important questions.

There are many, many "modules" we can plug into a model to make it do various things. Need to propagate a shock, i.e. make it persist over time? Toss in an adjustment cost of some sort (there are other ways to do this as well). Do you need changes in monetary policy to affect real output? Insert a price, wage, or information friction. And so on.
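To make the "toss in an adjustment cost" point concrete, here is a minimal sketch (my own illustration, not taken from any particular paper; the persistence parameter is made up) of how a partial-adjustment friction turns a one-period shock into a persistent response:

```python
import numpy as np

T = 20
shock = np.zeros(T)
shock[1] = 1.0                      # a one-time shock in period 1

# No propagation mechanism: the effect disappears as soon as the shock does.
y_flex = shock.copy()

# Partial adjustment (a stand-in for an adjustment cost): only part of last
# period's response dies out each period, so the shock persists.
rho = 0.7                           # illustrative persistence parameter
y_sticky = np.zeros(T)
for t in range(1, T):
    y_sticky[t] = rho * y_sticky[t - 1] + shock[t]

print("period   no friction   with partial adjustment")
for t in range(8):
    print(f"{t:6d}   {y_flex[t]:11.2f}   {y_sticky[t]:23.2f}")
```

With rho = 0 the response dies immediately; with rho closer to one the shock lingers for many periods, which is all this particular "module" is asked to deliver.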

Unfortunately, adding every possible complication to make one grand model that explains everything is way too hard and complex. That's not possible. Instead, depending upon the questions we ask, we put these pieces together in particular ways to isolate the important relationships, and ignore the more trivial ones. This is the art of model building, to isolate what is important and provide insight into the question of interest.

We could have put the model described above together before the crisis; all of the pieces were there, and some people did things along these lines. But this was not the model most people used. Why? Because we didn't think the question was important. We didn't think that financial frictions were an important feature of modern business cycles because technology and deregulation had mostly solved this problem. If the banking system couldn't collapse, why build and emphasize models that say it will? (The empirical evidence for the financial frictions channel was a bit wobbly, and that was also part of the reason these models were not emphasized. But that evidence was based upon normal times, not deep recessions, and it didn't tell us as much as we thought about the usefulness of models that incorporate financial frictions.)

Ex-post, it's easy to look back and say aha -- this was the model that would have worked. Ex-ante, the problem is much harder. Will the next big recession be driven by a financial collapse? If so, then a model like this might be useful. But what if the shock comes from some other source? Is that shock in the model? When the time comes, will we be asking the right questions, and hence building models that can help to answer them, or will we be focused on the wrong thing -- fighting the last war? We have the tools and techniques to build all sorts of models, but they won't do us much good if we aren't asking the right questions.

How do we do that? We must, I think, have a strong sense of history: at a minimum, be able to look back and understand how various economic downturns happened and be sure those "modules" are in the baseline model. And we also need to have the humility to understand that we probably haven't progressed so much that it (e.g. a financial collapse) can't happen again. History alone is not enough, of course; new things can always happen -- things where history provides little guidance -- but we should at least incorporate things we know can be problematic.

It wasn't our tools and techniques that failed us prior to the Great Recession. It was our arrogance, our belief that we had solved the problem of financial meltdowns through financial innovation, deregulation, and the like that closed our eyes to the important questions we should have been asking. We are asking them now, but is that enough? What else should we be asking?

'Inflation in the Great Recession and New Keynesian Models'

DSGE models are "surprisingly accurate":

Inflation in the Great Recession and New Keynesian Models, by Marco Del Negro, Marc P. Giannoni, and Frank Schorfheide: It has been argued that existing DSGE models cannot properly account for the evolution of key macroeconomic variables during and following the recent Great Recession, and that models in which inflation depends on economic slack cannot explain the recent muted behavior of inflation, given the sharp drop in output that occurred in 2008-09. In this paper, we use a standard DSGE model available prior to the recent crisis and estimated with data up to the third quarter of 2008 to explain the behavior of key macroeconomic variables since the crisis. We show that as soon as the financial stress jumped in the fourth quarter of 2008, the model successfully predicts a sharp contraction in economic activity along with a modest and more protracted decline in inflation. The model does so even though inflation remains very dependent on the evolution of both economic activity and monetary policy. We conclude that while the model considered does not capture all short-term fluctuations in key macroeconomic variables, it has proven surprisingly accurate during the recent crisis and the subsequent recovery. [pdf]

Saturday, May 25, 2013

'The Hangover Theory'

Robert Waldmann's comments on the response to Michael Kinsley remind me of this old article from Paul Krugman (I've posted this before, but it seems like a good time to post it again -- it was written in 1998 and it foreshadows/debunks many of the bad arguments used to justify austerity, etc.):

The Hangover Theory: A few weeks ago, a journalist devoted a substantial part of a profile of yours truly to my failure to pay due attention to the "Austrian theory" of the business cycle--a theory that I regard as being about as worthy of serious study as the phlogiston theory of fire. Oh well. But the incident set me thinking--not so much about that particular theory as about the general worldview behind it. Call it the overinvestment theory of recessions, or "liquidationism," or just call it the "hangover theory." It is the idea that slumps are the price we pay for booms, that the suffering the economy experiences during a recession is a necessary punishment for the excesses of the previous expansion.
The hangover theory is perversely seductive--not because it offers an easy way out, but because it doesn't. It turns the wiggles on our charts into a morality play, a tale of hubris and downfall. And it offers adherents the special pleasure of dispensing painful advice with a clear conscience, secure in the belief that they are not heartless but merely practicing tough love.
Powerful as these seductions may be, they must be resisted--for the hangover theory is disastrously wrongheaded. Recessions are not necessary consequences of booms. They can and should be fought, not with austerity but with liberality--with policies that encourage people to spend more, not less. Nor is this merely an academic argument: The hangover theory can do real harm. Liquidationist views played an important role in the spread of the Great Depression--with Austrian theorists such as Friedrich von Hayek and Joseph Schumpeter strenuously arguing, in the very depths of that depression, against any attempt to restore "sham" prosperity by expanding credit and the money supply. And these same views are doing their bit to inhibit recovery in the world's depressed economies at this very moment.
The many variants of the hangover theory all go something like this: In the beginning, an investment boom gets out of hand. Maybe excessive money creation or reckless bank lending drives it, maybe it is simply a matter of irrational exuberance on the part of entrepreneurs. Whatever the reason, all that investment leads to the creation of too much capacity--of factories that cannot find markets, of office buildings that cannot find tenants. Since construction projects take time to complete, however, the boom can proceed for a while before its unsoundness becomes apparent. Eventually, however, reality strikes--investors go bust and investment spending collapses. The result is a slump whose depth is in proportion to the previous excesses. Moreover, that slump is part of the necessary healing process: The excess capacity gets worked off, prices and wages fall from their excessive boom levels, and only then is the economy ready to recover.
Except for that last bit about the virtues of recessions, this is not a bad story about investment cycles. Anyone who has watched the ups and downs of, say, Boston's real estate market over the past 20 years can tell you that episodes in which overoptimism and overbuilding are followed by a bleary-eyed morning after are very much a part of real life. But let's ask a seemingly silly question: Why should the ups and downs of investment demand lead to ups and downs in the economy as a whole? Don't say that it's obvious--although investment cycles clearly are associated with economywide recessions and recoveries in practice, a theory is supposed to explain observed correlations, not just assume them. And in fact the key to the Keynesian revolution in economic thought--a revolution that made hangover theory in general and Austrian theory in particular as obsolete as epicycles--was John Maynard Keynes' realization that the crucial question was not why investment demand sometimes declines, but why such declines cause the whole economy to slump.
Here's the problem: As a matter of simple arithmetic, total spending in the economy is necessarily equal to total income (every sale is also a purchase, and vice versa). So if people decide to spend less on investment goods, doesn't that mean that they must be deciding to spend more on consumption goods--implying that an investment slump should always be accompanied by a corresponding consumption boom? And if so why should there be a rise in unemployment?
Most modern hangover theorists probably don't even realize this is a problem for their story. Nor did those supposedly deep Austrian theorists answer the riddle. The best that von Hayek or Schumpeter could come up with was the vague suggestion that unemployment was a frictional problem created as the economy transferred workers from a bloated investment goods sector back to the production of consumer goods. (Hence their opposition to any attempt to increase demand: This would leave "part of the work of depression undone," since mass unemployment was part of the process of "adapting the structure of production.") But in that case, why doesn't the investment boom--which presumably requires a transfer of workers in the opposite direction--also generate mass unemployment? And anyway, this story bears little resemblance to what actually happens in a recession, when every industry--not just the investment sector--normally contracts.
As is so often the case in economics (or for that matter in any intellectual endeavor), the explanation of how recessions can happen, though arrived at only after an epic intellectual journey, turns out to be extremely simple. A recession happens when, for whatever reason, a large part of the private sector tries to increase its cash reserves at the same time. Yet, for all its simplicity, the insight that a slump is about an excess demand for money makes nonsense of the whole hangover theory. For if the problem is that collectively people want to hold more money than there is in circulation, why not simply increase the supply of money? You may tell me that it's not that simple, that during the previous boom businessmen made bad investments and banks made bad loans. Well, fine. Junk the bad investments and write off the bad loans. Why should this require that perfectly good productive capacity be left idle?
The hangover theory, then, turns out to be intellectually incoherent; nobody has managed to explain why bad investments in the past require the unemployment of good workers in the present. Yet the theory has powerful emotional appeal. Usually that appeal is strongest for conservatives, who can't stand the thought that positive action by governments (let alone--horrors!--printing money) can ever be a good idea. Some libertarians extol the Austrian theory, not because they have really thought that theory through, but because they feel the need for some prestigious alternative to the perceived statist implications of Keynesianism. And some people probably are attracted to Austrianism because they imagine that it devalues the intellectual pretensions of economics professors. But moderates and liberals are not immune to the theory's seductive charms--especially when it gives them a chance to lecture others on their failings.
Few Western commentators have resisted the temptation to turn Asia's economic woes into an occasion for moralizing on the region's past sins. How many articles have you read blaming Japan's current malaise on the excesses of the "bubble economy" of the 1980s--even though that bubble burst almost a decade ago? How many editorials have you seen warning that credit expansion in Korea or Malaysia is a terrible idea, because after all it was excessive credit expansion that created the problem in the first place?
And the Asians--the Japanese in particular--take such strictures seriously. One often hears that Japan is adrift because its politicians refuse to make hard choices, to take on vested interests. The truth is that the Japanese have been remarkably willing to make hard choices, such as raising taxes sharply in 1997. Indeed, they are in trouble partly because they insist on making hard choices, when what the economy really needs is to take the easy way out. The Great Depression happened largely because policy-makers imagined that austerity was the way to fight a recession; the not-so-great depression that has enveloped much of Asia has been worsened by the same instinct. Keynes had it right: Often, if not always, "it is ideas, not vested interests, that are dangerous for good or evil."

Thursday, May 16, 2013

New Research in Economics: Robust Stability of Monetary Policy Rules under Adaptive Learning

I have had several responses to my offer to post write-ups of new research that I'll be posting over the next few days (thanks!), but I thought I'd start with a forthcoming paper from a former graduate student here at the University of Oregon, Eric Gaus:

Robust Stability of Monetary Policy Rules under Adaptive Learning, by Eric Gaus, forthcoming, Southern Economic Journal: Adaptive learning has been used to assess the viability of a variety of monetary policy rules. If agents using simple econometric forecasts "learn" the rational expectations solution of a theoretical model, then researchers conclude the monetary policy rule is a viable alternative. For example, Duffy and Xiao (2007) find that if monetary policy makers minimize a loss function of inflation, interest rates, and the output gap, then agents in a simple three-equation model of the macroeconomy learn the rational expectations solution. On the other hand, Evans and Honkapohja (2009) demonstrate that this may not always be the case. The key difference between the two papers is an assumption about what information the agents of the model have access to. Duffy and Xiao (2007) assume that monetary policy makers have access to contemporaneous variables, that is, they adjust interest rates to current inflation and output. Evans and Honkapohja (2009) instead assume that agents can only form expectations of contemporaneous variables. Another difference between these two papers is that in Duffy and Xiao (2007) agents use all the past data they have access to, whereas in Evans and Honkapohja (2009) agents use a fixed window of data.
This paper examines several different monetary policy rules under a learning mechanism that changes how much data agents are using. It turns out that as long as the monetary policy makers are able to see contemporaneous endogenous variables (output and inflation), the Duffy and Xiao (2007) results hold. However, if agents and policy makers use expectations of current variables, then many of the policy rules are not "robustly stable" in the terminology of Evans and Honkapohja (2009).
A final result in the paper is that the switching learning mechanism can create unpredictable temporary deviations from rational expectations. This is a rather startling result since the source of the deviations is completely endogenous. The deviations appear in a model where there are no structural breaks or multiple equilibria or even any intention of generating such deviations. This result suggests that policymakers should be concerned with the potential that expectations, and expectations alone, can create exotic behavior that temporarily strays from the REE.
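As a rough illustration of the kind of adaptive learning being discussed (this is not Gaus's model or code, and the model and parameters are made up), here is a minimal sketch in which agents forecast with recursive least squares in a simple expectational model; the decreasing 1/t gain corresponds to agents using all past data, and the fixed-window case amounts to replacing it with a constant gain:

```python
import numpy as np

rng = np.random.default_rng(0)

# Simple expectational model: p_t = mu + alpha * E_{t-1} p_t + noise.
# Its rational expectations equilibrium (REE) forecast is mu / (1 - alpha).
mu, alpha, sigma = 2.0, 0.5, 0.1    # illustrative parameters (alpha < 1)
T = 5000

a = 0.0                             # agents' perceived law of motion: a constant forecast
for t in range(1, T + 1):
    p = mu + alpha * a + sigma * rng.standard_normal()
    a += (1.0 / t) * (p - a)        # recursive least squares update, decreasing gain

print("learned forecast:", round(a, 3))
print("REE forecast    :", round(mu / (1 - alpha), 3))
```

Here learning converges to the REE; the paper's point is that under other information assumptions, or with a gain that switches over time, convergence can fail or temporary deviations can appear.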

Wednesday, May 08, 2013

What is Wrong (and Right) in Economics?

Dani Rodrik:

What is wrong (and right) in economics?, by Dani Rodrik: The World Economics Association recently interviewed me on the state of economics, inquiring about my views on pluralism in the profession. You can find the result on the WEA's newsletter here (the interview starts on page 9). I reproduce it below. ...

Tuesday, May 07, 2013

Seven Myths about Keynesian Economics

The recent blow-up surrounding Niall Ferguson's comments questioning Keynes' concern for long-run issues prompted my latest column:

Seven Myths about Keynesian Economics

The claim that Keynesians are indifferent to the long-run is one of many myths about Keynesian economics.

Saturday, May 04, 2013

'Keynes, Keynesians, the Long Run, and Fiscal Policy'

Paul Krugman on how to tell when someone is "pretending to be an authority on economics":

Keynes, Keynesians, the Long Run, and Fiscal Policy: One dead giveaway that someone pretending to be an authority on economics is in fact faking it is misuse of the famous Keynes line about the long run. Here’s the actual quote:

But this long run is a misleading guide to current affairs. In the long run we are all dead. Economists set themselves too easy, too useless a task if in tempestuous seasons they can only tell us that when the storm is long past the ocean is flat again.

As I’ve written before, Keynes’s point here is that economic models are incomplete, suspect, and not much use if they can’t explain what happens year to year, but can only tell you where things will supposedly end up after a lot of time has passed. It’s an appeal for better analysis, not for ignoring the future; and anyone who tries to make it into some kind of moral indictment of Keynesian thought has forfeited any right to be taken seriously. ...

I thought the target of these remarks had forfeited any right to be taken seriously long ago (except, of course and unfortunately, by Very Serious People). [Krugman goes on to tackle several other topics.]

'Microfounded Social Welfare Functions'

This is very wonkish, but it's also very important. The issue is whether DSGE models used for policy analysis can properly capture the relative costs of deviations of inflation and output from target. Simon Wren-Lewis argues -- and I very much agree -- that the standard models are not a very good guide to policy because they vastly overstate the cost of inflation relative to the cost of output (and employment) fluctuations (see the original for the full argument and links to source material):

Microfounded Social Welfare Functions, by Simon Wren-Lewis: More on Beauty and Truth for economists

... Woodford’s derivation of social welfare functions from representative agent’s utility ... can tell us some things that are interesting. But can it provide us with a realistic (as opposed to model consistent) social welfare function that should guide many monetary and fiscal policy decisions? Absolutely not. As I noted in that recent post, these derived social welfare functions typically tell you that deviations of inflation from target are much more important than output gaps - ten or twenty times more important. If this was really the case, and given the uncertainties surrounding measurement of the output gap, it would be tempting to make central banks pure (not flexible) inflation targeters - what Mervyn King calls inflation nutters.

Where does this result come from? ... Many DSGE models use sticky prices and not sticky wages, so labour markets clear. They tend, partly as a result, to assume labour supply is elastic. Gaps between the marginal product of labor and the marginal rate of substitution between consumption and leisure become small. Canzoneri and coauthors show here how sticky wages and more inelastic labour supply will increase the cost of output fluctuations... Canzoneri et al argue that labour supply inelasticity is more consistent with micro evidence.

Just as important, I would suggest, is heterogeneity. The labour supply of many agents is largely unaffected by recessions, while others lose their jobs and become unemployed. Now this will matter in ways that models in principle can quantify. Large losses for a few are more costly than the same aggregate loss equally spread. Yet I believe even this would not come near to describing the unhappiness the unemployed actually feel (see Chris Dillow here). For many there is a psychological/social cost to unemployment that our standard models just do not capture. Other evidence tends to corroborate this happiness data.

So there are two general points here. First, simplifications made to ensure DSGE analysis remains tractable tend to diminish the importance of output gap fluctuations. Second, the simple microfoundations we use are not very good at capturing how people feel about being unemployed. What this implies is that conclusions about inflation/output trade-offs, or the cost of business cycles, derived from microfounded social welfare functions in DSGE models will be highly suspect, and almost certainly biased.

Now I do not want to use this as a stick to beat up DSGE models, because often there is a simple and straightforward solution. Just recalculate any results using an alternative social welfare function where the cost of output gaps is equal to the cost of inflation. For many questions addressed by these models, results will be robust, which is worth knowing. If they are not, that is worth knowing too. So it's a virtually costless thing to do, with clear benefits.

Yet it is rarely done. I suspect the reason why is that a referee would say ‘but that ad hoc (aka more realistic) social welfare function is inconsistent with the rest of your model. Your complete model becomes internally inconsistent, and therefore no longer properly microfounded.’ This is so wrong. It is modelling what we can microfound, rather than modelling what we can see. Let me quote Caballero...

“[This suggests a discipline that] has become so mesmerized with its own internal logic that it has begun to confuse the precision it has achieved about its own world with the precision that it has about the real one.”

As I have argued before (post here, article here), those using microfoundations should be pragmatic about the need to sometimes depart from those microfoundations when there are clear reasons for doing so. (For an example of this pragmatic approach to social welfare functions in the context of US monetary policy, see this paper by Chen, Kirsanova and Leith.) The microfoundation purist position is a snake charmer, and has to be faced down.
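Wren-Lewis's robustness check is easy to operationalize. Here is a minimal sketch, with made-up paths for inflation and the output gap, of recomputing a quadratic loss under a "microfounded" weight on the output gap versus an equal weight:

```python
import numpy as np

rng = np.random.default_rng(1)

# Made-up deviations from target (percentage points) over 40 quarters.
inflation  = 0.5 * rng.standard_normal(40)
output_gap = 2.0 * rng.standard_normal(40)   # gaps are typically more volatile

def avg_loss(pi, gap, lam):
    """Average quadratic loss: pi^2 + lam * gap^2."""
    return np.mean(pi**2 + lam * gap**2)

# A 'microfounded' weight: output gaps count roughly one-twentieth as much.
print("lambda = 0.05:", round(avg_loss(inflation, output_gap, 0.05), 3))
# The suggested robustness check: weight gaps and inflation equally.
print("lambda = 1.00:", round(avg_loss(inflation, output_gap, 1.00), 3))
```

If a policy comparison survives the switch from the first weight to the second, it is robust in the sense described above; if it flips, that is worth knowing too.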



Friday, May 03, 2013

Romer and Stiglitz on the State of Macroeconomics

Two essays on the state of macroeconomics:

First, David Romer argues our recent troubles are an extreme version of an ongoing problem:

... As I will describe, my reading of the evidence is that the events of the past few years are not an aberration, but just the most extreme manifestation of a broader pattern. And the relatively modest changes of the type discussed at the conference, and that in some cases policymakers are putting into place, are helpful but unlikely to be enough to prevent future financial shocks from inflicting large economic harms.
Thus, I believe we should be asking whether there are deeper reforms that might have a large effect on the size of the shocks emanating from the financial sector, or on the ability of the economy to withstand those shocks. But there has been relatively little serious consideration of ideas for such reforms, not just at this conference but in the broader academic and policy communities. ...

He goes on to describe some changes he'd like to see, for example:

I was disappointed to see little consideration of much larger financial reforms. Let me give four examples of possible types of larger reforms:

  • There were occasional mentions of very large capital requirements. For example, Allan Meltzer noted that at one time 25 percent capital was common for banks. Should we be moving to such a system?
  • Amir Sufi and Adair Turner talked about the features of debt contracts that make them inherently prone to instability. Should we be working aggressively to promote more indexation of debt contracts, more equity-like contracts, and so on?
  • We can see the costs that the modern financial system has imposed on the real economy. It is not immediately clear that the benefits of the financial innovations of recent decades have been on a scale that warrants those costs. Thus, might a much simpler, 1960s- or 1970s-style financial system be better than what we have now?
  • The fact that shocks emanating from the financial system sometimes impose large costs on the rest of the economy implies that there are negative externalities to some types of financial activities or financial structures, which suggests the possibility of Pigovian taxes.

So, should there be substantial taxes on certain aspects of the financial system? If so, what should be taxed – debt, leverage, size, other indicators of systemic risk, a combination, or something else altogether?

Larger-scale solutions on the macroeconomic side ...

After a long discussion, he concludes with:

After five years of catastrophic macroeconomic performance, “first steps and early lessons” – to quote the conference title – is not what we should be aiming for. Rather, we should be looking for solutions to the ongoing current crisis and strong measures to minimize the chances of anything similar happening again. I worry that the reforms we are focusing on are too small to do that, and that what is needed is a more fundamental rethinking of the design of our financial system and of our frameworks for macroeconomic policy.

Second, Joe Stiglitz:

In analyzing the most recent financial crisis, we can benefit somewhat from the misfortune of recent decades. The approximately 100 crises that have occurred during the last 30 years—as liberalization policies became dominant—have given us a wealth of experience and mountains of data. If we look over a 150 year period, we have an even richer data set.
With a century and a half of clear, detailed information on crisis after crisis, the burning question is not How did this happen? but How did we ignore that long history, and think that we had solved the problems with the business cycle? Believing that we had made big economic fluctuations a thing of the past took a remarkable amount of hubris....

In his lengthy essay, he goes on to discuss:

Markets are not stable, efficient, or self-correcting

  • The models that focused on exogenous shocks simply misled us—the majority of the really big shocks come from within the economy.
  • Economies are not self-correcting.

More than deleveraging, more than a balance sheet crisis: the need for structural transformation

  • The fact that things have often gone badly in the aftermath of a financial crisis doesn’t mean they must go badly.

Reforms that are, at best, half-way measures

  • The reforms undertaken so far have only tinkered at the edges.
  • The crisis has brought home the importance of financial regulation for macroeconomic stability.

Deficiencies in reforms and in modeling

  • The importance of credit
    • A focus on the provision of credit has been at the center of neither policy discourse nor the standard macro-models.
    • There is also a lack of understanding of different kinds of finance.
  • Stability
  • Distribution

Policy Frameworks

  • Flawed models not only lead to flawed policies, but also to flawed policy frameworks.
  • Should monetary policy focus just on short term interest rates?
  • Price versus quantitative interventions
  • Tinbergen

Stiglitz ends with:

Take this chance to revolutionize flawed models
It should be clear that we could have done much more to prevent this crisis and to mitigate its effects. It should be clear too that we can do much more to prevent the next one. Still, through this conference and others like it, we are at least beginning to clearly identify the really big market failures, the big macroeconomic externalities, and the best policy interventions for achieving high growth, greater stability, and a better distribution of income.
To succeed, we must constantly remind ourselves that markets on their own are not going to solve these problems, and neither will a single intervention like short-term interest rates. Those facts have been proven time and again over the last century and a half.
And as daunting as the economic problems we now face are, acknowledging this will allow us to take advantage of the one big opportunity this period of economic trauma has afforded: namely, the chance to revolutionize our flawed models, and perhaps even exit from an interminable cycle of crises.

Tuesday, April 23, 2013

A New and Improved Macroeconomics

New column:

A New and Improved Macroeconomics, by Mark Thoma: Macroeconomics has not fared well in recent years. The failure of standard macroeconomic models during the financial crisis provided the first big blow to the profession, and the recent discovery of the errors and questionable assumptions in the work of Reinhart and Rogoff further undermined the faith that people have in our models and empirical methods.
What will it take for the macroeconomics profession to do better? ...

Wednesday, April 17, 2013

Empirical Methods and Progress in Macroeconomics

The blow-up over the Reinhart-Rogoff results reminds me of a point I’ve been meaning to make about our ability to use empirical methods to make progress in macroeconomics. This isn't about the computational mistakes that Reinhart and Rogoff made, though those are certainly important, especially in small samples; it's about the quantity and quality of the data we use to draw important conclusions in macroeconomics.

Everybody has been highly critical of theoretical macroeconomic models, DSGE models in particular, and for good reason. But the imaginative construction of theoretical models is not the biggest problem in macro – we can build reasonable models to explain just about anything. The biggest problem in macroeconomics is the inability of econometricians of all flavors (classical, Bayesian) to definitively choose one model over another, i.e. to sort between these imaginative constructions. We like to think of ourselves as scientists, but if data can’t settle our theoretical disputes – and it doesn’t appear that it can – then our claim for scientific validity has little or no merit.

There are many reasons for this. For example, the use of historical rather than “all else equal” laboratory/experimental data makes it difficult to figure out if a particular relationship we find in the data reveals an important truth rather than a chance run that mimics a causal relationship. If we could do repeated experiments or compare data across countries (or other jurisdictions) without worrying about the “all else equal” assumption, we could perhaps sort this out. It would be like repeated experiments. But, unfortunately, there are too many institutional differences and common shocks across countries to reliably treat each country as an independent, all else equal experiment. Without repeated experiments – with just one set of historical data for the US to rely upon – it is extraordinarily difficult to tell the difference between a spurious correlation and a true, noteworthy relationship in the data.
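Here is a minimal sketch of that spurious-correlation problem (my own illustration with made-up data): regress one random walk on another, completely independent, random walk in samples about the size of a postwar quarterly data set, and a conventional t-test "finds" a relationship far more often than its nominal 5 percent.

```python
import numpy as np

rng = np.random.default_rng(2)
T, reps = 200, 1000          # ~50 years of quarterly data, many hypothetical samples

reject = 0
for _ in range(reps):
    # Two completely unrelated trending series (independent random walks).
    x = np.cumsum(rng.standard_normal(T))
    y = np.cumsum(rng.standard_normal(T))
    X = np.column_stack([np.ones(T), x])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    s2 = resid @ resid / (T - 2)
    se = np.sqrt(s2 * np.linalg.inv(X.T @ X)[1, 1])
    if abs(beta[1] / se) > 1.96:          # conventional 5% test on the slope
        reject += 1

print(f"'significant' relationships found in {reject/reps:.0%} of samples, "
      "even though none exists")
```

Nothing here is pathological; it is just what highly persistent series do to standard inference when there is only one historical sample to look at.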

Even so, if we had a very, very long time-series for a single country, and if certain regularity conditions persisted over time (e.g. no structural change), we might be able to answer important theoretical and policy questions (if the same policy is tried again and again over time within a country, we can sort out the random and the systematic effects). Unfortunately, the time period covered by a typical data set in macroeconomics is relatively short (so that very few useful policy experiments are contained in the available data, e.g. there are very few data points telling us how the economy reacts to fiscal policy in deep recessions).

There is another problem with using historical as opposed to experimental data: testing theoretical models against data the researcher knows about when the model is built. In this regard, when I was a new assistant professor Milton Friedman presented some work at a conference that impressed me quite a bit. He resurrected a theoretical paper he had written 25 years earlier (it was his plucking model of aggregate fluctuations), and tested it against the data that had accumulated in the time since he had published his work. It’s not really fair to test a theory against historical macroeconomic data: we all know what the data say, and it would be foolish to build a model that is inconsistent with the historical data it was built to explain; of course the model will fit the data, so who would be impressed by that? But a test against data that the investigator could not have known about when the theory was formulated is a different story – those tests are meaningful (Friedman’s model passed the test using only the newer data).

As a young time-series econometrician struggling with data/degrees of freedom issues I found this encouraging. So what if in 1986 – when I finished graduate school – there were only 28 years of quarterly observations for macro variables (112 observations in total; reliable data on money, which I almost always needed, doesn’t begin until 1959). By, say, the end of 2012 there would be almost double that amount (216 versus 112!!!). Asymptotic (plim-type) results here we come! (Switching to monthly data doesn’t help much since it’s the span of the data – the distance between the beginning and the end of the sample – rather than the frequency at which the data are sampled that determines many of the “large-sample results”).
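Here is a minimal sketch of the span-versus-frequency point (illustrative numbers only): the precision of an estimated trend growth rate depends on how many years the sample covers, not on how finely those years are sampled.

```python
import numpy as np

rng = np.random.default_rng(3)

def drift_se(years, obs_per_year, sigma_annual=2.0, reps=2000):
    """Simulated std. error of the estimated annual drift of a random walk
    observed obs_per_year times per year over a span of `years` years."""
    n = years * obs_per_year
    step_sd = sigma_annual / np.sqrt(obs_per_year)       # innovation sd per interval
    estimates = []
    for _ in range(reps):
        increments = step_sd * rng.standard_normal(n)    # true drift is zero
        estimates.append(increments.mean() * obs_per_year)   # annualized drift estimate
    return np.std(estimates)

print("28 years, quarterly:", round(drift_se(28, 4), 3))
print("28 years, monthly  :", round(drift_se(28, 12), 3))  # finer sampling: no real gain
print("54 years, quarterly:", round(drift_se(54, 4), 3))   # longer span: real gain
```

Moving from quarterly to monthly data leaves the standard error essentially unchanged, while extending the span from 28 to 54 years shrinks it noticeably.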

By today, I thought, I would have almost double the data I had back then and that would improve the precision of tests quite a bit. I could also do what Friedman did, take really important older papers that give us results “everyone knows” and see if they hold up when tested against newer data.

It didn’t work out that way. There was a big change in the Fed’s operating procedure in the early 1980s, and because of this structural break 1984 is now a common starting point for empirical investigations (start dates can be anywhere in the 1979-84 range, though later dates are more common). Data before this period are discarded.

So, here we are 25 years or so later and macroeconomists don’t have any more data at our disposal than we did when I was in graduate school. And if the structure of the economy keeps changing – as it will – the same will probably be true 25 years from now. We will either have to model the structural change explicitly (which isn’t easy, and attempts to model structural breaks often induce as much uncertainty as clarity), or continually discard historical data as time goes on (maybe big data, digital technology, theoretical advances, etc. will help?).

The point is that for a variety of reasons – the lack of experimental data, small data sets, and important structural change foremost among them – empirical macroeconomics is not able to definitively say which competing model of the economy best explains the data. There are some questions we’ve been able to address successfully with empirical methods, e.g., there has been a big change in views about the effectiveness of monetary policy over the last few decades driven by empirical work. But for the most part empirical macro has not been able to settle important policy questions. The debate over government spending multipliers is a good example. Theoretically the multiplier can take a range of values from small to large, and even though most theoretical models in use today say that the multiplier is large in deep recessions, ultimately this is an empirical issue. I think the preponderance of the empirical evidence shows that multipliers are, in fact, relatively large in deep recessions – but you can find whatever result you like and none of the results are sufficiently definitive to make this a fully settled issue.

I used to think that the accumulation of data along with ever improving empirical techniques would eventually allow us to answer important theoretical and policy questions. I haven’t completely lost faith, but it’s hard to be satisfied with our progress to date. It’s even more disappointing to see researchers overlooking these well-known, obvious problems – for example the lack of precision and the sensitivity to data errors that come with reliance on just a few observations – to oversell their results.

Saturday, March 30, 2013

'The Price Is Wrong'

As noted below, it is a slow day, but this is well worth reading (there's quite a bit more in the original post):

The Price Is Wrong, by Paul Krugman: It’s a slow morning on the economic news front, as we wait for various euro shoes to drop, so I thought I’d share a meditation I’ve been having on the diagnosis and misdiagnosis of the Lesser Depression. ...
So, start with our big problem, which is mass unemployment. Basic supply and demand analysis says that ... prices are supposed to rise or fall to clear markets. So what’s with this apparent massive and persistent excess supply of labor? In general, market disequilibrium is a sign of prices out of whack... The big divide comes over the question of which price is wrong.
As I see it, the whole structural/classical/Austrian/supply-side/whatever side of this debate basically believes that the problem lies in the labor market. ... For some reason, they would argue, wages are too high... Some of them accept the notion that it’s because of downward nominal wage rigidity; more, I think, believe that workers are being encouraged to hold out for unsustainable wages by moocher-friendly programs like food stamps, unemployment benefits, disability insurance, and whatever.
As regular readers know, I find this prima facie absurd — it’s essentially the claim that soup kitchens caused the Great Depression. ...
So what’s the alternative view? It’s basically the notion that the interest rate is wrong — that given the overhang of debt and other factors depressing private demand, real interest rates would have to be deeply negative to match desired saving with desired investment at full employment. And real rates can’t go that negative because expected inflation is low and nominal rates can’t go below zero: we’re in a liquidity trap. ..
There are strong policy implications of these two views. If you think the problem is that wages are too high, your solution is that we need to be meaner to workers — cut off their unemployment insurance, make them hungry by cutting off food stamps, so they have no alternative but to do whatever it takes to get jobs, and wages fall. If you think the problem is the zero lower bound on interest rates, you think that this kind of solution wouldn’t just be cruel, it would make the economy worse, both because cutting workers’ incomes would reduce demand and because deflation would increase the burden of debt.
What my side of the debate would call for, instead, is a reduction in the real interest rate, if possible, by raising expected inflation; and failing that, more government spending to increase demand and put idle resources to work. ...
So yes, the price is wrong — but it’s a terrible, disastrous mistake to focus on the wrong wrong price.

Why should workers bear the burden of a recession they had nothing to do with causing? We should do our best to protect vulnerable workers and their families, and if it comes at the expense of those who were responsible for the boom and bust, I can live with that (and no, the cause wasn't poor people trying to buy houses -- people on the right who are afraid they will be asked to pay for their poor choices, or who want to pursue an anti-government, don't-help-the-unfortunate-with-my-hard-earned-investment-income agenda, have tried to make this claim, and they are still at it, but it is "prima facie absurd").

Friday, March 08, 2013

Measuring the Effect of the Zero Lower Bound on Medium- and Longer-Term Interest Rates

Watching John Williams give this paper:

Measuring the Effect of the Zero Lower Bound on Medium- and Longer-Term Interest Rates, by Eric T. Swanson and John C. Williams, Federal Reserve Bank of San Francisco, January 2013: Abstract The federal funds rate has been at the zero lower bound for over four years, since December 2008. According to many macroeconomic models, this should have greatly reduced the effectiveness of monetary policy and increased the efficacy of fiscal policy. However, standard macroeconomic theory also implies that private-sector decisions depend on the entire path of expected future short-term interest rates, not just the current level of the overnight rate. Thus, interest rates with a year or more to maturity are arguably more relevant for the economy, and it is unclear to what extent those yields have been constrained. In this paper, we measure the effects of the zero lower bound on interest rates of any maturity by estimating the time-varying high-frequency sensitivity of those interest rates to macroeconomic announcements relative to a benchmark period in which the zero bound was not a concern. We find that yields on Treasury securities with a year or more to maturity were surprisingly responsive to news throughout 2008–10, suggesting that monetary and fiscal policy were likely to have been about as effective as usual during this period. Only beginning in late 2011 does the sensitivity of these yields to news fall closer to zero. We offer two explanations for our findings: First, until late 2011, market participants expected the funds rate to lift off from zero within about four quarters, minimizing the effects of the zero bound on medium- and longer-term yields. Second, the Fed’s unconventional policy actions seem to have helped offset the effects of the zero bound on medium- and longer-term rates.
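The empirical strategy is easy to sketch. The toy example below (simulated data and my own variable names, not the authors' code or estimates) regresses daily changes in a medium-term yield on standardized announcement surprises in rolling windows and compares each window's sensitivity to a pre-ZLB benchmark; sensitivities near the benchmark suggest the zero bound is not constraining that maturity, while sensitivities near zero suggest it is:

import numpy as np

rng = np.random.default_rng(0)

# Simulated announcement-day data (illustrative only).
T = 2000                                              # announcement days
surprise = rng.normal(0.0, 1.0, T)                    # standardized news surprises
true_beta = np.where(np.arange(T) < 1500, 4.0, 0.5)   # sensitivity collapses late in the sample
dyield = true_beta * surprise + rng.normal(0.0, 3.0, T)   # daily yield change, basis points

def ols_slope(x, y):
    # Slope coefficient from an OLS regression of y on x (with an intercept).
    x = x - x.mean()
    return np.dot(x, y - y.mean()) / np.dot(x, x)

benchmark = ols_slope(surprise[:500], dyield[:500])   # benchmark period: zero bound not a concern

window = 250
for start in range(0, T - window + 1, window):
    b = ols_slope(surprise[start:start + window], dyield[start:start + window])
    print("days %4d-%4d: sensitivity %5.2f, ratio to benchmark %4.2f"
          % (start, start + window, b, b / benchmark))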

Tuesday, March 05, 2013

'Are Sticky Prices Costly? Evidence From The Stock Market'

There has been a debate in macroeconomics over whether sticky prices -- the key feature of New Keynesian models -- are actually as sticky as assumed, and how large the costs associated with price stickiness actually are. This paper finds "evidence that sticky prices are indeed costly":

Are Sticky Prices Costly? Evidence From The Stock Market, by Yuriy Gorodnichenko and Michael Weber, NBER Working Paper No. 18860, February 2013 [open link]: We propose a simple framework to assess the costs of nominal price adjustment using stock market returns. We document that, after monetary policy announcements, the conditional volatility rises more for firms with stickier prices than for firms with more flexible prices. This differential reaction is economically large as well as strikingly robust to a broad array of checks. These results suggest that menu costs---broadly defined to include physical costs of price adjustment, informational frictions, etc.---are an important factor for nominal price rigidity. We also show that our empirical results are qualitatively and, under plausible calibrations, quantitatively consistent with New Keynesian macroeconomic models where firms have heterogeneous price stickiness. Since our approach is valid for a wide variety of theoretical models and frictions preventing firms from price adjustment, we provide "model-free" evidence that sticky prices are indeed costly.
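The comparison in the abstract can be illustrated with a few lines of simulated data (my own made-up numbers and names, not the paper's): if sticky-price firms' returns load more heavily on monetary policy surprises, their post-announcement volatility is higher, and that volatility gap is what the authors read as evidence that stickiness is costly.

import numpy as np

rng = np.random.default_rng(1)

n = 200                                        # policy announcements
policy_surprise = rng.normal(0.0, 1.0, n)      # standardized policy surprises

# Assumed loadings: sticky-price firms react more to a given surprise.
beta_sticky, beta_flexible = 2.0, 0.8
ret_sticky = beta_sticky * policy_surprise + rng.normal(0.0, 1.0, n)
ret_flexible = beta_flexible * policy_surprise + rng.normal(0.0, 1.0, n)

print("post-announcement return volatility, sticky-price firms:   %.2f" % ret_sticky.std())
print("post-announcement return volatility, flexible-price firms: %.2f" % ret_flexible.std())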

Tuesday, February 19, 2013

Big Data?

Paul Krugman:

Data, Stimulus, and Human Nature, by Paul Krugman: David Brooks writes about the limitations of Big Data, and makes some good points. But he goes astray, I think, when he touches on a subject near and dear to my own data-driven heart:

For example, we’ve had huge debates over the best economic stimulus, with mountains of data, and as far as I know not a single major player in this debate has been persuaded by data to switch sides.

Actually, he’s not quite right there, as I’ll explain in a minute. But it’s certainly true that neither stimulus advocates nor hard-line stimulus opponents have changed their positions. The question is, does this say something about the limits of data — or is it just a commentary on human nature, especially in a highly politicized environment?

For the truth is that there were some clear and very different predictions from each side of the debate... On these predictions, the data have spoken clearly; the problem is that people don’t want to hear..., and the fact that they don’t has nothing to do with the limitations of data. ...

That said, if you look at players in the macro debate who would not face huge personal and/or political penalties for admitting that they were wrong, you actually do see data having a considerable impact. Most notably, the IMF has responded to the actual experience of austerity by conceding that it was probably underestimating fiscal multipliers by a factor of about 3.

So yes, it has been disappointing to see so many people sticking to their positions on fiscal policy despite overwhelming evidence that those positions are wrong. But the fault lies not in our data, but in ourselves.

I'll just add that when it comes to the debate over the multiplier and the macroeconomic data used to try to settle the question, the term "Big Data" doesn't really apply. If we actually had "Big Data," we might be able to get somewhere but as it stands -- with so little data and so few relevant historical episodes with similar policies -- precise answers are difficult to ascertain. And it's even worse than that. Let me point to something David Card said in an interview I posted yesterday:

I think many people are concerned that much of the research they see is biased and has a specific agenda in mind. Some of that concern arises because of the open-ended nature of economic research. To get results, people often have to make assumptions or tweak the data a little bit here or there, and if somebody has an agenda, they can inevitably push the results in one direction or another. Given that, I think that people have a legitimate concern about researchers who are essentially conducting advocacy work.

If we had the "Big Data" we need to answer these questions, this would be less of a problem. But with quarterly data from 1960 (when money data starts; you can go back to 1947 otherwise), or since 1982 (to avoid big structural changes and changes in Fed operating procedures), or even monthly data (if you don't need variables like GDP), there isn't as much precision as needed to resolve these questions (50 years of quarterly data is only 200 observations). There is also a lot of freedom to steer the results in a particular direction, and we have to rely upon the integrity of researchers to avoid pushing a particular agenda. Most play it straight up, reporting the answers however they come out, but there are enough voices with agendas -- particularly, though not exclusively, from think tanks, etc. -- to cloud the issues and make it difficult for the public to separate the honest work from the agenda-based, one-sided, sometimes dishonest presentations. And there are also the issues noted above about people sticking to their positions, in their view honestly even if it is the result of data-mining, changing assumptions until the results come out "right," etc., because the data doesn't provide enough clarity to force them to give up their beliefs (in which they've invested considerable effort).

So I wish we had "Big Data," and not just a longer time series of macro data; it would also be useful to re-run the economy hundreds or thousands of times and evaluate monetary and fiscal policies across these experiments. With just one run of the economy, you can't always be sure that the uptick you see in historical data after, say, a tax cut is from the treatment, or just randomness (or driven by something else). With many, many runs of the economy that can be sorted out (cross-country comparisons can help, but the all-else-equal condition is never satisfied, making the comparisons suspect).
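A toy Monte Carlo makes the point concrete (every number here is made up purely for illustration): build a known multiplier into the data-generating process, and a single 200-observation "run of history" can easily deliver an estimate anywhere from roughly 1 to 2, while the truth only emerges across the thousands of reruns we never get.

import numpy as np

rng = np.random.default_rng(42)

true_multiplier = 1.5
T = 200          # roughly 50 years of quarterly data
runs = 5000      # reruns of the economy we never get in practice

estimates = []
for _ in range(runs):
    g = rng.normal(0.0, 1.0, T)        # fiscal shock, percent of GDP
    other = rng.normal(0.0, 5.0, T)    # everything else moving output
    y = true_multiplier * g + other
    gd = g - g.mean()
    estimates.append(np.dot(gd, y - y.mean()) / np.dot(gd, gd))   # OLS multiplier estimate from one "history"

estimates = np.array(estimates)
print("true multiplier:               %.2f" % true_multiplier)
print("average estimate across runs:  %.2f" % estimates.mean())
print("5th-95th percentile of single-run estimates: %.2f to %.2f"
      % tuple(np.percentile(estimates, [5, 95])))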

Despite a few research attempts such as the Billion Prices Project, "Little Data" -- and all the problems that come with it -- is a better description of empirical macroeconomics.

Monday, February 11, 2013

Phelps on Rational Expectations

Ed Phelps does not like rational expectations:

Expecting the Unexpected: An Interview With Edmund Phelps, by Caroline Baum, Commentary, Bloomberg: ...I talked with [Edmund Phelps] ... about his views on rational expectations...
Q: So how did adaptive expectations morph into rational expectations?
A: The "scientists" from Chicago and MIT came along to say, we have a well-established theory of how prices and wages work. Before, we used a rule of thumb to explain or predict expectations: Such a rule is picked out of the air. They said, let's be scientific. In their mind, the scientific way is to suppose price and wage setters form their expectations with every bit as much understanding of markets as the expert economist seeking to model, or predict, their behavior. ...
Q: And what's the consequence of this putsch?
A: Craziness for one thing. You’re not supposed to ask what to do if one economist has one model of the market and another economist a different model. The people in the market cannot follow both economists at the same time. One, if not both, of the economists must be wrong. ... Roman Frydman has made his career uncovering the impossibility of rational expectations in several contexts. ...
When I was getting into economics in the 1950s, we understood there could be times when a craze would drive stock prices very high. Or the reverse... But now that way of thinking is regarded by the rational expectations advocates as unscientific.
By the early 2000s, Chicago and MIT were saying we've licked inflation and put an end to unhealthy fluctuations -- only the healthy “vibrations” in rational expectations models remained. Prices are scientifically determined, they said. Expectations are right and therefore can't cause any mischief.
At a celebration in Boston for Paul Samuelson in 2004 or so, I had to listen to Ben Bernanke and Olivier Blanchard ... crowing that they had conquered the business cycle of old by introducing predictability in monetary policy making, which made it possible for the public to stop generating baseless swings in their expectations and adopt rational expectations...
Q: And how has that worked out?
A: Not well! ...
[There's more in the full interview.]

Friday, January 25, 2013

'Misinterpreting the History of Macroeconomic Thought'

Simon Wren-Lewis argues that the "crisis view" of change in macroeconomic theory is too simple:

Misinterpreting the history of macroeconomic thought, mainly macro: An attractive way to give a broad sweep over the history of macroeconomic ideas is to talk about a series of reactions to crises (see Matthew Klein and Noah Smith). However, it is too simple, and misleads as a result. The Great Depression led to Keynesian economics. So far so good. The inflation of the 1970s led to what? Monetarism - well, maybe in terms of a few brief policy experiments in the early 1980s, but Monetarist-Keynesian debates were going strong before the 1970s. The New Classical revolution? Well, rational expectations can be helpful in adapting the Phillips curve to explain what happened in the 1970s, but I’m not sure that was the main reason why the idea was so rapidly adopted. The New Classical revolution was much more than rational expectations.

The attempt gets really off beam if we try and suggest that the rise of RBC models was a response to the inflation of the 1970s. I guess you could argue that the policy failures of the 1970s were an example of the Lucas critique, and that to avoid similar mistakes macroeconomists needed to develop microfounded models. But if explaining the last crisis really was the prime motivation, would you develop models in which there was no Phillips curve, and which made no attempt to explain the inflation of the 1970s (or indeed, the previous crisis - the Great Depression)?

What the ‘macroeconomic ideas develop as a response to crises’ story leaves out is the rest of economics, and ideology. The Keynesian revolution (by which I mean macroeconomics after the second world war) can be seen as a methodological revolution. Models were informed by theory, but their equations were built to explain the data. Time series econometrics played an essential role. However this appeared to be different from how other areas of the discipline worked. In these other areas of economics, explaining behavior in terms of optimization by individual agents was all important. This created a tension, and a major divide within economics as a whole. Macro appeared quite different from micro.

A particular manifestation of this was the constant question: where is the source of the market failure that gives rise to the business cycle? Most macroeconomists replied sticky prices, but this prompted the follow-up question: why do rational firms or workers choose not to change their prices? The way most macroeconomists at the time chose to answer this was that expectations were slow to adjust. It was a disastrous choice, but I suspect one that had very little to do with the nature of Keynesian theory, and rather more to do with the analytical convenience of adaptive expectations. Anyhow, that is another story.

The New Classical revolution was in part a response to that tension. In methodological terms it was a counter revolution, trying to take macroeconomics away from the econometricians, and bring it back to something microeconomists could understand. Of course it could point to policy in the 1970s as justification, but I doubt that was the driving force. I also think it is difficult to fully understand the New Classical revolution, and the development of RBC models, without adding in some ideology. 

Does this have anything to tell us about how macroeconomics will respond to the Great Recession? I think it does. If you bought the ‘responding to the last crisis’ narrative, you would expect to see some sea change, akin to Keynesian economics or the New Classical revolution. I suspect you would be disappointed. While I see plenty of financial frictions being added to DSGE models, I do not see any significant body of macroeconomists wanting to ply their trade in a radically different way. If this crisis is going to generate a new revolution in macroeconomics, where are the revolutionaries? However, if you read the history of macro thought the way I do, then macro crises are neither necessary nor sufficient for revolutions in macro thought. Perhaps there was only one real revolution, and we have been adjusting to the tensions that created ever since.  

Let me follow up on the ideological point with an example. Prior to the New Classical revolution in the 1970s (which, contra some recent descriptions, is different from DSGE models), the people who believe that government intervention is bad, or at best useless, had a problem. It was very clear in the data that there was a positive correlation between changes in the money supply and changes in employment and real income. Further, though this is harder to establish, the relationship appeared causal. Money causes income, and this implied that government could stabilize the economy.

The (neo)classical model, with its vertical AS curve, could not explain the positive money-income correlation in the data. In the typical classical formulation, so long as prices are perfectly flexible and all markets clear at all points in time, the economy is always in long-run equilibrium. Thus, in these models the prediction is a zero correlation between money and income. But it wasn't zero.

However, a very clever idea from Robert Lucas in the 1970s allowed this correlation to be explained without admitting government can do good, i.e. without admitting that government can stabilize the economy using monetary policy. This is the ideological part -- a way to explain the data without acknowledging a role for government at the same time. I can't say that Lucas approached the problem in this way, i.e. that he started out with the ideological goal of explaining the money-income correlation without allowing a role for government. Maybe it arose in a flash of brilliance completely unconnected to ideological concerns. But I find it hard to explain why this model came about in the form it did without ideology, and the view of government the New Classical model supported surely didn't hurt its acceptance at places like the University of Chicago (as it existed then).
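For readers who want the mechanics, the core of that clever idea can be written in one line (a textbook rendering, not Lucas's exact formulation): output y_t deviates from its natural rate \bar{y} only when the price level p_t differs from what agents expected,

y_t = \bar{y} + \theta (p_t - E_{t-1} p_t), \qquad \theta > 0

Anticipated money growth is already built into E_{t-1} p_t and cancels out, so only monetary surprises move prices relative to expectations and hence output. The model reproduces the positive money-income correlation in the data while implying that systematic, predictable policy cannot be used to stabilize the economy.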

Wednesday, January 09, 2013

'During Periods of Extreme Growth and Decline, Human Behavior is Not the Same'

From an interview of MIT's Andrew Lo:

Q: Many people believe that the financial crisis revealed major shortcomings in the discipline of economics, and one of the goals of your book is to consider what economic theory tells us about the links between finance and the rest of the economy. Do you feel that economists understand enough about the nature of financial instability or liquidity crises?
A: I think that the financial crisis was an important wake-up call to all economists that we need to change the way we approach our discipline. While economics has made great strides in modeling liquidity risk, financial contagion, and market bubbles and crashes, we haven't done a very good job of integrating these models into broader macroeconomic policy tools. That's the focus of a lot of recent activity in macro and financial economics and the hope is that we'll be able to do better in the near future.
Q: Let me continue briefly on this thread. One topic that has been particularly controversial concerns the efficient-market hypothesis (EMH). Burton Malkiel discusses the issue in his chapter in Rethinking the Financial Crisis, but I wanted to ask your opinion of this idea that EMH fed a hands-off regulatory approach that ignored concerns about faulty asset pricing.
A: There's no doubt that EMH and its macroeconomic cousin, Rational Expectations, played a significant role in how regulators approached their responsibilities. However, we should keep in mind that market efficiency isn't wrong; it's just incomplete. Market participants do behave rationally under normal economic conditions, hence the current regulatory framework does serve a useful purpose during these periods. But during periods of extreme growth and decline, human behavior is not the same, and much of economic theory and regulatory policy does not yet reflect this new perspective of "Adaptive Markets."

Monday, January 07, 2013

The Reason We Lose at Games: Implications for Financial Markets

Something to think about:

The reason we lose at games, EurekAlert: Writing in PNAS, a University of Manchester physicist has discovered that some games are simply impossible to fully learn, or too complex for the human mind to understand.
Dr Tobias Galla from The University of Manchester and Professor Doyne Farmer from Oxford University and the Santa Fe Institute, ran thousands of simulations of two-player games to see how human behavior affects their decision-making.
In simple games with a small number of moves, such as Noughts and Crosses, the optimal strategy is easy to guess, and the game quickly becomes uninteresting.
However, when games become more complex and there are a lot of moves, such as in chess, the board game Go or complex card games, the academics argue that players' actions become less rational and that it is hard to find optimal strategies.
This research could also have implications for the financial markets. Many economists base financial predictions of the stock market on equilibrium theory – assuming that traders are infinitely intelligent and rational.
This, the academics argue, is rarely the case and could lead to predictions of how markets react being wildly inaccurate.
Much of traditional game theory, the basis for strategic decision-making, is based on the equilibrium point – players or workers having a deep and perfect knowledge of what they are doing and of what their opponents are doing.
Dr Galla, from the School of Physics and Astronomy, said: "Equilibrium is not always the right thing you should look for in a game."
"In many situations, people do not play equilibrium strategies, instead what they do can look like random or chaotic for a variety of reasons, so it is not always appropriate to base predictions on the equilibrium model."
"With trading on the stock market, for example, you can have thousands of different stock to choose from, and people do not always behave rationally in these situations or they do not have sufficient information to act rationally. This can have a profound effect on how the markets react."
"It could be that we need to drop these conventional game theories and instead use new approaches to predict how people might behave."
Together with a Manchester-based PhD student the pair are looking to expand their study to multi-player games and to cases in which the game itself changes with time, which would be a closer analogy of how financial markets operate.
Preliminary results suggest that as the number of players increases, the chances that equilibrium is reached decrease. Thus for complicated games with many players, such as financial markets, equilibrium is even less likely to be the full story.

Paul Krugman: The Big Fail

Who should be blamed for the slow recovery?:

The Big Fail, by Paul Krugman, Commentary, NY Times: It’s that time again: the annual meeting of the American Economic Association and affiliates... And this year, as in past meetings, there is one theme dominating discussion: the ongoing economic crisis.
This isn’t how things were supposed to be. If you had polled the economists attending this meeting three years ago, most of them would surely have predicted that by now we’d be talking about how the great slump ended, not why it still continues.
So what went wrong? The answer, mainly, is the triumph of bad ideas.
It’s tempting to argue that the economic failures of recent years prove that economists don’t have the answers. But the truth is ... standard economics offered good answers, but political leaders — and all too many economists — chose to forget or ignore what they should have known. ...
A smaller financial shock, like the dot-com bust at the end of the 1990s, can be met by cutting interest rates. But the crisis of 2008 was far bigger, and even cutting rates all the way to zero wasn’t nearly enough.
At that point governments needed to step in, spending to support their economies while the private sector regained its balance. And to some extent that did happen... Budget deficits rose, but this was actually a good thing, probably the most important reason we didn’t have a full replay of the Great Depression.
But it all went wrong in 2010. The crisis in Greece was taken, wrongly, as a sign that all governments had better slash spending and deficits right away. Austerity became the order of the day...
Of the papers presented at this meeting, probably the biggest flash came from one by Olivier Blanchard and Daniel Leigh of the International Monetary Fund. ... For what the paper concludes is not just that austerity has a depressing effect on weak economies, but that the adverse effect is much stronger than previously believed. The premature turn to austerity, it turns out, was a terrible mistake. ...
The really bad news is ... European leaders ... still insist that the answer is even more pain. ... And here in America, Republicans insist that they’ll use a confrontation over the debt ceiling ... to demand spending cuts that would drive us back into recession.
The truth is that we’ve just experienced a colossal failure of economic policy — and far too many of those responsible for that failure both retain power and refuse to learn from experience.

Sunday, January 06, 2013

Is Economics Divided into Warring Ideological Camps?

I spent quite a bit of time with Noah Smith at the ASSA meetings. At one point, we were at the St. Louis Fed reception and -- since he has no fear -- I suggested that he tell Randall Wright how well New Keynesian models work, which he did. I assumed he'd get a strong taste of the divide in macroeconomics:

Is economics divided into warring ideological camps?, by Noah Smith: This week I went to the American Economic Association's annual meeting, which was held in sunny San Diego, CA. I went to quite a number of interesting sessions, mostly on behavioral economics and finance. What an exciting field!
But anyway, I also went to an interesting session called "What do economists think about major public policy issues?" There were two papers presented, both of which were extremely relevant for much of the debate going on in the econ blogosphere.
The first paper, by Roger Gordon and Gordon Dahl of UC San Diego (aside: now I want to co-author with a guy whose last name is "Noah"!), was called "Views among Economists: Professional Consensus or Point-Counterpoint?" Gordon & Dahl surveyed 41 top economists about their views on 81 policy issues, and tried to determine A) how much disagreement there was, and B) how much disagreement was due to political ideology.
They found that top economists agree about a lot of things. ... On some other issues, opinion was all over the place. Gordon and Dahl also found that the differences that did exist couldn't easily be tied to individual characteristics like gender, experience working in Washington, etc. A panel discussant, Monika Piazzesi, did some further statistical analysis to show that the surveyed economists didn't clump up into "liberal" and "conservative" clusters. 
Conclusion: Economics, at least at the elite level, isn't divided into two warring ideological camps.
That doesn't mean there is no politicization. Justin Wolfers ... ranked the 41 top economists on a liberal/conservative scale according to his own intuition, and found that the economists he intuitively felt were liberal were more likely to support fiscal stimulus, and the conservatives less. He found a few other seemingly partisan differences this way, though not many. (Of course, one has to be careful with this type of analysis; if your ideas of who's "liberal" and who's "conservative" are formed by who supports stimulus and who opposes it, then of course you're going to see this type of effect!)
And of course, it's worth noting that the survey had a small sample, and included only "top" economists at major U.S. universities. There might be "long tails" of ideological bias lower down the prestige scale.
Paul Krugman, who was on the panel, suggested that politicization is mostly confined to the macro field. But even on the question of stimulus, most of the surveyed economists (80%) agreed that Obama's 2009 stimulus boosted output and employment (though fewer agreed that this boost was worth the long-term costs). So it seems that the few top economists who a few years ago were loudly saying that stimulus couldn't possibly work - Bob Lucas, Robert Barro, Gene Fama, etc. - were just a very vocal small minority.
These results surprised me. I'm so used to seeing top macroeconomists tangling with each other... And I had often heard that the appeal of certain classes of macro models - for example, RBC - came from their conservative policy implications. 
So maybe I've been wrong all this time! Or maybe there was more politicization of macro back in the 70s and 80s? 
Or maybe there is still politicization, but the economics profession has just shifted decisively to the center-left? After all, as of 2012, the consensus favorite modeling approach among pure macro people seems to be New Keynesian models of the type preferred by Krugman, not RBC-type models of the type supported by Bob Lucas, Robert Barro, and other "new classical" economists back in the 1980s. It could be that nowadays most economists are - as one person on the panel put it - "market-hugging Democrats". (Or it could be that New Keynesian models simply won the war of ideas. Or both.)
I'm not sure, but Gordon & Dahl's paper is definitely making me question my beliefs...

Saturday, December 29, 2012

'Is Academic Macroeconomics Flourishing?'

Simon Wren-Lewis continues the conversation on the state of academic macroeconomics:

Is academic macroeconomics flourishing?, by Simon Wren-Lewis: How do you judge the health of an academic discipline? Is macroeconomics rotten or flourishing? ...[A]cademic macroeconomics appears all over the place, with strong disputes between alternative schools.
Is this because the evidence in macroeconomics is so unclear that it becomes very difficult to judge different theories? I think the inexact nature of economics is a necessary condition for the lack of an academic consensus in macro, but it is not sufficient. (Mark Thoma has a recent post on this.) Consider monetary policy. I would argue that we have made great progress in both the analysis and practice of monetary policy over the last forty years. One important reason for that progress is the existence of a group that is often neglected - macroeconomists working in central banks.
Unlike their academic counterparts, the primary goal of these economists is not to innovate, but to examine the evidence and see what ideas work. The framework that most of these economists find most helpful is the New NeoClassical Synthesis, or equivalently New Keynesian theory. As a result, it has become the dominant paradigm in analyzing monetary policy.
That does not mean that every macroeconomist looking at monetary policy has to be a New Keynesian, or that central banks ignore other approaches. It is important that this policy consensus should be continually questioned, and part of a healthy academic discipline is that the received wisdom is challenged. However, it has to be acknowledged that policymakers who look at the evidence day in and day out believe that New Keynesian theory is the most useful framework currently around. I have no problem with academics saying ‘I know this is the consensus, but I think it is wrong’. However to say ‘the jury is still out' on whether prices are sticky is wrong. The relevant jury came to a verdict long ago.
It is obvious that when it comes to using fiscal policy in short term macroeconomic stabilization there can be no equivalent claim to progress or consensus. The policy debates we have today do not seem to have advanced much since when Keynes was alive. From one perspective this contrast is deeply puzzling. The science of fiscal policy is not inherently more complicated. ...
What has been missing with fiscal policy has been the equivalent of central bank economists whose job depends on taking an objective view of the evidence and doing the best they can with the ideas that academic macroeconomics provides. This group does not exist because the need to use fiscal policy for short term macroeconomic stabilization is occasional either in terms of time (when the Zero Lower Bound applies) or space (countries within the Eurozone). As a result, when fiscal policy was required to perform a stabilization role, policymakers had to rely on the academic community for advice, and here macroeconomics clearly failed. Pretty well any outside observer would describe its performance as rotten.
The contrast between monetary and fiscal policy tells us that this failure is not an inevitable result of the paucity of evidence in macroeconomics. I think it has a lot more to do with the influence of ideology, and the importance of what I have called the anti-Keynesian school that is a legacy of the New Classical revolution. The reasons why these influences are particularly strong when it comes to fiscal policy are fairly straightforward.
Two issues remain unclear for me. The first is how extensive this ideological bias is. Is the over-dominance of the microfoundations approach related to the fact that different takes on the evidence have an unfortunate Keynesian bias? Second, is the degree of ideological bias in macro generic, or is it in part contingent on the particular historical circumstances of the New Classical revolution? These questions are important in thinking about how this bias can be overcome.

When people ask if evidence matters in economics, I often point to the debate over the New Classical model's prediction that only unexpected changes in monetary policy matter for economic activity. These models, with their prediction that expected changes in monetary policy are neutral, cleverly allowed New Classical economists to explain the correlations between money, output, and prices in the data without admitting that systematic policy mattered. Thus, these models supported the ideological convictions of many on the right -- government intervention can make things worse, but not better. (Unexpected policy shocks push the economy away from the optimal outcome, so the key was to minimize unexpected policy shocks. This led to things like the push for transparency so that people would anticipate, as much as possible, actual policy moves.)
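Stripped to its essentials, the policy-ineffectiveness argument runs as follows (a sketch in standard notation, not any particular paper's exact model). Combine a surprise-only supply curve with a monetary policy rule, where m_t is the (log) money stock, \Omega_{t-1} the information available when policy is set, and \varepsilon_t the policy surprise:

y_t = \bar{y} + \alpha (m_t - E_{t-1} m_t) + u_t, \qquad m_t = g(\Omega_{t-1}) + \varepsilon_t

Under rational expectations E_{t-1} m_t = g(\Omega_{t-1}), so y_t = \bar{y} + \alpha \varepsilon_t + u_t: the systematic component g drops out entirely, and only the unpredictable shock \varepsilon_t affects output.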

At first, the evidence seemed to support these models (e.g. Barro's empirical work), but as the evidence accumulated it eventually became clear that this prediction was wrong. Mishkin provided key evidence against these models through his academic work (see, for example, his book A Rational Expectations Approach to Macroeconometrics: Testing Policy Ineffectiveness and Efficient-Markets Models), so I am not as convinced as Simon Wren-Lewis that the difference between monetary and fiscal policy is due solely to the existence of technocratic, mostly non-ideological central bank economists letting the evidence take them where it may. That certainly mattered, but it seems there was more to it than this.

The evidence that Mishkin and others provided was a key reason these models were rejected (it was also difficult to simultaneously explain the magnitude and duration of business cycles with unexpected monetary shocks as the sole driving force), but when it comes to fiscal policy, as noted above, evidence has not trumped ideology to the same degree. One of the reasons for this, I think, is that it's difficult to find clear fiscal policy experiments in the data to evaluate. And when we do (e.g. wars), it's difficult to know if the results will hold at other times. But I can't really disagree with the hypothesis that if an institution like the Fed existed for fiscal policy, there would be a much bigger demand for this information, and that demand would have produced a much larger supply of evidence.

But I am not so sure the difference is "central bank economists whose job depends on taking an objective view of the evidence" so much as it is that these institutions produce a demand for this type of research, and academics respond by supplying the information that central banks need. So the question for me is whether it's the lack of ideology of central bank economists (many of whom are academics), or the fact that their existence creates a large demand for this type of information. Maybe it's both.

Friday, December 28, 2012

Taylor Rules and NGDP Targets: Skeptical Davids

One of the big, current, passionate debates within monetary policy is the relative effectiveness of Taylor Rules versus nominal GDP targeting (e.g. see here). Which of the two does a better job of stabilizing the economy?

If you want to argue against nominal GDP targeting, David Altig of the Atlanta Fed has some ammunition for you. Here's his conclusion:

Nominal GDP Targeting: Still a Skeptic, macroblog: ... To summarize my concerns, the Achilles' heel of nominal GDP targeting is that it provides a poor nominal anchor in an environment in which there is great uncertainty about the path of potential real GDP. As I noted in my earlier post, there is historical justification for that concern.
Basically, anyone puzzling through how demographics are affecting labor force participation rates, how technology is changing the dynamics of job creation, or how policy might be altering labor supply should feel some humility about where potential GDP is headed. For me, a lack of confidence in the path of real GDP takes a lot of luster out of the idea of a nominal GDP target.
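A stylized calculation (my numbers, purely illustrative) shows why uncertainty about potential output weakens the anchor. A nominal GDP growth target splits mechanically into real growth plus inflation:

\Delta \ln \text{NGDP} = \Delta \ln Y + \pi

With a 4.5 percent nominal target, inflation comes out at 2.5 percent if potential real growth is 2 percent, but 3.5 percent if potential growth is actually 1 percent, so every percentage point of error about potential translates one-for-one into the inflation rate the target ends up delivering.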

Taylor rule skeptics can turn to David Andolfatto of the St. Louis Fed:

On the perils of Taylor rules, macromania: In the Seven Faces of "The Peril" (2010), St. Louis Fed president Jim Bullard speculated on the prospect of the U.S. falling into a Japanese-style deflationary outcome. His analysis was built on an insight of Benhabib, Schmitt-Grohe, and Uribe (2001) in The Perils of Taylor Rules.

These authors (BSU) showed that if monetary policy is conducted according to a Taylor rule, and if there is a zero lower bound (ZLB) on the nominal interest rate, then there are generally two steady-state equilibria. In one equilibrium--the "intended" outcome--the nominal interest rate and inflation rate are on target. In the other equilibrium--the "unintended" outcome--the nominal interest rate and inflation rate are below target--the economy is in a "liquidity trap."

As BSU stress, the multiplicity of outcomes occurs even in economies where prices are perfectly flexible. All that is required are three (non-controversial) ingredients: [1] a Fisher equation; [2] a Taylor rule; and [3] a ZLB.

Back in 2010, I didn't take this argument very seriously. In part it was because the so-called "unintended" outcome was more efficient than the "intended" outcome (at least, in the version of the model with flexible prices). To put things another way, the Friedman rule turns out to be good policy in a wide class of models. I figured that other factors were probably more important for explaining the events unfolding at that time.

Well, maybe I was a bit too hasty. Let me share with you my tinkering with a simple OLG model... Unfortunately, what follows is a bit on the wonkish side...

[My comments on this topic are highlighted in the first link, i.e. the one to David Altig's post at macroblog.]
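For those who want the mechanics without the wonkish detail, a minimal sketch of the BSU multiplicity (standard notation, not Andolfatto's specific OLG setup): combine a Fisher equation with a Taylor rule truncated at zero, where r is the steady-state real rate, \pi^* the inflation target, and \phi > 1,

i = r + \pi, \qquad i = \max\{0,\ r + \pi^* + \phi(\pi - \pi^*)\}

These two relations intersect twice: at the intended steady state, \pi = \pi^* and i = r + \pi^*, and at an unintended steady state where the zero bound binds, i = 0 and \pi = -r, i.e. mild deflation. Nothing in the flexible-price version of the model rules the second steady state out.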

Thursday, December 27, 2012

Will Macroeconomists Ever Agree?

Kevin Drum wonders if macroeconomists will ever be able to agree:

The part I can't figure out is why there's so much contention even within the field. In physics and climate science, the cranks are almost all nonspecialists with an axe to grind. Actual practitioners agree pretty broadly on at least the basics. But in macroeconomics you don't have that. There are still polar disagreements among top names on some of the most basic questions. Even given the complexity of the field, that's a bit of a mystery. It's understandable that economics is a more politicized field than physics, but in practice it seems to be almost 100 percent politicized, with the battles fought out by streams of Greek letters demonstrating, as Matt says, just about anything. I wonder if this is ever likely to change? Or will changes in the real world always outpace our ability to build consensus on how the economy actually works?

I took a shot at answering this in April 2011:

... Why can’t economists tell us what happens when government spending goes up or down, taxes change, or the Fed changes monetary policy? The stumbling block is that economics is fundamentally a non-experimental science, particularly in the realm of macroeconomics. Unlike disciplines such as physics, we can't go into the laboratory and rerun the economy again and again under different conditions to measure, say, the average effect of monetary and fiscal policy. We only have one realization of the macroeconomy to use to answer important policy questions, and that limits the precision of the answers we can give. In addition, because the data are historical rather than experimental, we cannot look at the relationships among a set of variables in isolation while holding all the other variables constant, as you might do in a lab, and this also reduces the precision of our estimates.
Because we only have a single realization of history rather than laboratory data to investigate economic issues, macroeconomic theorists have full knowledge of past data as they build their models. It would be a waste of time to build a model that doesn't fit this one realization of the macroeconomy, and fit it well, and that is precisely what has been done. Unfortunately, there are two models that fit the data, and the two models have vastly different implications for monetary and fiscal policy. ... [This leads to passionate debates about which model is best.]
But even if we had perfect models and perfect data, there would still be uncertainties and disagreements over the proper course of policy. Economists are hindered by the fact that people and institutions change over time in a way that the laws of physics do not. Thus, even if we had the ability to do controlled and careful experiments, there is no guarantee that what we learn would remain valid in the future.
Suppose that we somehow overcome every one of these problems. Even then, disagreements about economic policy would persist in the political arena. Even with full knowledge about how, say, a change in government spending financed by a tax increase will affect the economy now and in the future, ideological differences across individuals will lead to different views on the net social value of these policies. Those on the left tend to value the benefits more highly, and place less weight on the costs, than those on the right, and this leads to fundamental, insoluble differences over the course of economic policy. ...
Progress in economics may someday narrow the partisan divide over economic policy, but even perfect knowledge about the economy won’t eliminate the ideological differences that are the source of so much passion in our political discourse.

A follow-up post in February emphasizes the point that it is not at all clear that the strong divides in economics can be settled with data, but it's not completely hopeless:

...the ability to choose one model over the other is not quite as hopeless as I’ve implied. New data and recent events like the Great Recession push these models into uncharted territory and provide a way to assess which model provides better predictions. However, because of our reliance on historical data this is a slow process – we have to wait for data to accumulate – and there’s no guarantee that once we are finally able to pit one model against the other we will be able to crown a winner. Both models could fail...

I think the Great Recession has, for example, provided evidence that the NK model offers a better explanation of events than its competitors, but it is far from a satisfactory construction, and it would be hard to call its forecasting and explanatory abilities a success.

Here's another post from the past (Sept. 2009) on this topic:

... There is no grand, unifying theoretical structure in economics. We do not have one model that rules them all. Instead, what we have are models that are good at answering some questions - the ones they were built to answer - and not so good at answering others.
If I want to think about inflation in the very long run, the classical model and the quantity theory is a very good guide. But the model is not very good at looking at the short-run. For questions about how output and other variables move over the business cycle and for advice on what to do about it, I find the Keynesian model in its modern form (i.e. the New Keynesian model) to be much more informative than other models that are presently available (as to how far this kind of "eclecticism" will get you in academia, I'll just note that this is exactly the advice Mishkin gives in his textbook on monetary theory and policy).
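The long-run logic appealed to here is just the quantity equation (a textbook statement, not anything specific to this post), with \mu money growth and g_Y real output growth:

M V = P Y \quad\Rightarrow\quad \pi \approx \mu - g_Y \ \text{(with roughly stable velocity)}

Over long horizons, then, inflation tracks money growth in excess of real output growth; at business-cycle frequencies the relationship is much looser.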
But the New Keynesian model has its limits. It was built to capture "ordinary" business cycles driven by price rigidities of the sort that can be captured by the Calvo model of price rigidity. The standard versions of this model do not explain how financial collapses of the type we just witnessed come about, hence they have little to say about what to do about them (which makes me suspicious of the results touted by people using multipliers derived from DSGE models based upon ordinary price rigidities). For these types of disturbances, we need some other type of model, but it is not clear what model is needed. There is no generally accepted model of financial catastrophe that captures the variety of financial market failures we have seen in the past.
But what model do we use? Do we go back to old Keynes, or to the 1978 model that Robert Gordon likes? Do we take some of the variations of the New Keynesian model that include effects such as financial accelerators and try to enhance those -- is that the right direction to proceed? Are the Austrians right? Do we focus on Minsky? Or do we need a model that we haven't discovered yet?
We don't know, and until we do, I will continue to use the model I think gives the best answer to the question being asked. The reason that many of us looked backward for a model to help us understand the present crisis is that none of the current models were capable of explaining what we were going through. The models were largely constructed to analyze policy in the context of a Great Moderation, i.e. within a fairly stable environment. They had little to say about financial meltdown. My first reaction was to ask if the New Keynesian model had any derivative forms that would allow us to gain insight into the crisis and what to do about it and, while there were some attempts in that direction, the work was somewhat isolated and had not gone through the thorough analysis that is needed to develop robust policy prescriptions. There was something to learn from these models, but they really weren't up to the task of delivering specific answers. That may come, but we aren't there yet.
So, if nothing in the present is adequate, you begin to look to the past. The Keynesian model was constructed to look at exactly the kinds of questions we needed to answer, and as long as you are aware of the limitations of this framework - the ones that modern theory has discovered - it does provide you with a means of thinking about how economies operate when they are running at less than full employment. This model had already worried about fiscal policy at the zero interest rate bound, it had already thought about Say's law, the paradox of thrift, monetary versus fiscal policy, changing interest and investment elasticities in a crisis, etc., etc., etc. We were in the middle of a crisis and didn't have time to wait for new theory to be developed; we needed answers, answers that the elegant models that had been constructed over the last few decades simply could not provide. The Keynesian model did provide answers. We knew the answers had limitations - we were aware of the theoretical developments in modern macro and what they implied about the old Keynesian model - but it also provided guidance at a time when guidance was needed, and it did so within a theoretical structure that was built to be useful at times like the one we were facing. I wish we had better answers, but we didn't, so we did the best we could, and the best we could involved at least asking what the Keynesian model would tell us, and then asking if that advice has any relevance today. Sometimes it didn't, but that was no reason to ignore the answers when it did.

Part of the disagreement is over the ability of this approach -- using an older model guided by newer insights (e.g. that expectations of future output matter for the "IS curve") -- to deliver reliable answers and policy prescriptions.

More on this from another past post (March 2009):

Models are built to answer questions, and the models economists have been using do, in fact, help us find answers to some important questions. But the models were not very good (at all) at answering the questions that are important right now. They have been largely stripped of their usefulness for actual policy in a world where markets simply break down.
The reason is that in order to get to mathematical forms that can be solved, the models had to be simplified. And when they are simplified, something must be sacrificed. So what do you sacrifice? Hopefully, it is the ability to answer questions that are the least important, so the modeling choices that are made reveal what the modelers thought was most and least important.
The models we built were very useful for asking whether the federal funds rate should go up or down a quarter point when the economy was hovering in the neighborhood of full employment, or when we found ourselves in mild, "normal" recessions. The models could tell us what type of monetary policy rule is best for stabilizing the economy. But the models had almost nothing to say about a world where markets melt down, where prices depart from fundamentals, or when markets are incomplete. When this crisis hit, I looked into our tool bag of models and policy recommendations and came up empty for the most part. It was disappointing. There was really no choice but to go back to older Keynesian-style models for insight.
The reason the Keynesian model is finding new life is that it was specifically built to answer the questions that are important at the moment. The theorists who built modern macro models, those largely in control of where the profession has spent its effort in recent decades, did not even envision that this could happen, let alone build it into their models. Markets work, they don't break down, so why waste time thinking about those possibilities?
So it's not the math: the modeling choices that were made, and the inevitable sacrifices of realism they entailed, reflected the importance those making the choices gave to various questions. We weren't forced to this end by the mathematics; we asked the wrong questions and built the wrong models.
New Keynesians have been trying to answer: Can we, using equilibrium models with rational agents and complete markets, add frictions to the model - e.g. sluggish wage and price adjustment (you'll see this called "Calvo pricing") - in a way that allows us to approximate the actual movements in key macroeconomic variables of the last 40 or 50 years?
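The standard output of that research program, for reference (the textbook New Keynesian Phillips curve, not anything specific to this post): with Calvo pricing, where each firm can reset its price in a given period only with probability 1 - \theta, inflation becomes forward looking,

\pi_t = \beta E_t \pi_{t+1} + \kappa\, \widehat{mc}_t, \qquad \kappa = \frac{(1-\theta)(1-\beta\theta)}{\theta}

so the stickiness parameter \theta governs how strongly current real marginal cost (or the output gap) feeds into inflation.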
Real Business Cycle theorists also use equilibrium models with rational agents and complete markets, and they look at whether supply-side shocks such as shocks to productivity or labor supply can, by themselves, explain movements in the economy. They largely reject demand-side explanations for movements in macro variables.
The fight - and main question in academics - has been about what drives macroeconomic variables in normal times, demand-side shocks (monetary policy, fiscal policy, investment, net exports) or supply-side shocks (productivity, labor supply). And it's been a fairly brutal fight at times - you've seen some of that come out during the current policy debate. That debate within the profession has dictated the research agenda.
What happens in non-normal times, i.e. when markets break down, or when markets are not complete, agents are not rational, etc., was far down the agenda of important questions, partly because those in control of the journals, those who largely dictated the direction of research, did not think those questions were very important (some don't even believe that policy can help the economy, so why put effort into studying it?).
I think that the current crisis has dealt a bigger blow to macroeconomic theory and modeling than many of us realize.

Here's yet another past post (August 2009) on the general topic of the usefulness of macroeconomic models, though I'm not quite as bullish on the ability of existing models to provide guidance as I was when I wrote this. The point is that although many people use forecasting ability as a metric to measure the usefulness of models (because where the economy is headed is the most important question to them), that's not the only use of these models:

Are Macroeconomic Models Useful?: There has been no shortage of effort devoted to predicting earthquakes, yet we still can't see them coming far enough in advance to move people to safety. When a big earthquake hits, it is a surprise. We may be able to look at the data after the fact and see that certain stresses were building, so it looks like we should have known an earthquake was going to occur at any moment, but these sorts of retrospective analyses have not allowed us to predict the next one. The exact timing and location is always a surprise.
Does that mean that science has failed? Should we criticize the models as useless?
No. There are two uses of models. One is to understand how the world works, another is to make predictions about the future. We may never be able to predict earthquakes far enough in advance and with enough specificity to allow us time to move to safety before they occur, but that doesn't prevent us from understanding the science underlying earthquakes. Perhaps as our understanding increases prediction will be possible, and for that reason scientists shouldn't give up trying to improve their models, but for now we simply cannot predict the arrival of earthquakes.
However, even though earthquakes cannot be predicted, at least not yet, it would be wrong to conclude that science has nothing to offer. First, understanding how earthquakes occur can help us design buildings and make other changes to limit the damage even if we don't know exactly when an earthquake will occur. Second, if an earthquake happens and, despite our best efforts to insulate against it, there are still substantial consequences, science can help us to offset and limit the damage. To name just one example, the science surrounding disease transmission helps us avoid contaminated water supplies after a disaster, something that often compounds tragedy when this science is not available. But there are lots of other things we can do as well, including using the models to determine where help is most needed.
So even if we cannot predict earthquakes, and we can't, the models are still useful for understanding how earthquakes happen. This understanding is valuable because it helps us to prepare for disasters in advance, and to determine policies that will minimize their impact after they happen.
All of this can be applied to macroeconomics. Whether or not we should have predicted the financial earthquake is a question that has been debated extensively, so I am going to set that aside. One side says financial market price changes, like earthquakes, are inherently unpredictable -- we will never predict them no matter how good our models get (the efficient markets types). The other side says the stresses that were building were obvious. Like the stresses that build when tectonic plates moving in opposite directions rub against each other, it was only a question of when, not if. (But even when increasing stress between two plates is observable, scientists cannot tell you for sure if a series of small earthquakes will relieve the stress and do little harm, or if there will be one big adjustment that relieves the stress all at once. With respect to the financial crisis, economists expected lots of little adjustments, each causing small harm; instead we got the "big one," and the "buildings and other structures" we thought could withstand the shock all came crumbling down.) ...
Whether the financial crisis should have been predicted or not, the fact that it wasn't predicted does not mean that macroeconomic models are useless any more than the failure to predict earthquakes implies that earthquake science is useless. As with earthquakes, even when prediction is not possible (or missed), the models can still help us to understand how these shocks occur. That understanding is useful for getting ready for the next shock, or even preventing it, and for minimizing the consequences of shocks that do occur. 
But we have done much better at dealing with the consequences of unexpected shocks ex-post than we have at getting ready for these a priori. Our equivalent of getting buildings ready for an earthquake before it happens is to use changes in institutions and regulations to insulate the financial sector and the larger economy from the negative consequences of financial and other shocks. Here I think economists made mistakes - our "buildings" were not strong enough to withstand the earthquake that hit. We could argue that the shock was so big that no amount of reasonable advance preparation would have stopped the "building" from collapsing, but I think it's more the case that enough time has passed since the last big financial earthquake that we forgot what we needed to do. We allowed new buildings to be constructed without the proper safeguards.
However, that doesn't mean the models themselves were useless. The models were there and could have provided guidance, but the implied "building codes" were ignored. Greenspan and others assumed no private builder would ever construct a building that couldn't withstand an earthquake; the market, they thought, would force builders to take this into consideration. But they were wrong about that, and even Greenspan now admits that government building codes are necessary. It wasn't the models, it was how they were used (or rather not used) that prevented us from putting safeguards into place. ...
I'd argue that our most successful use of models has been in cleaning up after shocks rather than in predicting, preventing, or insulating against them through pre-crisis preparation. When, despite our best efforts to prevent a recession or to minimize its impact in advance, we get one anyway, we can use our models as a guide to monetary, fiscal, and other policies that help to reduce the consequences of the shock (this is the equivalent of, after a disaster hits, making sure that the water is safe to drink, people have food to eat, there is a plan for rebuilding quickly and efficiently, etc.). As noted above, we haven't done a very good job at predicting big crises, and we could have done a much better job at implementing regulatory and institutional changes that prevent or limit the impact of shocks. But we do a pretty good job of stepping in with policy actions that minimize the impact of shocks after they occur. This recession was bad, but it wasn't another Great Depression like it might have been without policy intervention.
Whether or not we will ever be able to predict recessions reliably, it's important to recognize that our models still provide considerable guidance for actions we can take before and after large shocks that minimize their impact and maybe even prevent them altogether (though we will have to do a better job of listening to what the models have to say). Prediction is important, but it's not the only use of models.

Thursday, December 20, 2012

'Missing the Point in the Economists' Debate'

More on the macro wars. This is from Beyond Mechanical Markets: Asset Price Swings, Risk, and the Role of the State, by Roman Frydman & Michael D. Goldberg:
... To be sure, the upswing in house prices in many markets around the country in the 2000s did reach levels that history and the subsequent long downswings tell us were excessive. But, as we show in Part II, such excessive fluctuations should not be interpreted to mean that asset-price swings are unrelated to fundamental factors. In fact, even if an individual is interested only in short-term returns—a feature of much trading in many markets—the use of data on fundamental factors to forecast these returns is extremely valuable. And the evidence that news concerning a wide array of fundamentals plays a key role in driving asset-price swings is overwhelming.[16]
Missing the Point in the Economists’ Debate
Economists concluded that fundamentals do not matter for asset-price movements because they could not find one overarching relationship that could account for long swings in asset prices. The constraint that economists should consider only fully predetermined accounts of outcomes has led many to presume that some or all participants are irrational, in the sense that they ignore fundamentals altogether. Their decisions are thought to be driven purely by psychological considerations.
The belief in the scientific stature of fully predetermined models, and in the adequacy of the Rational Expectations Hypothesis to portray how rational individuals think about the future, extends well beyond asset markets. Some economists go as far as to argue that the logical consistency that obtains when this hypothesis is imposed in fully predetermined models is a precondition of the ability of economic analysis to portray rationality and truth.
For example, in a well-known article published in The New York Times Magazine in September 2009, Paul Krugman (2009, p. 36) argued that Chicago-school free-market theorists “mistook beauty . . . for truth.” One of the leading Chicago economists, John Cochrane (2009, p. 4), responded that “logical consistency and plausible foundations are indeed ‘beautiful’ but to me they are also basic preconditions for ‘truth.’” Of course, what Cochrane meant by plausible foundations were fully predetermined Rational Expectations models. But, given the fundamental flaws of fully predetermined models, focusing on their logical consistency or inconsistency, let alone that of the Rational Expectations Hypothesis itself, can hardly be considered relevant to a discussion of the basic preconditions for truth in economic analysis, whatever “truth” might mean.
There is an irony in the debate between Krugman and Cochrane. Although the New Keynesian and behavioral models, which Krugman favors,[17] differ in terms of their specific assumptions, they are every bit as mechanical as those of the Chicago orthodoxy. Moreover, these approaches presume that the Rational Expectations Hypothesis provides the standard by which to define rationality and irrationality.[18]
Behavioral economics provides a case in point. After uncovering massive evidence that the contemporary economics’ standard of rationality fails to capture adequately how individuals actually make decisions, the only sensible conclusion to draw was that this standard was utterly wrong. Instead, behavioral economists, applying a variant of Brecht’s dictum, concluded that individuals are irrational.[19]
To justify that conclusion, behavioral economists and nonacademic commentators argued that the standard of rationality based on the Rational Expectations Hypothesis works—but only for truly intelligent investors. Most individuals lack the abilities needed to understand the future and correctly compute the consequences of their decisions.[20]
In fact, the Rational Expectations Hypothesis requires no assumptions about the intelligence of market participants whatsoever (for further discussion, see Chapters 3 and 4). Rather than imputing superhuman cognitive and computational abilities to individuals, the hypothesis presumes just the opposite: market participants forgo using whatever cognitive abilities they do have. The Rational Expectations Hypothesis supposes that individuals do not engage actively and creatively in revising the way they think about the future. Instead, they are presumed to adhere steadfastly to a single mechanical forecasting strategy at all times and in all circumstances. Thus, contrary to widespread belief, the Rational Expectations Hypothesis has no connection to how even minimally reasonable profit-seeking individuals forecast the future in real-world markets. When new relationships begin driving asset prices, they supposedly look the other way, and thus either abjure profit-seeking behavior altogether or forgo profit opportunities that are in plain sight.
The Distorted Language of Economic Discourse
It is often remarked that the problem with economics is its reliance on mathematical apparatus. But our criticism is not focused on economists’ use of mathematics. Instead, we criticize the contemporary portrayal of the market economy as a mechanical system. Its scientific pretense and the claim that its conclusions follow as a matter of straightforward logic have made informed public discussion of various policy options almost impossible.
Doubters have often been made to seem as unreasonable as those who deny the theory of evolution or that the earth is round. Indeed, public debate is further distorted by the fact that economists formalize notions like “rationality” or “rational markets” in ways that have little or no connection to how non-economists understand these terms. When economists invoke rationality to present or legitimize their public-policy recommendations, non-economists interpret such statements as implying reasonable behavior by real people. In fact, as we discuss extensively in this book, economists’ formalization of rationality portrays obviously irrational behavior in the context of real-world markets.
Such inversions of meaning have had a profound impact on the development of economics itself. For example, having embraced the fully predetermined notion of rationality, behavioral economists proceeded to search for reasons, mostly in psychological research and brain studies, to explain why individual behavior is so grossly inconsistent with that notion—a notion that had no connection with reasonable real-world behavior in the first place.
Moreover, as we shall see, the idea that economists can provide an overarching account of markets, which has given rise to fully predetermined rationality, misses what markets really do. ...
Footnotes
16 See Chapters 7-9 for an extensive discussion of the role of fundamentals in driving price swings in asset markets and their interactions with psychological factors.
17 For example, in discussing the importance of the connection between the financial system and the wider economy for understanding the crisis and thinking about reform, Krugman endorses the approach taken by Bernanke and Gertler. (For an overview of these models, see Bernanke et al., 1999.) However, as pioneering as these models are in incorporating the financial sector into macroeconomics, they are fully predetermined and based on the Rational Expectations Hypothesis. As such, they suffer from the same fundamental flaws that plague other contemporary models. When used to analyze policy options, these models presume not only that the effects of contemplated policies can be fully pre-specified by a policymaker, but also that nothing else genuinely new will ever happen. Supposedly, market participants respond to policy changes according to the REH-based forecasting rules. See footnote 3 in the Introduction and Chapter 2 for further discussion.
18 The convergence in contemporary macroeconomics has become so striking that by now the leading advocates of both the “freshwater” New Classical approach and the “saltwater” New Keynesian approach, regardless of their other differences, extol the virtues of using the Rational Expectations Hypothesis in constructing contemporary models. See Prescott (2006) and Blanchard (2009). It is also widely believed that reliance on the Rational Expectations Hypothesis makes New Keynesian models particularly useful for policy analysis by central banks. See footnote 7 in this chapter and Sims (2010). For further discussion, see Frydman and Goldberg (2008).
19 Following the East German government’s brutal repression of a worker uprising in 1953, Bertolt Brecht famously remarked, “Wouldn’t it be easier to dissolve the people and elect another in their place?”
20 Even Simon (1971), a forceful early critic of economists’ notion of rationality, regarded it as an appropriate standard of decision-making, though he believed that it was unattainable for most people for various cognitive and other reasons. To underscore this view, he coined the term “bounded rationality” to refer to departures from the supposedly normative benchmark.

The introduction to this book might also be of interest:

Rethinking Expectations: The Way Forward for Macroeconomics, Edited by Roman Frydman & Edmund S. Phelps [with entries by Philippe Aghion, Sheila Dow, George W. Evans, Roger E. A. Farmer, Roman Frydman, Michael D. Goldberg, Roger Guesnerie, Seppo Honkapohja, Katarina Juselius, Enisse Kharroubi, Blake LeBaron, Edmund S. Phelps, John B. Taylor, Michael Woodford, and Gylfi Zoega ].

The introduction is here: Which Way Forward for Macroeconomics and Policy Analysis?.

Tuesday, December 18, 2012

Is Macro Rotten?

Paul Krugman, quoted below, started this off (or perhaps better, continued an older discussion) by claiming the state of macro is rotten. Steve Williamson, also quoted below, replied, and this is Simon Wren-Lewis' reply to Williamson (remember that, as Simon Wren-Lewis notes below, he has defended the modern approach to macro).

This pretty well covers my views, and I think this part of the Wren-Lewis rebuttal gets at the heart of the issue: "You would not think of suggesting that Paul Krugman is out of touch unless you are in effect dismissing or marginalizing this whole line of research." I am also very much in agreement with the "two unhelpful biases" he notes in the last paragraph, and have been thinking of writing more about the first, "too much of an obsession with microfoundation purity, and too little interest in evidence," particularly the lack of interest in using empirical evidence to test and reject models. (It may be that such tests have fallen out of favor in macro because we only have historical data to work with, and it's folly to build a model with knowledge of the data and then test whether the model fits; of course it will fit, or at least it should, though there are ways to get around this problem. That would explain why there appears to be a greater reliance upon logic, intuition, and consistency with microfoundations than in the past. It seems like today models are more likely to be rejected for lack of internal theoretical consistency than for lack of consistency with the empirical evidence):

The New Classical Revolution: Technical or Ideological?, by Simon Wren-Lewis:
Paul Krugman: The state of macro is, in fact, rotten, and will remain so until the cult that has taken over half the field is somehow dislodged
The cult here is freshwater macro, which descends from the New Classical revolution. In response
Steve Williamson: “At the time, this revolution was widely-misperceived as a fundamentally conservative movement. It was actually a nerd revolution.” “What these people had on their side were mathematics, econometrics, and most of all the power of economic theory. There was nothing weird about what these nerds were doing - they were simply applying received theory to problems in macroeconomics. Why could that be thought of as offensive?”
The New Classical revolution was clearly anti-Keynesian..., but was that simply because Keynesian theory was the dominant paradigm? ...
I certainly think that New Classical economists revolutionized macroeconomic theory, and that the theory is much better for it. Paul Krugman (PK) and I have disagreed on this point before. ...

But this is not where the real disagreement between PK and SW lies. The New Classical revolution became the New Neoclassical Synthesis, with New Keynesian theory essentially taking the ideas of the revolutionaries and adapting Keynesian theory to incorporate them. Once again, I believe this was a progressive change. While there is plenty wrong with New Keynesian theory, and the microfoundations project on which it is based, I would much rather start from there than with the theory I was taught in the 1970s. As SW says “Most of us now speak the same language, and communication is good.” ...
I think the difficulty that PK and I share is with those who in effect rejected or ignored the New Neoclassical Synthesis. I can think of no reason why the New Classical economist as ‘revolutionary nerd’ should do this, which suggests that SW’s characterization is only half true. Everyone can have their opinion about particular ideas or developments, but it is not normal to largely ignore what one half of the profession is doing. Yet that seems to be what has happened in significant parts of academia.
SW likes to dismiss PK as being out of touch with current macro research. Let’s look at the evidence. PK was very much at the forefront of analyzing the Zero Lower Bound problem, before that problem hit most of the world. While many point to Mike Woodford’s Jackson Hole paper as being the intellectual inspiration behind recent changes at the Fed, the technical analysis can be found in Eggertsson and Woodford, 2003. That paper’s introduction first mentions Keynes, and then Krugman’s 1998 paper on Japan. Subsequently we have Eggertsson and Krugman (2010), which is part of a flourishing research program that adds ‘financial frictions’ into the New Keynesian model. You would not think of suggesting that PK is out of touch unless you are in effect dismissing or marginalizing this whole line of research.[2]
I would not describe the state of macro as rotten, because that appears to dismiss what most mainstream macroeconomists are doing. I would however describe it as suffering from two unhelpful biases. The first is methodological: too much of an obsession with microfoundation purity, and too little interest in evidence. The second is ideological: a legacy of the New Classical revolution that refuses to acknowledge the centrality of Keynesian insights to macroeconomics. These biases are a serious problem, partly because they can distort research effort, but also because they encourage policy makers to make major mistakes.[3]
Footnotes
[1] The clash between Monetarism and Keynesianism was mostly a clash about policy: Friedman used the Keynesian theoretical framework, and indeed contributed greatly to it.

[2] It may be legitimate to suggest someone is out of touch with macro theory if they make statements that are just inconsistent with mainstream theory, without acknowledging this to be the case. The example that most obviously comes to mind is statements like these, about the impact of fiscal policy.

[3] In the case of the UK, a charitable explanation for the Conservative opposition to countercyclical fiscal policy and their embrace of austerity was that they believed conventional monetary policy could always stabilize the economy. If they had taken on board PK’s analysis of Japan, or Eggertsson and Woodford, they would not have made that mistake.
Update: Noah Smith also comments.

Sunday, December 16, 2012

'Mistaking Models for Reality'

Simon Wren-Lewis takes issue with Stephen Williamson's claim that "there are good reasons to think that the welfare losses from wage/price rigidity are small":

Mistaking models for reality, by Simon Wren-Lewis: In a recent post, Paul Krugman used a well known Tobin quote: it takes a lot of Harberger triangles to fill an Okun gap. For non-economists, this means that the social welfare costs of resource misallocations because prices are ‘wrong’ (because of monopoly, taxation etc) are small compared to the costs of recessions. Stephen Williamson takes issue with this idea. His argument can be roughly summarized as follows:

1) Keynesian recessions arise because prices are sticky, and therefore 'wrong', so their costs are not fundamentally different from resource misallocation costs.

2) Models of price stickiness exaggerate these costs, because their microfoundations are dubious.

3) If the welfare costs of price stickiness were significant, why are they not arbitraged away?

I’ve heard these arguments, or variations on them, many times before.[1] So let’s see why they are mistaken...

But I want to focus on this. How useful are representative agent models, e.g. New Keynesian models, for examining questions such as the costs of unemployment?:

Let’s move from wage and price stickiness to the major cost of recessions: unemployment. The way that this is modeled in most New Keynesian set-ups based on representative agents is that workers cannot supply as many hours as they want. In that case, workers suffer the cost of lower incomes, but at least they get the benefit of more leisure. Here is a triangle maybe (see Nick Rowe again.) Now this grossly underestimates the cost of recessions. One reason is heterogeneity: many workers carry on working the same number of hours in a recession, but some become unemployed. Standard consumer theory tells us this generates larger aggregate costs, and with more complicated models this can be quantified. However, the more important reason, which follows from heterogeneity, is that the long-term unemployed typically do not console themselves with the thought that at least they have more leisure time and so are not so badly off. Instead they feel rejected, inadequate, despairing, and it scars them for life. Now that may not be in the microfounded models, but that does not make these feelings disappear, and certainly does not mean they should be ignored.

It is for this reason that I have always had mixed feelings about representative agent models that measure the costs of recessions and inflation in terms of the agent’s utility.[2] In terms of modeling it has allowed business cycle costs to be measured using the same metric as the costs of distortionary taxation and under/over provision of public goods, which has been great for examining issues involving fiscal policy, for example. Much of my own research over the last decade has used this device. But it does ignore the more important reasons why we should care about recessions. Which is perhaps OK, as long as we remember this. The moment we actually think we are capturing the costs of recessions using our models in this way, we once again confuse models with reality.
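To see the force of the heterogeneity point in the passage above, here is a minimal sketch in Python. It assumes CRRA utility with a made-up risk-aversion coefficient and made-up consumption numbers (none of this comes from Wren-Lewis's post); it simply compares the average utility cost of a 5% aggregate consumption loss when the loss is shared evenly with the cost when the same aggregate loss is concentrated on a small group who lose most of their income.

```python
import numpy as np

# Assumed CRRA utility with relative risk aversion sigma = 2; all numbers here
# are illustrative, not taken from the post.
sigma = 2.0

def u(c):
    return (c ** (1 - sigma) - 1) / (1 - sigma)

rng = np.random.default_rng(0)
n = 100_000
c_normal = np.ones(n)                       # consumption normalized to 1 in normal times

# Case 1: a 5% aggregate consumption loss shared evenly by everyone.
c_shared = 0.95 * c_normal

# Case 2: the same 5% aggregate loss concentrated on 6.25% of workers, each of
# whom falls to consumption of 0.2 (0.0625 * 0.8 = 0.05 of aggregate consumption).
c_concentrated = c_normal.copy()
hit = rng.choice(n, size=int(0.0625 * n), replace=False)
c_concentrated[hit] = 0.2

loss_shared = u(c_normal).mean() - u(c_shared).mean()
loss_concentrated = u(c_normal).mean() - u(c_concentrated).mean()
print(f"average utility loss, evenly shared loss:  {loss_shared:.3f}")
print(f"average utility loss, concentrated loss:   {loss_concentrated:.3f}")
```

With concave utility the concentrated loss comes out roughly five times larger here even though the aggregate consumption loss is identical, which is one sense in which representative-agent welfare calculations understate the cost of recessions.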

What does he mean by confusing models with reality?:

The problem with modeling price rigidity is that there are too many plausible reasons for this rigidity - too many microfoundations. (Alan Blinder’s work is a classic reference here.) Microfounded models typically choose one for tractability. It is generally possible to pick holes in any particular tractable story behind price rigidity (like Calvo contracts). But it does not follow that these models of Keynesian business cycles exaggerate the size of recessions. It seems much more plausible to argue completely the opposite: because microfounded models typically only look at one source of nominal rigidity, they underestimate its extent and costs.

I could make the same point in a slightly different way. Let’s suppose that we do not fully understand what causes recessions. What we do understand, in the simple models we use, accounts for small recessions, but not large ones. Therefore, large recessions cannot exist. The logic is obviously faulty, but too many economists argue this way. There is a danger that, in only ‘modeling what we understand’, modelers go on to confuse models with reality.

Let me add that while this is a good argument for why the measured costs only establish a minimum bound for the total costs, I am not sure we can be confident they do that. The reason is that I am not convinced that wage and price rigidities as modeled in the New Keynesian framework adequately capture the transmission mechanism from shocks to real effects that propelled us into the Great Recession. That is, do we really think that wage and price rigidities of the Calvo variety (or of the Rotemberg variety) are the main friction behind the downturn and the struggle to recover? If prices were perfectly flexible, would our problems be over? Would they have never begun in the first place? More flexibility in housing prices might help, but the problem was a breakdown in financial intermediation, which in turn caused problems for the real sector. Capturing these effects requires abandoning the representative agent framework, connecting the real and financial sectors, and then endogenizing financial cycles. There is progress on this front, but in my view existing models are simply unable to adequately capture these effects.

If this is true, if existing models do not adequately capture the transmission of financial shocks to changes in output and employment, if our models miss a fundamental mechanism at work in the recession, why should we believe estimates of fiscal multipliers, welfare effects, and so on based upon models that assume shocks are transmitted through moderate price rigidities? I think these models are good at capturing mild business cycles like those we experienced during the Great Moderation, but I question their value in large, persistent recessions induced by large financial shocks.

[For more on macro models, see Paul Krugman's The Dismal State of the Dismal Science and the links he provides in his discussion.]

Monday, November 26, 2012

Lucas Interview

Stephen Williamson notes an interview of Robert Lucas:

SED Newsletter: Lucas Interview: The November 2012 SED Newsletter has ... an interview with Robert Lucas, which is a gem. Some excerpts:

... Microfoundations:

ED: If the economy is currently in an unusual state, do micro-foundations still have a role to play?
RL: "Micro-foundations"? We know we can write down internally consistent equilibrium models where people have risk aversion parameters of 200 or where a 20% decrease in the monetary base results in a 20% decline in all prices and has no other effects. The "foundations" of these models don't guarantee empirical success or policy usefulness.
What is important---and this is straight out of Kydland and Prescott---is that if a model is formulated so that its parameters are economically interpretable, they will have implications for many different data sets. An aggregate theory of consumption and income movements over time should be consistent with cross-section and panel evidence (Friedman and Modigliani). An estimate of risk aversion should fit the wide variety of situations involving uncertainty that we can observe (Mehra and Prescott). Estimates of labor supply should be consistent with aggregate employment movements over time as well as cross-section, panel, and lifecycle evidence (Rogerson). This kind of cross-validation (or invalidation!) is only possible with models that have clear underlying economics: micro-foundations, if you like.

This is bread-and-butter stuff in the hard sciences. You try to estimate a given parameter in as many ways as you can, consistent with the same theory. If you can reduce a 3 orders of magnitude discrepancy to 1 order of magnitude you are making progress. Real science is hard work and you take what you can get.

"Unusual state"? Is that what we call it when our favorite models don't deliver what we had hoped? I would call that our usual state.

Friday, November 23, 2012

'Imagine Economists had Warned of a Financial Crisis'

Chris Dillow:

... Imagine economists had widely and credibly warned of a financial crisis in the mid-00s. People would have responded to such warnings by lending less and borrowing less (I'm ignoring agency problems here). But this would have resulted in less gearing and so no crisis. There would now be a crisis in economics as everyone wondered why the disaster we predicted never happened. ...
His main point, however, revolves around Keynes' statement that "If economists could manage to get themselves thought of as humble, competent people on a level with dentists, that would be splendid":
I suspect there's another reason why economics is thought to be in crisis. It's because, as Coase says, (some? many?) economists lost sight of ordinary life and people, preferring to be policy advisors, theorists or - worst of all - forecasters.

In doing this, many stopped even trying to pursue Keynes' goal. What sort of reputation would dentists have if they stopped dealing with people's teeth and preferred to give the government advice on dental policy, tried to forecast the prevalence of tooth decay or called for new ways of conceptualizing mouths?

Perhaps, then, the problem with economists is that they failed to consider what function the profession can reasonably serve.

Tuesday, November 06, 2012

'Four Top Economists Huddled Round a Lunch Table'

Aditya Chakrabortty:

...As one of the 10 most expensive private colleges in the US, Carnegie Mellon in Pittsburgh almost oppresses visitors with neo-gothic grandness... I was a guest of Carol Goldburg, the director of CMU's undergraduate economics program, who had gathered a few colleagues to give their take on the presidential election. Here were four top economists huddled round a lunch table: they were surely going to regale me with talk of labor-market policy, global imbalances, marginal tax rates.
My opener was an easy ball: how did they think President Obama had done? Sevin Yeltekin, an expert on political economy, was the first to respond: "He hasn't delivered on a lot of his promises, but he inherited a big mess. I'd give him a solid B."
I threw the same question to her neighbor and one of America's most renowned rightwing economists, Allan Meltzer. He snapped: "A straight F: he took a mess and made it even bigger." Then came Goldburg, now wearing the look of a hostess whose guests are falling out: "Well, I'm concerned about welfare and poverty, and Obama's tried hard on those issues." A tentative pause. "B-minus?"
Finally it was the turn of Bennett McCallum, author of such refined works as Multiple-Solution Indeterminacies in Monetary Policy Analysis. Surely he would bring the much-needed technical ballast? Um, no. "D: he's trying to turn this country into France."
Some of these comments were surely made for the benefit of their audience: faced with a mere scribbler, the scholars had evidently decided to hold the algebra, and instead talk human. Even so, this was a remarkable row. Here were four economists on the same faculty, who probably taught some of the same students; yet Obama's reputation depended entirely on who was doing the assessment. The president was either B or F, good or a failure: opposite poles with no middle ground, and not even agreement on the judging criteria. ...

Monday, November 05, 2012

Maurizio Bovi: Are You a Good Econometrician? No, I am British (With a Response from George Evans)

Via email, Maurizio Bovi describes a paper of his on adaptive learning (M. Bovi (2012). "Are the Representative Agent’s Beliefs Based on Efficient Econometric Models?" Journal of Economic Dynamics and Control). A colleague of mine, George Evans -- a leader in this area -- responds:

Are you a good econometrician? No, I am British!, by Maurizio Bovi*: A typical assumption of mainstream strands of research is that agents’ expectations are grounded in efficient econometric models. Muthian agents are all equally rational and know the true model. The adaptive learning literature assumes that agents are boundedly rational in the sense that they are as smart as econometricians and that they are able to learn the correct model. The predictor choice approach argues that individuals are boundedly rational in the sense that agents switch to the forecasting rule that has the highest fitness. Preferences could generate enduring inertia in the dynamic switching process, and a stationary environment for a sufficiently long period is necessary to learn the correct model. Having said this, all the cited approaches typically argue that there is a general tendency to forecast via optimal forecasting models because of the costs stemming from inefficient predictions.
To the extent that the representative agent’s beliefs i) are based on efficient (in terms of minimum MSE, i.e., mean squared forecasting error) econometric models, and ii) can be captured by ad hoc surveys, two basic facts emerge, stimulating my curiosity. First, in economic systems where the same simple model turns out to be the best predictor for a sufficient span of time, survey expectations should tend to converge: more and more individuals should learn or select it. Second, the forecasting fitness of this enduring minimum-MSE econometric model should not be further enhanced by the use of information provided by survey expectations. If agents act as if they were statisticians in the sense that they use efficient forecasting rules, then survey-based beliefs must reflect this and cannot contain any statistically significant information that helps reduce the MSE relative to the best econometric predictor. In sum, there could be some value in analyzing hard data and survey beliefs to understand i) whether these latter derive from optimal econometric models and ii) the time connections between survey-declared and efficient model-grounded expectations. By examining real-time GDP dynamics in the UK I have found that, over a time-span of two decades, the adaptive expectations (AE) model systematically outperforms other standard predictors which, as argued by the literature recalled above, should be in the tool-box of representative econometricians (Random Walk, ARIMA, VAR). As mentioned, this peculiar environment should eventually lead to increased homogeneity in best-model based expectations. However, data collected in the surveys managed by the Business Surveys Unit of the European Commission (European Commission, 2007) highlight that great variety in expectations persists. Figure 1 shows that in the UK the numbers of optimists and pessimists have tended to be rather similar at least since the inception of data availability (1985).[1]

[Figure 1: Numbers of optimists and pessimists in UK survey expectations, 1985 onward]

In addition, evidence points to one-way information flows going from survey data to econometric models. In particular, Granger-causality, variance decomposition and Geweke’s instantaneous feedback tests suggest that the accuracy of the AE forecasting model can be further enhanced by the use of the information provided by the level of disagreement across survey beliefs. That is, as per GDP dynamics in the UK, the expectation feedback system looks like an open loop where possibly non-econometrically based beliefs play a key role with respect to realizations. All this affects the general validity of the widespread assumption that representative agents’ beliefs derive from optimal econometric models.
Results are robust to several methods of quantifications of qualitative survey observations as well as to standard forecasting rules estimated both recursively and via optimal-size rolling windows. They are also in line both with the literature supporting the non-econometrically-based content of the information captured by surveys carried out on laypeople and, interpreting MSE as a measure of volatility, with the stylized fact on the positive correlation between dispersion in beliefs and macroeconomic uncertainty.
All in all, our evidence raises some intriguing questions: Why do representative UK citizens seem to be systematically more boundedly rational than is usually hypothesized in the adaptive learning literature and the predictor choice approach? What persistently hampers them from using the most accurate statistical model? Are there econometric (objective) or psychological (subjective) impediments?
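To make concrete what an MSE horse race between an adaptive-expectations predictor and a random walk looks like, here is a minimal Python sketch. It uses a simulated growth series and made-up parameter values, so it only illustrates the mechanics Bovi describes; it does not reproduce his real-time UK results.

```python
import numpy as np

# Simulated "GDP growth" series (an assumption; not Bovi's real-time UK data).
rng = np.random.default_rng(1)
T = 200
g = np.empty(T)
g[0] = 2.0
for t in range(1, T):                       # persistent AR(1) growth, parameters made up
    g[t] = 0.5 + 0.75 * g[t - 1] + rng.normal(scale=0.8)

def ae_forecasts(g, gamma):
    """Adaptive expectations: f_t = f_{t-1} + gamma * (g_{t-1} - f_{t-1})."""
    f = np.empty_like(g)
    f[0] = g[0]
    for t in range(1, len(g)):
        f[t] = f[t - 1] + gamma * (g[t - 1] - f[t - 1])
    return f

rw = np.concatenate(([g[0]], g[:-1]))       # random-walk forecast: next value = last observed

for gamma in (0.2, 0.5, 0.8):
    mse = np.mean((g[1:] - ae_forecasts(g, gamma)[1:]) ** 2)
    print(f"adaptive expectations, gamma = {gamma}: MSE = {mse:.3f}")
print(f"random walk forecast: MSE = {np.mean((g[1:] - rw[1:]) ** 2):.3f}")
```

In Bovi's exercise the candidate predictors are estimated on real-time data and the survey disagreement series is then tested for additional forecasting content; the sketch above only shows the first, accuracy-ranking step.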
____________________
*Italian National Institute of Statistics (ISTAT), Department of Forecasting and Economic Analysis. The opinions expressed herein are those of the author (E-mail mbovi@istat.it) and do not necessarily reflect the views of ISTAT.
[1] The question is “How do you expect the general economic situation in the country to develop over the next 12 months?” Respondents may reply “it will…: i) get a lot better, ii) get a little better, iii) stay the same, iv) get a little worse, v) get a lot worse, vi) I do not know.” See European Commission (2007).
References
European Commission (2007). The Joint Harmonised EU Programme of Business and Consumer Surveys, User Guide, European Commission, Directorate-General for Economic and Financial Affairs, July.
M. Bovi (2012). “Are the Representative Agent’s Beliefs Based on Efficient Econometric Models?” Journal of Economic Dynamics and Control DOI: 10.1016/j.jedc.2012.10.005.

Here's the response from George Evans:

Comments on Maurizio Bovi, “Are the Representative Agent’s Beliefs Based on Efficient Econometric Models?”, by George Evans, University of Oregon: This is an interesting paper that has a lot of common ground with the adaptive learning literature. The techniques and a number of the arguments will be familiar to those of us who work in adaptive learning. The tenets of the adaptive learning approach can be summarized as follows: (1) Fully “rational expectations” (RE) are implausibly strong and implicitly ignore a coordination issue that arises because economic outcomes are affected by the expectations of firms and households (economic “agents”). (2) A more plausible view is that agents have bounded rationality with a degree of rationality comparable to economists themselves (the “cognitive consistency principle”). For example agents’ expectations might be based on statistical models that are revised and updated over time. On this approach we avoid assuming that agents are smarter than economists, but we also recognize that agents will not go on forever making systematic errors. (3) We should recognize that economic agents, like economists, do not agree on a single forecasting model. The economy is complex. Therefore, agents are likely to use misspecified models and to have heterogeneous expectations.
The focus of the adaptive learning literature has changed over time. The early focus was on whether agents using statistical learning rules would or would not eventually converge to RE, while the main emphasis now is on the ways in which adaptive learning can generate new dynamics, e.g. through discounting of older data and/or switching between forecasting models over time. I use the term “adaptive learning” broadly, to include, for example, the dynamic predictor selection literature.
Bovi’s paper “Are the Representative Agent’s Beliefs Based on Efficient Econometric Models?” argues that with respect to GDP growth in the UK the answer to his question is no because 1) there is a single efficient econometric model, which is a version of AE (adaptive expectations), and 2) agents might therefore be expected to have learned to adopt this optimal forecasting model over time. However, the degree of heterogeneity of expectations has not fallen over time, and thus agents are failing to learn to use the best forecasting model.
From the adaptive learning perspective, Bovi’s first result is intriguing, and merits further investigation, but his approach will look very familiar to those of us who work in adaptive learning. And the second point will surprise few of us: the extent of heterogeneous expectations is well-known, as is the fact that expectations remain persistently heterogeneous, and there is considerable work within adaptive learning that models this heterogeneity.
More specifically:
1) Bovi’s “efficient” model uses AE with the adaptive expectations parameter gamma updated over time in a way that aims to minimize the squared forecast error. This is in fact a simple adaptive learning model, which was proposed and studied in Evans and Ramey, “Adaptive expectations, underparameterization and the Lucas critique”, Journal of Monetary Economics (2006). We there suggested that agents might want to use AE as an optimal choice for a parsimonious (underparameterized) forecasting rule, showed what would determine the optimal choice of gamma, and provided an adaptive learning algorithm that would allow agents to update their choice of gamma over time in order to track unknown structural change. (Our adaptive learning rule exploits the fact that AE can be viewed as the forecast that arises from an IMA(1,1) time-series model, and in our rule the MA parameter is estimated and updated recursively using a constant gain rule.)
2) At the same time, I doubt that economists would agree that there is a single best way to forecast GDP growth. For the US there is a lot of work by numerous researchers that strongly indicates that (i) choosing between univariate time-series models is controversial, i.e. there appears to be no single clearly best univariate forecasting model, and (ii) forecasting models for GDP growth should be multivariate and should include both current and lagged unemployment rates and the consumption-to-GDP ratio. Other forecasters have found a role for nonlinear (Markov-switching) dynamics. Thus I doubt that there will be agreement by economists on a single best forecasting model for GDP growth or other key macro variables. Hence we should expect households and firms also to entertain multiple forecasting models, and for different agents to use different models.
3) Even if there were a single forecasting model that clearly dominated, one would not expect homogeneity of expectations across agents or for heterogeneity to disappear over time. In Evans and Honkapohja, “Learning as a Rational Foundation for Macroeconomics and Finance”, forthcoming 2013 in R Frydman and E Phelps, Rethinking Expectations: The Way Forward for Macroeconomics, we point out that variations across agents in the extent of discounting and the frequency with which agents update parameter estimates, as well as the inclusion of idiosyncratic exogenous expectation shocks, will give rise to persistent heterogeneity. There are costs to forecasting, and some agents will have larger benefits from more accurate forecasts than other agents. For example, for some agents the forecast method advocated by Bovi will be too costly and an even simpler forecast will be adequate (e.g. a RW forecast that the coming year will be like last year, or a forecast based on mean growth over, say, the last five years).
4) When there are multiple models potentially in play, as there always are, the dynamic predictor selection approach initiated by Brock and Hommes means that because of varying costs of forecast methods, and heterogeneous costs across agents, not all agents will want to use what appears to be the best-performing model. We therefore expect heterogeneous expectations at any moment in time. I do not regard this as a violation of the cognitive consistency principle – even economists will find that in some circumstances in their personal decision-making they use more boundedly rational forecast methods than in other situations in which the stakes are high.
In conclusion, here is my two sentence summary for Maurizio Bovi: Your paper will find an interested audience among those of us who work in this area. Welcome to the adaptive learning approach. 
George Evans
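Evans's point (1), that adaptive expectations can be read as the optimal forecast of an IMA(1,1) process, is easy to check numerically. Below is a minimal Python sketch on simulated data with an assumed MA parameter; it is not the Evans-Ramey constant-gain algorithm itself, just a grid search showing that the MSE-minimizing AE gain sits near 1 - theta, as that interpretation implies.

```python
import numpy as np

# Simulate y_t = y_{t-1} + e_t - theta * e_{t-1} (an IMA(1,1) process) and check
# that the MSE-minimizing adaptive-expectations gain is close to 1 - theta.
# theta and the sample size are assumptions chosen only for illustration.
rng = np.random.default_rng(2)
theta, T = 0.6, 50_000
e = rng.normal(size=T)
y = np.cumsum(e - theta * np.concatenate(([0.0], e[:-1])))

def ae_mse(y, gamma):
    """Mean squared one-step-ahead error of adaptive expectations with gain gamma."""
    f, se = y[0], 0.0
    for t in range(1, len(y)):
        se += (y[t] - f) ** 2
        f += gamma * (y[t] - f)             # exponential-smoothing / AE update
    return se / (len(y) - 1)

grid = np.linspace(0.05, 0.95, 19)
best = grid[int(np.argmin([ae_mse(y, g) for g in grid]))]
print(f"MSE-minimizing gamma on the grid: {best:.2f}  (theory: 1 - theta = {1 - theta:.2f})")
```

The Evans-Ramey procedure goes a step further by updating the estimated MA parameter recursively with a constant gain so that the implied AE gain can track structural change; the static grid above is just the simplest way to see the mapping.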

Wednesday, October 31, 2012

'The Role of Money in New-Keynesian Models'

Should New Keynesian models include a specific role for money (over and above specifying the interest rate as the policy variable)? This is a highly wonkish, but mostly accessible explanation from Bennett McCallum:

The Role of Money in New-Keynesian Models, by Bennett T. McCallum, Carnegie Mellon University and National Bureau of Economic Research, Working Paper No. 2012-019, Serie de Documentos de Trabajo (Working Paper Series), October 2012

Here's the bottom line:

...we drew several conclusions supportive of the idea that a central bank that ignores money and banking will seriously misjudge the proper interest rate policy action to stabilize inflation in response to a productivity shock in the production function for output. Unfortunately, some readers discovered an error; we made a mistake in linearization that, when corrected, greatly diminished the magnitude of some of the effects of including the banking sector. There seems now to be some interest in developing improved models of this type. Marvin Goodfriend (MG) is working with a PhD student on this topic. At this point I have not been able to give a convincing argument that one needs to include M. ...
There is one respect in which it is nevertheless the case that a rule for the monetary base is superior to a rule for the interbank interest rate. In this context we are clearly discussing the choice of a controllable instrument variable—not one of the "target rules" favored by Svensson and Woodford, which are more correctly called "targets." Suppose that the central bank desires for its rule to be verifiable by the public. Then it will arguably need to be a non-activist rule, one that normally keeps the instrument setting unchanged over long spans of time. In that case we know that in the context of a standard NK model, an interest rate instrument will not be viable. That is, the rule will not satisfy the Taylor Principle, which is necessary for "determinacy." The latter condition is not, I argue, what is crucial for well-designed monetary policy, but least-squares (LS) learnability is, and it is not present when the TP is not satisfied. This is well known from, e.g., Evans and Honkapohja (2001), Bullard and Mitra (2002), McCallum (2003, 2009). ...
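For readers who want to see what the determinacy condition McCallum refers to amounts to in practice, here is a minimal sketch. It uses the textbook three-equation New Keynesian block with a simple rule i_t = phi_pi * pi_t and assumed parameter values (x is the output gap, pi is inflation; this illustrates the standard Taylor-principle calculation, not McCallum's banking model): stack the system as E_t z_{t+1} = M z_t with z = (x, pi) and check whether both eigenvalues of M lie outside the unit circle, the determinacy requirement when both variables are non-predetermined.

```python
import numpy as np

# Textbook three-equation NK block (a sketch; assumed parameters, not McCallum's model):
#   x_t  = E_t x_{t+1} - (1/sigma) * (i_t - E_t pi_{t+1})
#   pi_t = beta * E_t pi_{t+1} + kappa * x_t
#   i_t  = phi_pi * pi_t
# Stacked as A0 * E_t z_{t+1} = A1 * z_t with z = (x, pi)'. With two non-predetermined
# variables, determinacy requires both eigenvalues of M = A0^{-1} A1 to lie outside
# the unit circle, which here reduces to the Taylor principle phi_pi > 1.
beta, sigma, kappa = 0.99, 1.0, 0.1

def eigenvalues(phi_pi):
    A0 = np.array([[1.0, 1.0 / sigma],
                   [0.0, beta]])
    A1 = np.array([[1.0, phi_pi / sigma],
                   [-kappa, 1.0]])
    return np.linalg.eigvals(np.linalg.solve(A0, A1))

for phi_pi in (0.8, 1.5):
    moduli = np.abs(eigenvalues(phi_pi))
    print(f"phi_pi = {phi_pi}: |eigenvalues| = {np.round(moduli, 3)}, "
          f"determinate = {bool(np.all(moduli > 1))}")
```

With these numbers, phi_pi = 1.5 yields two eigenvalues outside the unit circle (determinacy) while phi_pi = 0.8 leaves one inside; McCallum's further claim in the excerpt is that it is learnability, not determinacy per se, that matters, and that it too fails when the Taylor Principle is violated.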

Monday, October 29, 2012

What’s the Use of Economics?

Alan Kirman on how macroeconomics needs to change (I'm still thinking about his idea that the economy should be modeled as "a system which self organizes, experiencing sudden and large changes from time to time"):

What’s the use of economics?, by Alan Kirman, Vox EU: The simple question raised during a recent conference organized by Diane Coyle at the Bank of England was to what extent the teaching of economics has been, or should be, modified in the light of the current economic crisis. The simple answer is that the economics profession is unlikely to change. Why would economists be willing to give up much of their human capital, painstakingly nurtured for over two centuries? For macroeconomists in particular, the reaction has been to suggest that modifications of existing models to take account of ‘frictions’ or ‘imperfections’ will be enough to account for the current evolution of the world economy. The idea is that once students have understood the basics, they can be introduced to these modifications.

A turning point in economics

However, other economists such as myself feel that we have finally reached the turning point in economics where we have to radically change the way we conceive of and model the economy. The crisis is an opportune occasion to carefully investigate new approaches. Paul Seabright hit the nail on the head; economists tend to inaccurately portray their work as a steady and relentless improvement of their models whereas, actually, economists tend to chase an empirical reality that is changing just as fast as their modeling. I would go further; rather than making steady progress towards explaining economic phenomena professional economists have been locked into a narrow vision of the economy. We constantly make more and more sophisticated models within that vision until, as Bob Solow put it, “the uninitiated peasant is left wondering what planet he or she is on” (Solow 2006).

In this column, I will briefly outline some of the problems the discipline of economics faces; problems that have been shown up in stark relief during the current crisis. Then I will come back to what we should try to teach students of economics.

Entrenched views on theory and reality

The typical attitude of economists is epitomized by Mario Draghi, President of the European Central Bank. Regarding the Eurozone crisis, he said:

“The first thing that came to mind was something that people said many years ago and then stopped saying it: The euro is like a bumblebee. This is a mystery of nature because it shouldn’t fly but instead it does. So the euro was a bumblebee that flew very well for several years. And now – and I think people ask ‘how come?’ – probably there was something in the atmosphere, in the air, that made the bumblebee fly. Now something must have changed in the air, and we know what after the financial crisis. The bumblebee would have to graduate to a real bee. And that’s what it’s doing” (Draghi 2012)

What Draghi is saying is that, according to our economic models, the Eurozone should not have flown. Entomologists (those who study insects) of old, with simpler models, came to the conclusion that bumblebees should not be able to fly. Their reaction was to later rethink their models in light of irrefutable evidence. Yet, the economist’s instinct is to attempt to modify reality in order to fit a model that has been built on longstanding theory. Unfortunately, that very theory is itself based on shaky foundations.

Economic theory can mislead

Every student in economics is faced with the model of the isolated optimizing individual who makes his choices within the constraints imposed by the market. Somehow, the axioms of rationality imposed on this individual are not very convincing, particularly to first-time students. But the student is told that the aim of the exercise is to show that there is an equilibrium: that there can be prices that will clear all markets simultaneously. And, furthermore, the student is taught that such an equilibrium has desirable welfare properties. Importantly, the student is told that since the 1970s it has been known that whilst such a system of equilibrium prices may exist, we cannot show that the economy would ever reach an equilibrium nor that such an equilibrium is unique.

The student then moves on to macroeconomics and is told that the aggregate economy or market behaves just like the average individual she has just studied. She is not told that these general models in fact poorly reflect reality. For the macroeconomist, this is a boon since he can now analyze the aggregate allocations in an economy as though they were the result of the rational choices made by one individual. The student may find this even more difficult to swallow when she is aware that peoples’ preferences, choices and forecasts are often influenced by those of the other participants in the economy. Students take a long time to accept the idea that the economy’s choices can be assimilated to those of one individual.

A troubling choice for macroeconomists

Macroeconomists are faced with a stark choice: either move away from the idea that we can pursue our macroeconomic analysis whilst only making assumptions about isolated individuals, ignoring interaction; or avoid all the fundamental problems by assuming that the economy is always in equilibrium, forgetting about how it ever got there.

Exogenous shocks? Or a self-organizing system?

Macroeconomists therefore worry about something that seems, to the uninformed outsider, paradoxical. How does the economy experience fluctuations or cycles whilst remaining in equilibrium? The basic macroeconomic idea is, of course, that the economy is in a steady state and that it is hit from time to time by exogenous shocks. Yet, this is entirely at variance with the idea that economists may be dealing with a system which self organizes, experiencing sudden and large changes from time to time.

There are two reasons why the latter explanation is better than the former. First, it is very difficult to find significant events that we can point to in order to explain major turning points in the evolution of economies. Second, the idea that the economy is sailing on an equilibrium path but is from time to time buffeted by unexpected storms just does not pass what Bob Solow has called the ‘smell test’. To quote Willem Buiter (2009),

“Those of us who worry about endogenous uncertainty arising from the interactions of boundedly rational market participants cannot but scratch our heads at the insistence of the mainline models that all uncertainty is exogenous and additive”

Some teaching suggestions

New thinking is imperative:

  • We should spend more time insisting on the importance of coordination as the main problem of modern economies rather than efficiency. Our insistence on the latter has diverted attention from the former.
  • We should cease to insist on the idea that the aggregation of the choices and actions of individuals who directly interact with each other can be captured by the idea of the aggregate acting as only one of these many individuals. The gap between micro- and macrobehavior is worrying.
  • We should recognize that some of the characteristics of aggregates are caused by aggregation itself. The continuous reaction of the aggregate may be the result of individuals making simple, binary discontinuous choices. For many phenomena, it is much more realistic to think of individuals as having thresholds - which cause them to react - rather than reacting in a smooth, gradual fashion to changes in their environment (see the sketch after this list). Cournot had this idea; it is a pity that we have lost sight of it. Indeed, the aggregate itself may also have thresholds which cause it to react. When enough individuals make a particular choice, the whole of society may then move. When the number of individuals is smaller, there is no such movement. One has only to think of the results of voting.
  • All students should be obliged to collect their own data about some economic phenomenon at least once in their career. They will then get a feeling for the importance of institutions and of the interaction between agents and its consequences. Perhaps, best of all, this will restore their enthusiasm for economics!
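Kirman's threshold point is easy to illustrate. The following minimal Python sketch uses hypothetical numbers, in the spirit of Granovetter-style threshold models rather than anything Kirman specifies: when individual thresholds are dispersed, binary individual choices add up to a smooth aggregate response; when everyone shares the same threshold, the aggregate itself jumps.

```python
import numpy as np

# Binary individual choices, smooth aggregate: each person acts once a driving
# variable x exceeds a personal threshold. Thresholds and x values are made up.
rng = np.random.default_rng(3)
n = 10_000
dispersed = rng.normal(loc=0.0, scale=1.0, size=n)    # heterogeneous thresholds
identical = np.zeros(n)                                # everyone shares threshold 0

for x in (-1.0, -0.1, 0.1, 1.0):
    share_dispersed = np.mean(x > dispersed)
    share_identical = np.mean(x > identical)
    print(f"x = {x:+.1f}: share acting, dispersed thresholds = {share_dispersed:.2f}, "
          f"identical thresholds = {share_identical:.2f}")
```

With dispersed thresholds the share acting rises gradually with x (it traces out the threshold distribution), while with identical thresholds it flips from zero to one all at once, which is one way the gap between micro and macro behavior can open up.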

Some use for traditional theory

Does this mean that we should cease to teach ‘standard’ economic theory to our students? Surely not. If we did so, these students would not be able to follow the current economic debates. As Max Planck has said, “Physics is not about discovering the natural laws that govern the universe, it is what physicists do”. For the moment, standard economics is what economists do. But we owe it to our students to point out difficulties with the structure and assumptions of our theory. Although we are still far from a paradigm shift, in the longer run the paradigm will inevitably change. We would all do well to remember that current economic thought will one day be taught as history of economic thought.

References

Buiter, W (2009), “The unfortunate uselessness of most ‘state of the art’ academic monetary economics”, Financial Times online, 3 March.
Coyle, D (2012) “What’s the use of economics? Introduction to the Vox debate”, VoxEu.org, 19 September.
Davies, H (2012), “Economics in Denial”, ProjectSyndicate.org, 22 August.
Solow, R (2006), “Reflections on the Survey” in Colander, D., The Making of an Economist. Princeton, Princeton University Press.

Friday, October 12, 2012

'Some Unpleasant Properties of Log-Linearized Solutions when the Nominal Interest Rate is Zero'

[This one is wonkish. It's (I think) one of the more important papers from the St. Louis Fed conference.]

One thing that doesn't get enough attention in DSGE models, at least in my opinion, is the constraints, implicit assumptions, etc. imposed when the theoretical model is log-linearized. This paper by Tony Braun and Yuichiro Waki helps to fill that void by comparing a theoretical true economy to its log-linearized counterpart, and showing that the results of the two models can be quite different when the economy is at the zero bound. For example, multipliers that are greater than two in the log-linearized version are smaller -- usually near one -- in the true model (thus, fiscal policy remains effective, but may need to be more aggressive than the log-linear model would imply). Other results change as well, and there are sign changes in some cases, leading the authors to conclude that "we believe that the safest way to proceed is to entirely avoid the common practice of log-linearizing the model around a stable price level when analyzing liquidity traps."

Here's part of the introduction and the conclusion to the paper:

Some Unpleasant Properties of Log-Linearized Solutions when the Nominal Interest Rate is Zero, by Tony Braun and Yuichiro Waki: Abstract Does fiscal policy have qualitatively different effects on the economy in a liquidity trap? We analyze a nonlinear stochastic New Keynesian model and compare the true and log-linearized equilibria. Using the log-linearized equilibrium conditions the answer to the above question is yes. However, for the true nonlinear model the answer is no. For a broad range of empirically relevant parameterizations labor falls in response to a tax cut in the log-linearized economy but rises in the true economy. While the government purchase multiplier is above two in the log-linearized economy it is about one in the true economy.
1 Introduction The recent experiences of Japan, the United States, and Europe with zero/near-zero nominal interest rates have raised new questions about the conduct of monetary and fiscal policy in a liquidity trap. A large and growing body of new research has emerged that provides answers using New Keynesian (NK) frameworks that explicitly model the zero bound on the nominal interest rate. One conclusion that has emerged is that fiscal policy has different effects on the economy when the nominal interest rate is zero. Eggertsson (2011) finds that hours worked fall in response to a labor tax cut when the nominal interest rate is zero, a property that is referred to as the “paradox of toil,” and Christiano, Eichenbaum, and Rebelo (2011), Woodford (2011) and Erceg and Lindé (2010) find that the size of the government purchase multiplier is substantially larger than one when the nominal interest rate is zero.
These and other results (see e.g. Del Negro, Eggertsson, Ferrero, and Kiyotaki (2010), Bodenstein, Erceg, and Guerrieri (2009), Eggertsson and Krugman (2010)) have been derived in setups that respect the nonlinearity in the Taylor rule but log-linearize the remaining equilibrium conditions about a steady state with a stable price level. Log-linearized NK models require large shocks to generate a binding zero lower bound for the nominal interest rate and the shocks must be even larger if these models are to reproduce the measured declines in output and inflation that occurred during the Great Depression or the Great Recession of 2007-2009.[1] Log-linearizations are local solutions that only work within a given radius of the point where the approximation is taken. Outside of this radius these solutions break down (see e.g. Den Haan and Rendahl (2009)). The objective of this paper is to document that such a breakdown can occur when analyzing the zero bound.
We study the properties of a nonlinear stochastic NK model when the nominal interest rate is constrained at its zero lower bound. Our tractable framework allows us to provide a partial analytic characterization of equilibrium and to numerically compute all equilibria when the zero interest state is persistent. There are no approximations needed when computing equilibria and our numerical solutions are accurate up to the precision of the computer. A comparison with the log-linearized equilibrium identifies a severe breakdown of the log-linearized approximate solution. This breakdown occurs when using parameterizations of the model that reproduce the U.S. Great Depression and the U.S. Great Recession.
Conditions for existence and uniqueness of equilibrium based on the log-linearized equilibrium conditions are incorrect and offer little or no guidance for existence and uniqueness of equilibrium in the true economy. The characterization of equilibrium is also incorrect.
These three unpleasant properties of the log-linearized solution have the implication that relying on it to make inferences about the properties of fiscal policy in a liquidity trap can be highly misleading. Empirically relevant parameterization/shock combinations that yield the paradox of toil in the log-linearized economy produce orthodox responses of hours worked in the true economy. The same parameterization/shock combinations that yield large government purchase multipliers in excess of two in the log-linearized economy produce government purchase multipliers as low as 1.09 in the nonlinear economy. Indeed, we find that the most plausible parameterizations of the nonlinear model have the property that there is no paradox of toil and that the government purchase multiplier is close to one.
We make these points using a stochastic NK model that is similar to specifications considered in Eggertsson (2011) and Woodford (2011). The Taylor rule respects the zero lower bound of the nominal interest rate, and a preference discount factor shock that follows a two state Markov chain produces a state where the interest rate is zero. We assume Rotemberg (1996) price adjustment costs, instead of Calvo price setting. When log-linearized, this assumption is innocuous - the equilibrium conditions for our model are identical to those in Eggertsson (2011) and Woodford (2011), with a suitable choice of the price adjustment cost parameter. Moreover, the nonlinear economy doesn’t have any endogenous state variables, and the equilibrium conditions for hours and inflation can be reduced to two nonlinear equations in these two variables when the zero bound is binding.[2]
These two nonlinear equations are easy to solve and are the nonlinear analogues of what Eggertsson (2011) and Eggertsson and Krugman (2010) refer to as “aggregate demand” (AD) and “aggregate supply” (AS) schedules. This makes it possible for us to identify and relate the sources of the approximation errors associated with using log-linearizations to the shapes and slopes of these curves, and to also provide graphical intuition for the qualitative differences between the log-linear and nonlinear economies.
Our analysis proceeds in the following way. We first provide a complete characterization of the set of time-invariant Markov zero bound equilibria in the log-linearized economy. Then we go on to characterize equilibrium of the nonlinear economy. Finally, we compare the two economies and document the nature and source of the breakdowns associated with using log-linearized equilibrium conditions. An important distinction between the nonlinear and log-linearized economy relates to the resource cost of price adjustment. This cost changes endogenously as inflation changes in the nonlinear model, and modeling this cost has significant consequences for the model's properties in the zero bound state. In the nonlinear model a labor tax cut can increase hours worked and decrease inflation when the interest rate is zero. No equilibrium of the log-linearized model has this property. We show that this and other differences in the properties of the two models are precisely due to the fact that the resource cost of price adjustment is absent from the resource constraint of the log-linearized model.[3] ...
...
5 Concluding remarks In this paper we have documented that it can be very misleading to rely on the log-linearized economy to make inferences about existence of an equilibrium, uniqueness of equilibrium or to characterize the local dynamics of equilibrium. We have illustrated that these problems arise in empirically relevant parameterizations of the model that have been chosen to match observations from the Great Depression and Great Recession.
We have also documented the response of the economy to fiscal shocks in calibrated versions of our nonlinear model. We found that the paradox of toil is not a robust property of the nonlinear model and that it is quantitatively small even when it occurs. Similarly, the evidence presented here suggests that the government purchase GDP multiplier is not much above one in our nonlinear economy.
Although we encountered situations where the log-linearized solution worked reasonably well and the model exhibited the paradox of toil and a government purchase multiplier above one, the magnitude of these effects was quantitatively small. This result was also very tenuous. There is no simple characterization of when the log-linearization works well. Breakdowns can occur in regions of the parameter space that are very close to ones where the log-linear solution works. In fact, it is hard to draw any conclusions about when one can safely rely on log-linearized solutions in this setting without also solving the nonlinear model. For these reasons we believe that the safest way to proceed is to entirely avoid the common practice of log-linearizing the model around a stable price level when analyzing liquidity traps.
This raises a question. How should one proceed with solution and estimation of medium or large scale NK models with multiple shocks and endogenous state variables when considering episodes with zero nominal interest rates? One way forward is proposed in work by Adjemian and Juillard (2010) and Braun and Körber (2011). These papers solve NK models using extended path algorithms.
We conclude by briefly discussing some extensions of our analysis. In this paper we assumed that the discount factor shock followed a time-homogeneous two state Markov chain with no shock being the absorbing state. In our current work we relax this final assumption and consider general Markov switching stochastic equilibria in which there are repeated swings between episodes with a positive interest rate and zero interest rates. We are also interested in understanding the properties of optimal monetary policy in the nonlinear model. Eggertsson and Woodford (2003), Jung, Teranishi, and Watanabe (2005), Adam and Billi (2006), Nakov (2008), and Werning (2011) consider optimal monetary policy problems subject to a non-negativity constraint on the nominal interest rate, using implementability conditions derived from log-linearized equilibrium conditions. The results documented here suggest that the properties of an optimal monetary policy could be different if one uses the nonlinear implementability conditions instead.
[1] Eggertsson (2011) requires a 5.47% annualized shock to the preference discount factor in order to account for the large output and inflation declines that occurred in the Great Depression. Coenen, Orphanides, and Wieland (2004) estimate a NK model to U.S. data from 1980-1999 and find that only very large shocks produce a binding zero nominal interest rate.
[2] Under Calvo price setting, in the nonlinear economy a particular moment of the price distribution is an endogenous state variable and it is no longer possible to compute an exact solution to the equilibrium.
[3] This distinction between the log-linearized and nonlinear resource constraint is not specific to our model of adjustment costs but also arises under Calvo price adjustment (see e.g. Braun and Waki (2010)).

Thursday, October 04, 2012

'Economists Played a Special Role in Contributing to the Problem'

This is from Andrew Haldane, Executive Director, Financial Stability, Bank of England:

What have the economists ever done for us?, by Andrew G Haldane, Vox EU: There is a long list of culprits when it comes to assigning blame for the financial crisis. At least in this instance, failure has just as many parents as success. But among the guilty parties, economists played a special role in contributing to the problem. We are duty bound to be part of the solution (see Coyle 2012). Our role in the crisis was, in a nutshell, the result of succumbing to an intellectual virus which took hold of the body financial from the 1990s onwards.
One strain of this virus is an old one. Cycles in money and bank credit are familiar from centuries past. And yet, for perhaps a generation, the symptoms of this old virus were left untreated. That neglect allowed the infection to spread from the financial system to the real economy, with near-fatal consequences for both.
In many ways, this was an odd disease to have contracted. The symptoms should have been all too obvious from history. The interplay of bank money and credit and the wider economy has been pivotal to the mandate of central banks for centuries. For at least a century, that was recognized in the design of public policy frameworks. The management of bank money and credit was a clear public policy prerequisite for maintaining broader macroeconomic and social stability.
Two developments – one academic, one policy-related – appear to have been responsible for this surprising memory loss. The first was the emergence of micro-founded dynamic stochastic general equilibrium (DSGE) models in economics. Because these models were built on real-business-cycle foundations, financial factors (asset prices, money and credit) played distinctly second fiddle, if they played a role at all.
The second was an accompanying neglect for aggregate money and credit conditions in the construction of public policy frameworks. Inflation targeting assumed primacy as a monetary policy framework, with little role for commercial banks' balance sheets as either an end or an intermediate objective. And regulation of financial firms was in many cases taken out of the hands of central banks and delegated to separate supervisory agencies with an institution-specific, non-monetary focus.
Coincidentally or not, what happened next was extraordinary. Commercial banks' balance sheets grew by the largest amount in human history. For example, having flat-lined for a century, bank assets-to-GDP in the UK rose by an order of magnitude from 1970 onwards. A similar pattern was found in other advanced economies.
This balance sheet explosion was, in one sense, no one’s fault and no one’s responsibility. Not monetary policy authorities, whose focus was now inflation and whose models scarcely permitted bank balance sheets a walk-on role. And not financial regulators, whose focus was on the strength of individual financial institutions.
Yet this policy neglect has since shown itself to be far from benign. The lessons of financial history have been painfully re-taught since 2008. They need not be forgotten again. This has important implications for the economics profession and for the teaching of economics. For one, it underscores the importance of sub-disciplines such as economic and financial history. As Galbraith said, "There can be few fields of human endeavor in which history counts for so little as in the world of finance." Economics can ill afford to re-commit that crime.
Second, it underlines the importance of reinstating money, credit and banking in the core curriculum, as well as refocusing on models of the interplay between economic and financial systems. These are areas that also fell out of fashion during the pre-crisis boom.
Third, the crisis showed that institutions really matter, be it commercial banks or central banks, when making sense of crises, their genesis and aftermath. They too were conveniently, but irresponsibly, airbrushed out of workhorse models. They now need to be repainted back in.
The second strain of intellectual virus is a new, more virulent one. This has been made dangerous by increased integration of markets of all types, economic, but especially financial and social. In a tightly woven financial and social web, the contagious consequences of a single event can thus bring the world to its knees. That was the Lehman Brothers story.
These cliff-edge dynamics in socioeconomic systems are becoming increasingly familiar. Social dynamics around the Arab Spring in many ways closely resembled financial system dynamics following the failure of Lehman Brothers four years ago. Both are complex, adaptive networks. When gripped by fear, such systems are known to behave in a highly non-linear fashion due to cascading actions and reactions among agents. These systems exhibit a robust yet fragile property: swan-like serenity one minute, riot-like calamity the next.
These dynamics do not emerge from most mainstream models of the financial system or real economy. The reason is simple. The majority of these models use the framework of a single representative agent (or a small number of them). That effectively neuters the possibility of complex actions and interactions between agents shaping system dynamics.
The financial system is an archetypical complex, adaptive socioeconomic system – and has become more so over time. In the early years of this century, financial chains lengthened dramatically, system-wide maturity mismatches widened alarmingly and intrafinancial system claims ballooned exponentially. The system became, in consequence, a hostage to its weakest link. When that broke, so too did the system as a whole. Communications networks and social media then propagated fear globally.
Conventional models, based on the representative agent and with expectations mimicking fundamentals, had no hope of capturing these system dynamics. They are fundamentally ill-suited to capturing today’s networked world, in which social media shape expectations, shape behavior and thus shape outcomes.
This calls for an intellectual reinvestment in models of heterogeneous, interacting agents, an investment likely to be every bit as great as the one that economists have made in DSGE models over the past 20 years. Agent-based modeling is one, but only one, such avenue. The construction and simulation of highly non-linear dynamics in systems of multiple equilibria represents unfamiliar territory for most economists. But this is not a journey into the unknown. Sociologists, physicists, ecologists, epidemiologists and anthropologists have for many years sought to understand just such systems. Following in their footsteps will require a sense of academic adventure sadly absent in the pre-crisis period.
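The threshold-contagion dynamics Haldane has in mind are easy to make concrete. Here is a toy sketch in Python -- the network size, connection probability, and failure threshold are all invented, and this is a schematic illustration rather than a calibrated model of any real interbank system -- in which each bank fails once more than a quarter of its counterparties have failed, so a single seeded failure either fizzles out or sweeps the system:

```python
# Toy threshold-contagion sketch in the spirit of Haldane's complex-network
# argument.  All parameters are hypothetical.
import numpy as np

rng = np.random.default_rng(0)
n_banks, link_prob, threshold = 200, 0.05, 0.25

# Random symmetric adjacency matrix: adj[i, j] = 1 if banks i and j have an
# exposure to one another.
upper = np.triu(rng.random((n_banks, n_banks)) < link_prob, k=1)
adj = (upper | upper.T).astype(int)

failed = np.zeros(n_banks, dtype=bool)
failed[0] = True                       # seed a single failure

while True:
    counterparties = adj.sum(axis=1)
    failed_counterparties = adj @ failed.astype(int)
    # A surviving bank fails once the failed share of its counterparties
    # exceeds the threshold.
    share = failed_counterparties / np.maximum(counterparties, 1)
    newly_failed = ~failed & (counterparties > 0) & (share > threshold)
    if not newly_failed.any():
        break
    failed |= newly_failed

print(f"{failed.sum()} of {n_banks} banks fail after the cascade")
```

Whether the seed dies out or takes down most of the network turns on small changes in connectivity and thresholds -- the robust-yet-fragile property Haldane describes, and exactly the kind of dynamics a representative-agent model rules out by construction.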

Wednesday, August 22, 2012

'Economics in Denial'

Howard Davies:

Economics in Denial, by Howard Davies, Commentary, Project Syndicate: In an exasperated outburst, just before he left the presidency of the European Central Bank, Jean-Claude Trichet complained that, “as a policymaker during the crisis, I found the available [economic and financial] models of limited help. In fact, I would go further: in the face of the crisis, we felt abandoned by conventional tools.” ... It was a ... serious indictment of the economics profession, not to mention all those extravagantly rewarded finance professors in business schools from Harvard to Hyderabad. ...
But it is not clear that a majority of the profession yet accepts [this]... The so-called “Chicago School” has mounted a robust defense of its rational expectations-based approach, rejecting the notion that a rethink is required. The Nobel laureate economist Robert Lucas has argued that the crisis was not predicted because economic theory predicts that such events cannot be predicted. So all is well. ...
We should not focus attention exclusively on economists, however. Arguably the elements of the conventional intellectual toolkit found most wanting are the capital asset pricing model and its close cousin, the efficient-market hypothesis. Yet their protagonists see no problems to address.
On the contrary, the University of Chicago’s Eugene Fama has described the notion that finance theory was at fault as “a fantasy,” and argues that “financial markets and financial institutions were casualties rather than causes of the recession.” And the efficient-market hypothesis that he championed cannot be blamed...
Fortunately, others in the profession ... have been chastened by the events of the last five years... They are working hard ... to develop new approaches...

There is resistance from the old guard, but I'm modestly optimistic. Some people are trying to ask, and answer, the right questions. However, it's a slow process.

Tuesday, July 31, 2012

New Old Keynesians

From the archives (September 2009), for no particular reason:

New Old Keynesians: There is no grand, unifying theoretical structure in economics. We do not have one model that rules them all. Instead, what we have are models that are good at answering some questions - the ones they were built to answer - and not so good at answering others.
If I want to think about inflation in the very long run, the classical model and the quantity theory are a very good guide. But that model is not very good at looking at the short run. For questions about how output and other variables move over the business cycle and for advice on what to do about it, I find the Keynesian model in its modern form (i.e. the New Keynesian model) to be much more informative than other models that are presently available.
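For the long-run case, the quantity-theory logic really is back-of-the-envelope arithmetic: with M*V = P*Y and roughly stable velocity, inflation is approximately money growth minus real output growth. A minimal sketch, with invented numbers:

```python
# Quantity theory as long-run arithmetic: M * V = P * Y implies, in growth
# rates, inflation ~= money growth + velocity growth - real output growth.
# The numbers below are invented for illustration.

money_growth = 0.07      # 7% annual money growth
velocity_growth = 0.00   # velocity assumed roughly stable over the long run
output_growth = 0.03     # 3% trend real growth

inflation = money_growth + velocity_growth - output_growth
print(f"implied long-run inflation: {inflation:.1%}")   # -> 4.0%
```

Nothing in that identity says anything about how output and employment behave quarter to quarter, which is why a different model is needed for the short run.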
But the New Keynesian model has its limits. It was built to capture "ordinary" business cycles driven by price sluggishness of the sort that can be captured by the Calvo model of price rigidity. The standard versions of this model do not explain how financial collapses of the type we just witnessed come about, and hence have little to say about what to do about them (which makes me suspicious of the results touted by people using multipliers derived from DSGE models based upon ordinary price rigidities). For these types of disturbances, we need some other type of model, but it is not clear what model is needed. There is no generally accepted model of financial catastrophe that captures the variety of financial market failures we have seen in the past.
But what model do we use? Do we go back to old Keynes, or to the 1978 model that Robert Gordon likes? Do we take some of the variations of the New Keynesian model that include effects such as financial accelerators and try to enhance those? Is that the right direction to proceed? Are the Austrians right? Do we focus on Minsky? Or do we need a model that we haven't discovered yet?
We don't know, and until we do, I will continue to use the model I think gives the best answer to the question being asked. The reason that many of us looked backward to the IS-LM model to help us understand the present crisis is that none of the current models were capable of explaining what we were going through. The models were largely constructed to analyze policy in the context of the Great Moderation, i.e. within a fairly stable environment. They had little to say about financial meltdown. My first reaction was to ask if the New Keynesian model had any derivative forms that would allow us to gain insight into the crisis and what to do about it and, while there were some attempts in that direction, the work was somewhat isolated and had not gone through the type of thorough analysis needed to develop robust policy prescriptions. There was something to learn from these models, but they really weren't up to the task of delivering specific answers. That may come, but we aren't there yet.
So, if nothing in the present is adequate, you begin to look to the past. The Keynesian model was constructed to look at exactly the kinds of questions we needed to answer, and as long as you are aware of the limitations of this framework - the ones that modern theory has discovered - it does provide you with a means of thinking about how economies operate when they are running at less than full employment. This model had already worried about fiscal policy at the zero interest rate bound; it had already thought about Say's law, the paradox of thrift, monetary versus fiscal policy, changing interest and investment elasticities in a crisis, etc., etc., etc. We were in the middle of a crisis and didn't have time to wait for new theory to be developed. We needed answers, answers that the elegant models constructed over the last few decades simply could not provide. The Keynesian model did provide answers. We knew those answers had limitations - we were aware of the theoretical developments in modern macro and what they implied about the old Keynesian model - but it also provided guidance at a time when guidance was needed, and it did so within a theoretical structure that was built to be useful at times like the one we were facing. I wish we had better answers, but we didn't, so we did the best we could. And doing the best we could involved at least asking what the Keynesian model would tell us, and then asking whether that advice had any relevance today. Sometimes it didn't, but that was no reason to ignore the answers when it did.
[So, depending on the question being asked, I am a New Keynesian, an Old Keynesian, a Classicist, etc. But as noted here, if you are going to take guidance from the older models it is essential that you understand their limitations -- these models should not be used without a thorough knowledge of the pitfalls involved and where they can and cannot be avoided -- the kind of knowledge someone like Paul Krugman surely has at hand.]

Saturday, July 21, 2012

Plosser: Macro Models and Monetary Policy Analysis

Charles Plosser, President of the Philadelphia Fed, explains the limitations of DSGE models, particularly models of the New Keynesian variety used for policy analysis. (He doesn't reject the DSGE methodology, and that will disappoint some, but he does raise good questions about this class of models. I believe the macroeconomics literature is going to fully explore these micro-founded, forward-looking, optimizing models whether critics like it or not, so we may as well get on with it. The questions raised below help to clarify the direction the research should take, and in the end the models will either prove worthy or be cast aside. In the meantime, I hope the macroeconomics profession will become more open to alternative ideas/models than it has been in the recent past, but I doubt the humility needed for that to happen has taken hold despite all the problems with these models that were exposed by the housing and financial crises.):

Macro Models and Monetary Policy Analysis, by Charles I. Plosser, President and Chief Executive Officer, Federal Reserve Bank of Philadelphia, Bundesbank — Federal Reserve Bank of Philadelphia Spring 2012 Research Conference, Eltville, Germany, May 25, 2012: Introduction ...After spending over 30 years in academia, I have served the last six years as a policymaker trying to apply what economics has taught me. Needless to say, I picked a challenging time to undertake such an endeavor. But I have learned that, despite the advances in our understanding of economics, a number of issues remain unresolved in the context of modern macro models and their use for policy analysis. In my remarks today, I will touch on some issues facing policymakers that I believe state-of-the-art macro models would do well to confront. Before continuing, I should note that I speak for myself and not the Federal Reserve System or my colleagues on the Federal Open Market Committee.

More than 40 years ago, the rational expectations revolution in macroeconomics helped to shape a consensus among economists that only unanticipated shifts in monetary policy can have real effects. According to this consensus, only monetary surprises affect the real economy in the short to medium run because consumers, workers, employers, and investors cannot respond quickly enough to offset the effect of these policy actions on consumption, the labor market, and investment.1

But over the years this consensus view on the transmission mechanism of monetary policy to the real economy has evolved. The current generation of macro models, referred to as New Keynesian DSGE models,2 rely on real and nominal frictions to transmit not only unanticipated but also systematic changes in monetary policy to the economy. Unexpected monetary shocks drive movements in output, consumption, investment, hours worked, and employment in DSGE models. However, in contrast to the earlier literature, it is the relevance of systematic movements in monetary policy that makes these models of so much interest for policy analysis. Systematic policy changes are represented in these models by Taylor-type rules, in which the policy interest rate responds to changes in inflation and a measure of real activity, such as output growth. Armed with forecasts of inflation and output growth, a central bank can assess the impact that different policy rate paths may have on the economy. The ability to do this type of policy analysis helps explain the widespread use of New Keynesian DSGE models at central banks around the world.
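[A quick aside that is not part of Plosser's speech: the Taylor-type reaction functions he refers to have roughly the form sketched below. This uses the original Taylor (1993) coefficients plus a zero-lower-bound truncation; it is a generic textbook illustration, not the rule embedded in any particular model he cites.]

```python
# Generic Taylor-type reaction function (textbook Taylor (1993) coefficients,
# not the rule from any specific model discussed in the speech), truncated
# at the zero lower bound.

def taylor_rule(inflation, output_gap, r_star=0.02, pi_star=0.02,
                phi_pi=0.5, phi_y=0.5):
    """Policy rate implied by i = r* + pi + 0.5*(pi - pi*) + 0.5*gap,
    with the zero lower bound imposed."""
    rate = r_star + inflation + phi_pi * (inflation - pi_star) + phi_y * output_gap
    return max(rate, 0.0)

# Mild deflation plus a deep recession push the unconstrained rate below zero,
# so the truncation binds -- the zero-lower-bound situation discussed later.
print(taylor_rule(inflation=-0.01, output_gap=-0.06))   # -> 0.0
```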

These modern macro models stress the importance of credibility and systematic policy, as well as forward-looking rational agents, in the determination of economic outcomes. In doing so, they offer guidance to policymakers about how to structure policies that will improve the policy framework and, therefore, economic performance. Nonetheless, I think there is room for improving the models and the advice they deliver on policy options. Before discussing several of these improvements, it is important to appreciate the “rules of the game” of the New Keynesian DSGE framework.

The New Keynesian Framework

New Keynesian DSGE models are the latest update to real business cycle, or RBC, theory. Given my own research in this area, it probably does not surprise many of you that I find the RBC paradigm a useful and valuable platform on which to build our macroeconomic models.3 One goal of real business cycle theory is to study the predictions of dynamic general equilibrium models, in which optimizing and forward-looking consumers, workers, employers, and investors are endowed with rational expectations. A shortcoming many see in the simple real business cycle model is its difficulty in internally generating persistent changes in output and employment from a transitory or temporary external shock to, say, productivity.4 The recognition of this problem has inspired variations on the simple model, of which the New Keynesian revival is an example.

The approach taken in these models is to incorporate a structure of real and nominal frictions into the real business cycle framework. These frictions are placed in DSGE models, in part, to make real economic activity respond to anticipated and unanticipated changes in monetary policy, at least, in the short to medium run. The real frictions that drive internal propagation of monetary policy often include habit formation in consumption, that is, how past consumption influences current consumption; the costs of capital used in production; and the resources expended by adding new investment to the existing stock of capital. New Keynesian DSGE models also include the costs faced by monopolistic firms and households when setting their prices and nominal wages. A nominal friction often assumed in Keynesian DSGE models is that firms and households have to wait a fixed interval of time before they can reset their prices and wages in a forward-looking, optimal manner. A rule of the game in these models is that the interactions of these nominal frictions with real frictions give rise to persistent monetary nonneutralities over the business cycle.5 It is this monetary transmission mechanism that makes the New Keynesian DSGE models attractive to central banks.
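[Another aside, not from the speech: a common version of the nominal friction Plosser mentions is Calvo pricing, in which the wait before a price can be reset is random rather than fixed. The reset probability pins down the average length of a price spell, as this back-of-the-envelope sketch shows; theta = 0.75 per quarter is a typical calibration, used here only for illustration.]

```python
# Under the Calvo assumption a firm gets to reset its price each quarter with
# probability 1 - theta, so the expected spell between resets is 1 / (1 - theta).
# theta = 0.75 is a common quarterly calibration, used here only as an example.

theta = 0.75
expected_spell = 1.0 / (1.0 - theta)    # in quarters
print(f"average price spell: {expected_spell:.0f} quarters")   # -> 4 quarters
```

The micro evidence on how often prices actually change, which Plosser turns to below, bears directly on whether spells of that length are plausible.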

An assumption of these models is that the structure of these real and nominal frictions, which transmit changes in monetary policy to the real economy, well-approximate the true underlying rigidities of the actual economy and are not affected by changes in monetary policy. This assumption implies that the frictions faced by consumers, workers, employers, and investors cannot be eliminated at any price they might be willing to pay. Although the actors in actual economies probably recognize the incentives they have to innovate — think of the strategy to use continuous pricing online for many goods and services — or to seek insurance to minimize the costs of the frictions, these actions and markets are ruled out by the “rules of the game” of New Keynesian DSGE modeling.

Another important rule of the game prescribes that monetary policy is represented by an interest rate or Taylor-type reaction function that policymakers are committed to follow and that everyone believes will, in fact, be followed. This ingredient of New Keynesian DSGE models most often commits a central bank to increase its policy rate when inflation or output rises above the target set by the central bank. And this commitment is assumed to be fully credible according to the rules of the game of New Keynesian DSGE models. Policy changes are then evaluated as deviations from the invariant policy rule to which policymakers are credibly committed.

The Lucas Critique Revisited with Respect to New Keynesian DSGE Models

In my view, the current rules of the game of New Keynesian DSGE models run afoul of the Lucas critique — a seminal work for my generation of macroeconomists and for each generation since.6 The Lucas critique teaches us that to do policy analysis correctly, we must understand the relationship between economic outcomes and the beliefs of economic agents about the policy regime. Equally important is the Lucas critique’s warning against using models whose structure changes with the alternative government policies under consideration.7 Policy changes are almost never once and for all. So, many economists would argue that an economic model that maps states of the world to outcomes but that does not model how policy shifts across alternative regimes would fail the Lucas critique because it would not be policy invariant.8 Instead, economists could better judge the effects of competing policy options by building models that account for the way in which policymakers switch between alternative policy regimes as economic circumstances change.9

For example, I have always been uncomfortable with the New Keynesian model’s assumption that wage and price setters have market power but, at the same time, are unable or unwilling to change prices in response to anticipated and systematic shifts in monetary policy. This suggests that the deep structure of nominal frictions in New Keynesian DSGE models should do more than measure the length of time that firms and households wait for a chance to reset their prices and wages.10 Moreover, it raises questions about the mechanism by which monetary policy shocks are transmitted to the real economy in these models.

I might also note here that the evidence from micro data on price behavior is not particularly consistent with the implications of the usual staggered price-setting assumptions in these models.11 When the real and nominal frictions of New Keynesian models do not reflect the incentives faced by economic actors in actual economies, these models violate the Lucas critique’s policy invariance dictum, and thus, the policy advice these models offer must be interpreted with caution.

From a policy perspective, the assumption that a central bank can always and everywhere credibly commit to its policy rule is, I believe, also questionable. While it is desirable for policymakers to do so — and in practice, I seek ways to make policy more systematic and more credible — commitment is a luxury few central bankers ever actually have, and fewer still faithfully follow.

During the 1980s and 1990s, it was quite common to hear in workshops and seminars the criticism that a model didn’t satisfy the Lucas critique. I thought this was often a cheap shot because almost no model satisfactorily dealt with the issue. And during a period when the policy regime was apparently fairly stable — which many argued it mostly was during those years — the failure to satisfy the Lucas critique seemed somewhat less troublesome. However, in my view, throughout the crisis of the last few years and its aftermath, the Lucas critique has become decidedly more relevant. Policy actions have become increasingly discretionary. Moreover, the financial crisis and associated policy responses have left many central banks operating with their policy rate near the zero lower bound; this means that they are no longer following a systematic rule, if they ever were. Given that central bankers are, in fact, acting in a discretionary manner, whether it is because they are at the zero bound or because they cannot or will not commit, how are we to interpret policy advice coming from models that assume full commitment to a systematic rule? I think this point is driven home by noting that a number of central banks have been openly discussing different regimes, from price-level targeting to nominal GDP targeting. In such an environment where policymakers actively debate alternative regimes, how confident can we be about the policy advice that follows from models in which that is never contemplated?

Some Directions for Furthering the Research Agenda

While I have been pointing out some limitations of DSGE models, I would like to end my remarks with six suggestions I believe would be fruitful for the research agenda.

First, I believe we should work to give the real and nominal frictions that underpin the monetary propagation mechanism of New Keynesian DSGE models deeper and more empirically supported structural foundations. There is already much work being done on this in the areas of search models applied to labor markets and studies of the behavior of prices at the firm level. Many of you at this conference have made significant contributions to this literature.

Second, on the policy dimension, the impact of the zero lower bound on central bank policy rates remains, as a central banker once said, a conundrum. The zero lower bound introduces nonlinearity into the analysis of monetary policy that macroeconomists and policymakers still do not fully understand. New Keynesian models have made some progress in solving this problem,12 but a complete understanding of the zero bound conundrum involves recasting a New Keynesian DSGE model to show how it can provide an economically meaningful story of the set of shocks, financial markets, and frictions that explain the financial crisis, the resulting recession, and the weak recovery that has followed. This might be asking a lot, but a good challenge usually produces extraordinary research.

Third, we must make progress in our analysis of credibility and commitment. The New Keynesian framework mostly assumes that policymakers are fully credible in their commitment to a specified policy rule. If that is not the case in practice, how do policymakers assess the policy advice these models deliver? Policy at the zero lower bound is a leading example of this issue. According to the New Keynesian model, zero lower bound policies rely on policymakers guiding the public’s expectations of when an initial interest rate increase will occur in the future. If the credibility of this forward guidance is questioned, evaluation of the zero lower bound policy has to account for the public's beliefs that commitment to this policy is incomplete. I have found that policymakers like to presume that their policy actions are completely credible and then engage in decisions accordingly. Yet if that presumption is wrong, those policies will not have the desired or predicted outcomes. Is there a way to design and estimate policy responses in such a world? Can reputational models be adapted for this purpose?

Fourth, and related, macroeconomists need to consider how to integrate the institutional design of central banks into our macroeconomic models. Different designs permit different degrees of discretion for a central bank. For example, responsibility for setting monetary policy is often delegated by an elected legislature to an independent central bank. However, the mandates given to central banks differ across countries. The Fed is often said to have a dual mandate; some banks have a hierarchical mandate; and others have a single mandate. Yet economists endow their New Keynesian DSGE models with strikingly uniform Taylor-type rules, always assuming complete credibility. Policy analysis might be improved by considering the institutional design of central banks and how it relates to the ability to commit and the specification of the Taylor-type rules that go into New Keynesian models. Central banks with different levels of discretion will respond differently to the same set of shocks.

Let me offer a slightly different take on this issue. Policymakers are not Ramsey social planners. They are individuals who respond to incentives like every other actor in the economy. Those incentives are often shaped by the nature of the institutions in which they operate. Yet the models we use often ignore both the institutional environment and the rational behavior of policymakers. The models often ask policymakers to undertake actions that run counter to the incentives they face. How should economists then think about the policy advice their models offer and the outcomes they should expect? How should we think about the design of our institutions? This is not an unexplored arena, but if we are to take the policy guidance from our models seriously, we must think harder about such issues in the context of our models.

This leads to my fifth suggestion. Monetary theory has given a great deal of thought to rules and credibility in the design of monetary policy, but the recent crisis suggests that we need to think more about the design of lender-of-last-resort policy and the institutional mechanism for its execution. Whether to act as the lender of last resort is discretionary, but does it have to be so? Are there ways to make it more systematic ex ante? If so, how?

My sixth and final thought concerns moral hazard, which is addressed in only a handful of models. Moral hazard looms large when one thinks about lender-of-last-resort activities. But it is also a factor when monetary policy uses discretion to deviate from its policy rule. If the central bank has credibility that it will return to the rule once it has deviated, this may not be much of a problem. On the other hand, a central bank with less credibility, or no credibility, may run the risk of inducing excessive risk-taking. An example of this might be the so-called “Greenspan put,” in which the markets perceived that when asset prices fell, the Fed would respond by reducing interest rates. Do monetary policy actions that appear to react to the stock market induce moral hazard and excessive risk-taking? Does having lender-of-last-resort powers influence the central bank’s monetary policy decisions, especially at moments when it is not clear whether the economy is in the midst of a financial crisis? Does the combination of lender-of-last-resort responsibilities with discretionary monetary policy create moral hazard perils for a central bank, encouraging it to take riskier actions? I do not know the answer to these questions, but addressing them and the other challenges I have mentioned with New Keynesian DSGE models should prove useful for evaluating the merits of different institutional designs for central banks.

Conclusion

The financial crisis and recession have raised new challenges for policymakers and researchers. The degree to which policy actions, for better or worse, have become increasingly discretionary should give us pause as we try to evaluate policy choices in the context of the workhorse New Keynesian framework, especially given its assumption of credibly committed policymakers. Indeed, the Lucas critique would seem to take on new relevance in this post-crisis world. Central banks need to ask if discretionary policies can create incentives that fundamentally change the actions and expectations of consumers, workers, firms, and investors. Characterizing policy in this way also raises issues of whether the institutional design of central banks matters for evaluating monetary policy. I hope my comments today encourage you, as well as the wider community of macroeconomists, to pursue these research questions that are relevant to our efforts to improve our policy choices.


Wednesday, July 11, 2012

Arrogance and Self-Satisfaction among Macroeconomists???

Simon Wren-Lewis:

Crisis, what crisis? Arrogance and self-satisfaction among macroeconomists, by Simon Wren-Lewis: My recent post on economics teaching has clearly upset a number of bloggers. There I argued that the recent crisis has not led to a fundamental rethink of macroeconomics. Mainstream macroeconomics has not decided that the Great Recession implies that some chunk of what we used to teach is clearly wrong and should be jettisoned as a result. To some that seems self-satisfied, arrogant and profoundly wrong. ...
Let me be absolutely clear that I am not saying that macroeconomics has nothing to learn from the financial crisis. What I am suggesting is that when those lessons have been learnt, the basics of the macroeconomics we teach will still be there. For example, it may be that we need to endogenise the difference between the interest rate set by monetary policy and the interest rate actually paid by firms and consumers, relating it to asset prices that move with the cycle. But if that is the case, this will build on our current theories of the business cycle. Concepts like aggregate demand, and within the mainstream, the natural rate, will not disappear. We clearly need to take default risk more seriously, and this may lead to more use of models with multiple equilibria (as suggested by Chatelain and Ralf, for example). However, this must surely use the intertemporal optimising framework that is the heart of modern macro.
Why do I want to say this? Because what we already have in macro remains important, valid and useful. What I see happening today is a struggle between those who want to use what we have, and those that want to deny its applicability to the current crisis. What we already have was used (imperfectly, of course) when the financial crisis hit, and analysis clearly suggests this helped mitigate the recession. Since 2010 these positive responses have been reversed, with policymakers around the world using ideas that contradict basic macro theory, like expansionary austerity. In addition, monetary policy makers appear to be misunderstanding ideas that are part of that theory, like credibility. In this context, saying that macro is all wrong and we need to start again is not helpful.
I also think there is a danger in the idea that the financial crisis might have been avoided if only we had better technical tools at our disposal. (I should add that this is not a mistake most heterodox economists would make.) ... The financial crisis itself is not a deeply mysterious event. Look now at the data on leverage that we had at the time, but too few people looked at before the crisis, and the immediate reaction has to be that this cannot go on. So the interesting question for me is how those that did look at this data managed to convince themselves that, to use the title from Reinhart and Rogoff’s book, this time was different.
One answer was that they were convinced by economic theory that turned out to be wrong. But it was not traditional macro theory – it was theories from financial economics. And I’m sure many financial economists would argue that those theories were misapplied. Like confusing new techniques for handling idiosyncratic risk with the problem of systemic risk, for example. Believing that evidence of arbitrage also meant that fundamentals were correctly perceived. In retrospect, we can see why those ideas were wrong using the economics toolkit we already have. So why was that not recognised at the time? I think the key to answering this does not lie in any exciting new technique from physics or elsewhere, but in political science.
To understand why regulators and others missed the crisis, I think we need to recognise the political environment at the time, which includes the influence of the financial sector itself. And I fear that the academic sector was not exactly innocent in this either. A simplistic take on economic theory (mostly micro theory rather than macro) became an excuse for rent seeking. The really big question of the day is not what is wrong with macro, but why has the financial sector grown so rapidly over the last decade or so. Did innovation and deregulation in that sector add to social welfare, or make it easier for that sector to extract surplus from the rest of the economy? And why are there so few economists trying to answer that question?

I have so many posts on the state of modern macro that it's hard to know where to begin, but here's a pretty good summary of my views on this particular topic:

I agree that the current macroeconomic models are unsatisfactory. The question is whether they can be fixed, or if it will be necessary to abandon them altogether. I am okay with seeing if they can be fixed before moving on. It's a step that's necessary in any case. People will resist moving on until they know this framework is a dead end, so the sooner we come to a conclusion about that, the better.
As just one example, modern macroeconomic models do not generally connect the real and the financial sectors. That is, in standard versions of the modern model linkages between the disintegration of financial intermediation and the real economy are missing. Since these linkages provide an important transmission mechanism whereby shocks in the financial sector can affect the real economy, and these are absent from models such as Eggertsson and Woodford, how much credence should I give the results? Even the financial accelerator models (which were largely abandoned because they did not appear to be empirically powerful, and hence were not part of the standard model) do not fully link these sectors in a satisfactory way, yet these connections are crucial in understanding why the crash caused such large economic effects, and how policy can be used to offset them. [e.g. see Woodford's comments, "recent events have made it clear that financial issues need to be integrated much more thoroughly into the basic framework for macroeconomic analysis with which students are provided."]
There are many technical difficulties with connecting the real and the financial sectors. Again, to highlight just one aspect of a much, much larger list of issues that will need to be addressed, modern models assume a representative agent. This assumption overcomes difficult problems associated with aggregating individual agents into macroeconomic aggregates. When this assumption is dropped it becomes very difficult to maintain adequate microeconomic foundations for macroeconomic models (setting aside the debate over the importance of doing this). But representative (single) agent models don't work very well as models of financial markets. Identical agents with identical information and identical outlooks have no motivation to trade financial assets (I sell because I think the price is going down, you buy because you think it's going up; with identical forecasts, the motivation to trade disappears). There needs to be some type of heterogeneity in the model, even if just over information sets, and that causes the technical difficulties associated with aggregation. However, with that said, there have already been important inroads into constructing these models (e.g. see Rajiv Sethi's discussion of John Geanakoplos' Leverage Cycles). So while I'm pessimistic, it's possible this and other problems will be overcome.
But there's no reason to wait until we know for sure if the current framework can be salvaged before starting the attempt to build a better model within an entirely different framework. Both can go on at the same time. What I hope will happen is that some macroeconomists will show more humility than they've shown to date, and that they will finally accept that the present model has large shortcomings that will need to be overcome before it will be as useful as we'd like. I hope that they will admit that it's not at all clear that we can fix the model's problems, and realize that some people have good reason to investigate alternatives to the standard model. The advancement of economics is best served when alternatives are developed and issued as challenges to the dominant theoretical framework, and there's no reason to deride those who choose to do this important work.
So, in answer to those who objected to my defending modern macro, you are partly right. I do think the tools and techniques macroeconomists use have value, and that the standard macro model in use today represents progress. But I also think the standard macro model used for policy analysis, the New Keynesian model, is unsatisfactory in many ways and I'm not sure it can be fixed. Maybe it can, but that's not at all clear to me. In any case, in my opinion the people who have strong, knee-jerk reactions whenever someone challenges the standard model in use today are the ones standing in the way of progress. It's fine to respond academically, a contest between the old and the new is exactly what we need to have, but the debate needs to be over ideas rather than an attack on the people issuing the challenges.

This post of an email from Mark Gertler in July 2009 argues that modern macro has been mis-characterized:

The current crisis has naturally led to scrutiny of the economics profession. The intensity of this scrutiny ratcheted up a notch with the Economist’s interesting cover story this week on the state of academic economics.
I think some of the criticism has been fair. The Great Moderation gave many in the profession the false sense that we had handled the problem of the business cycle as well as we could. Traditional applied macroeconomic research on booms and busts and macroeconomic policy fell into something of a second-class status within the field in favor of more exotic topics.
At the same time, from the discussion thus far, I don’t think the public is getting the full picture of what has been going on in the profession. From my vantage, there has been lots of high quality “middle ground” modern macroeconomic research that has been relevant to understanding and addressing the current crisis.
Here I think, though, that both the mainstream media and the blogosphere have been confusing a failure to anticipate the crisis with a failure to have the research available to comprehend it. Predicting the crisis would have required foreseeing the risks posed by the shadow banking system, which were missed not only by academic economists, but by just about everyone else on the planet (including the ratings agencies!).
But once the crisis hit, broadly speaking, policy-makers at the Federal Reserve made use of academic research on financial crises to help diagnose the situation and design the policy response. Research on monetary and fiscal policy when the nominal interest rate is at the zero lower bound has also been relevant. Quantitative macro models that incorporate financial factors, which existed well before the crisis, are rapidly being updated in light of new insights from the unfolding of recent events. Work on fiscal policy, which admittedly had been somewhat dormant, is now proceeding at a rapid pace.
Bottom line: As happened in both the wake of the Great Depression and the Great Stagflation, economic research is responding. In this case, the time lag will be much shorter given the existing base of work to build on. Revealed preference confirms that we still have something useful to offer: Demand for our services by the ultimate consumers of modern applied macro research – policy makers and staff at central banks – seems to be higher than ever.
Mark Gertler,
Henry and Lucy Moses Professor of Economics
New York University
[I ... also posted a link to his Mini-Course, "Incorporating Financial Factors Within Macroeconomic Modelling and Policy Analysis"... This course looks at recent work on integrating financial factors into macro modeling, and is a partial rebuttal to the assertion above that New Keynesian models do not have mechanisms built into them that can explain the financial crisis. ...]

Again, it wasn't the tools and techniques we use; we were asking the wrong questions. As I've argued many times, we were trying to explain normal times, the Great Moderation. Many (e.g. Lucas) thought the problem of depressions due to, say, a breakdown in the financial sector had been solved, so why waste time on those questions? Stabilization policy was passé, and we should focus on growth instead. So, I would agree with Simon Wren-Lewis that "we need to recognise the political environment at the time." But as I argued in The Economist, we also have to think about the sociology within the profession that worked against the pursuit of these ideas.

Perhaps Ricardo Caballero says it better, so let me turn it over to him. From a post in late 2010:

Caballero says "we should be in “broad-exploration” mode." I can hardly disagree since that's what I meant when I said "While I think we should see if the current models and tools can be amended appropriately to capture financial crises such as the one we just had, I am not as sure as [Bernanke] is that this will be successful and I'd like to see [more] openness within the profession to a simultaneous investigation of alternatives."

Here's a bit more from the introduction to the paper:

The recent financial crisis has damaged the reputation of macroeconomics, largely for its inability to predict the impending financial and economic crisis. To be honest, this inability to predict does not concern me much. It is almost tautological that severe crises are essentially unpredictable, for otherwise they would not cause such a high degree of distress... What does concern me about my discipline, however, is that its current core—by which I mainly mean the so-called dynamic stochastic general equilibrium approach—has become so mesmerized with its own internal logic that it has begun to confuse the precision it has achieved about its own world with the precision that it has about the real one. ...
To be fair to our field, an enormous amount of work at the intersection of macroeconomics and corporate finance has been chasing many of the issues that played a central role during the current crisis, including liquidity evaporation, collateral shortages, bubbles, crises, panics, fire sales, risk-shifting, contagion, and the like.1 However, much of this literature belongs to the periphery of macroeconomics rather than to its core. Is the solution then to replace the current core for the periphery? I am tempted—but I think this would address only some of our problems. The dynamic stochastic general equilibrium strategy is so attractive, and even plain addictive, because it allows one to generate impulse responses that can be fully described in terms of seemingly scientific statements. The model is an irresistible snake-charmer. In contrast, the periphery is not nearly as ambitious, and it provides mostly qualitative insights. So we are left with the tension between a type of answer to which we aspire but that has limited connection with reality (the core) and more sensible but incomplete answers (the periphery).
This distinction between core and periphery is not a matter of freshwater versus saltwater economics. Both the real business cycle approach and its New Keynesian counterpart belong to the core. ...
I cannot be sure that shifting resources from the current core to the periphery and focusing on the effects of (very) limited knowledge on our modeling strategy and on the actions of the economic agents we are supposed to model is the best next step. However, I am almost certain that if the goal of macroeconomics is to provide formal frameworks to address real economic problems rather than purely literature-driven ones, we better start trying something new rather soon. The alternative of segmenting, with academic macroeconomics playing its internal games and leaving the real world problems mostly to informal commentators and "policy" discussions, is not very attractive either, for the latter often suffer from an even deeper pretense-of-knowledge syndrome than do academic macroeconomists. ...

My main message is that yes, we need to push the DSGE structure as far as we can and see if it can be satisfactorily amended. Ask the right questions, and use the tools and techniques associated with modern macro to build the right models. But it's not at all clear that the DSGE methodology is up to the task, so let's not close our eyes to -- or worse, actively block -- the search for alternative theoretical structures.

Tuesday, July 03, 2012

Physicists in Finance Should Pay More Attention to Economists

New column:

Physicists Can Learn from Economists, by Mark Thoma: After attending last year’s Economics Nobel Laureates Meeting in Lindau, Germany, I was very critical of what I heard from the laureates at the meeting. The conference is intended to bring graduate students together with the Nobel Prize winners to learn about fruitful areas for future research. Yet, with all the challenges the Great Recession posed for macroeconomic models, very little of the conference was devoted to anything related to the Great Recession. And when it did come up, the comments were “all over the map.” Some, such as Ed Prescott’s, were particularly appalling, amounting to very obvious political statements in the guise of economic analysis. I felt bad for the students who had come to the conference hoping to gain insight about where macroeconomics was headed in the future.
I am back at the meetings this year, but the topic is physics, not economics, and it’s pretty clear that most physicists think they have nothing to learn from lowly economists. That’s true even when they are working on problems in economics and finance.
But they do have something to learn. ...

Saturday, June 30, 2012

'Macroeconomics and the Centrist Dodge'

I think I've made this point repeatedly, though I tend to use the term ideological instead of political, but just in case the message hasn't gotten through:

Macroeconomics and the Centrist Dodge, by Paul Krugman: Simon Wren-Lewis says something quite similar to my own view about the trouble with macroeconomics: it’s mostly political. And although Wren-Lewis bends over backwards to avoid saying it too bluntly, most – not all, but most – of the problem comes from the right. ...
By now, the centrist dodge ought to be familiar. A Very Serious, chin-stroking pundit argues that what we really need is a political leader willing to concede that while the economy needs short-run stimulus, we also need to address long-term deficits, and that addressing those long-term deficits will require both spending cuts and revenue increases. And then the pundit asserts that both parties are to blame for the absence of such leaders. What he absolutely won’t do is endanger his centrist credentials by admitting that the position he’s just outlined is exactly, exactly, the position of Barack Obama.
The macroeconomics equivalent looks like this: a concerned writer or speaker on economics bemoans the state of the field and argues that what we really need are macroeconomists who are willing to approach the subject with an open mind and change their views if the evidence doesn’t support their model. He or she concludes by scolding the macroeconomics profession in general, which is a nice safe thing to do – but requires deliberately ignoring the real nature of the problem.
For the fact is that it’s not hard to find open-minded macroeconomists willing to respond to the evidence. These days, they’re called Keynesians and/or saltwater macroeconomists. ...
Would Keynesians have been willing to change their views drastically if the experience of the global financial crisis had warranted such a change? I’d like to think so – but we’ll never know for sure, because the basic Keynesian view has in fact worked very well in the crisis.
But then there’s the other side – freshwater, equilibrium, more or less classical macro.
Recent events have been one empirical debacle after another for that view of the world – on interest rates, on inflation, on the effects of fiscal contraction. But the truth is that freshwater macro has been failing empirical tests for decades. Everywhere you turn there are anomalies that should have had that side of the profession questioning its premises, from the absence of the technology shocks that were supposed to drive business cycles, to the evident effectiveness of monetary policy, to the near-perfect correlation of nominal and real exchange rates.
But rather than questioning its premises, that side of the field essentially turned its back on evidence, calibrating its models rather than testing them, and refusing even to teach alternative views.
So there’s the trouble with macro: it’s basically political, and it’s mainly – not entirely, but mainly – coming from one side. Yet this truth is precisely what the critics won’t acknowledge, because that would endanger their comfortable position of scolding everyone equally. It is, in short, the centrist dodge carried over to conflict within economics.
Do we need better macroeconomics? Indeed we do. But we also need better critics, who are prepared to take the risk of actually taking sides for good economics and against dogmatism.

Before adding a few comments, I want to be careful to distinguish the "Keynesianism" discussed above from the New Keynesian model. I'll end up rejecting the standard NK model, but in doing so I am not rejecting Keynesian concepts. As Krugman summarizes, these are things like "the concept of the liquidity trap..., acceptance ... that wages are downwardly rigid – and hence that the natural rate hypothesis breaks down at low inflation."

Let me start by noting that one of the best examples I know of a macroeconomic model being rejected is the New Classical model and its prediction that only unanticipated money matters for real variables such as employment and GDP. At first, Robert Barro and others thought the empirical evidence favored this model, but over time it became clear that both anticipated and unanticipated money matter. That is, the prediction was wrong and the model was rejected (it had other problems as well, e.g. explaining both the magnitude and duration of business cycles).
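
To make the rejected prediction concrete, here is a minimal numerical sketch of my own -- it is not from Barro's papers or any of the posts quoted here, and the money-growth process and parameters are invented purely for illustration. In a Lucas-style supply relation the output gap moves only with money surprises, so output should be essentially uncorrelated with the anticipated part of money growth; in a sticky-price alternative the anticipated part matters too, and that is the pattern the later evidence favored.

```python
# A toy illustration (mine, hypothetical numbers) of the New Classical prediction
# that only unanticipated money growth moves output.
import numpy as np

rng = np.random.default_rng(0)
T = 200
announced = 0.05 + 0.02 * np.sin(np.arange(T) / 10)  # anticipated money growth
surprise = rng.normal(0.0, 0.01, T)                   # unanticipated money growth

beta, theta = 2.0, 1.5

# Lucas-style supply: the output gap responds only to the surprise component
gap_new_classical = beta * surprise

# Sticky-price alternative: anticipated money growth moves output as well
actual = announced + surprise
gap_sticky_price = theta * (actual - actual.mean())

# Under the New Classical model output is (essentially) uncorrelated with announced
# money growth; under the alternative the correlation is strongly positive.
print(round(np.corrcoef(gap_new_classical, announced)[0, 1], 2))
print(round(np.corrcoef(gap_sticky_price, announced)[0, 1], 2))
```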

However, the response has been interesting, and it proceeds along the political lines discussed above. Some economists just can't accept that money might matter, and therefore that the government (through the Fed) has an important role to play in managing the economy. And unfortunately, they have acted more like lawyers than scientists in their attempts to discredit New Keynesian and other models that have this implication. After all, markets work, and they work through movements in prices, so a sticky price NK model must be wrong. QED.

Now, it turns out that the New Keynesian model probably is wrong, or at least incomplete, but that's a view based upon evidence rather than ideology. Prior to the crisis, I was a fan of the NK model. Despite what those who couldn't let go of the markets-must-work point of view argued, I believed this model was better than any other model we had at explaining macroeconomic data. But while the NK model did an adequate job of explaining aggregate fluctuations and how monetary policy affects the economy in normal times with mild business cycle fluctuations, i.e. from the mid 1980s until recently, it did a downright lousy job of explaining the Great Recession. When it got pushed into new territory by the Great Recession, the Calvo-type price stickiness driving fluctuations in the NK model had little to say about the problems we were having and how to fix them.

Thus, from my point of view the Great Recession rejected the standard version of the NK model. Perhaps the model can be fixed by tacking on a financial sector and allowing financial intermediation breakdowns to impact the real economy -- there are models along these lines that people are working to improve -- we will have to see about that. A more general NK model that has one type of fluctuation in normal times -- the standard price stickiness effects -- and occasional large fluctuations from endogenous credit market breakdowns might do the trick (there were models of this type prior to the recession, but they weren't the standard in the profession, and they weren't well-integrated into the general NK structure). So we may be able to find a more general version of the model that can capture both normal and abnormal times. But, then again, we may not and, as I've said many times, we need to encourage the exploration of alternative theoretical structures.

But no matter what happens, some economists just won't accept a model that implies the government can do good through either monetary or fiscal policy, and they work very hard to construct alternatives that don't allow for this. There is less resistance to monetary policy -- the evidence is hard to deny, so some of these economists will admit that monetary policy can affect the economy positively (so long as the Fed is an independent technocratic body). But fiscal policy is resisted no matter the theoretical and empirical evidence. They have their ideological/political views, and any model inconsistent with them must be wrong.

Update: Noah Smith responds to Paul Krugman here.

Friday, June 29, 2012

'Science' without Falsification

Bryan Caplan is tired of being sneered at by "high-status academic economists":

The Curious Ethos of the Academic/Appointee, by Bryan Caplan: High-status academic economists often look down on economists who engage in blogging and punditry. Their view: If you can't "definitively prove" your claims, you should remain silent. 

At the same time, though, high-status academic economists often receive top political appointments. Part of their job is to stand behind the administration's party line. They don't merely make claims they can't definitively prove; to keep their positions, appointees have to make claims they don't even believe! Yet high-status academic economists are proud to accept these jobs - and their colleagues admire them for doing so. ...

Noah Smith has something to say about "definitive proof":

"Science" without falsification is no science, by Noah Smith: Simon Wren-Lewis notes that although plenty of new macroeconomics has been added in response to the recent crisis/depression, nothing has been thrown out...

Four years after a huge deflationary shock with no apparent shock to technology, asset-pricing papers and labor search papers and international finance papers and even some business-cycle papers continue to use models in which business cycles are driven by technology shocks. No theory seems to have been thrown out. And these are young economists writing these papers, so it's not a generational effect. ...

If smart people don't agree, it may because they are waiting for new evidence or because they don't understand each other's math. But if enough time passes and people are still having the same arguments they had a hundred years ago - as is exactly the case in macro today - then we have to conclude that very little is being accomplished in the field. The creation of new theories does not represent scientific progress until it is matched by the rejection of failed alternative theories.

The root problem here is that macroeconomics seems to have no commonly agreed-upon criteria for falsification of hypotheses. Time-series data - in other words, watching history go by and trying to pick out recurring patterns - does not seem to be persuasive enough to kill any existing theory. Nobody seems to believe in cross-country regressions. And there are basically no macro experiments. ...

So as things stand, macro is mostly a "science" without falsification. In other words, it is barely a science at all. Microeconomists know this. The educated public knows this. And that is why the prestige of the macro field is falling. The solution is for macroeconomists to A) admit their ignorance more often (see this Mankiw article and this Cochrane article for good examples of how to do this), and B) search for better ways to falsify macro theories in a convincing way.

I have a slightly different take on this. From a column last summer:

What Caused the Financial Crisis? Don’t Ask An Economist, by Mark Thoma: What caused the financial crisis that is still reverberating through the global economy? Last week’s 4th Nobel Laureate Meeting in Lindau, Germany – a meeting that brings Nobel laureates in economics together with several hundred young economists from all over the world – illustrates how little agreement there is on the answer to this important question.
Surprisingly, the financial crisis did not receive much attention at the conference. Many of the sessions on macroeconomics and finance didn’t mention it at all, and when it was finally discussed, the reasons cited for the financial meltdown were all over the map.
It was the banks, the Fed, too much regulation, too little regulation, Fannie and Freddie, moral hazard from too-big-to-fail banks, bad and intentionally misleading accounting, irrational exuberance, faulty models, and the ratings agencies. In addition, factors I view as important contributors to the crisis, such as the conditions that allowed troublesome runs on the shadow banking system after regulators let Lehman fail, were hardly mentioned.
Macroeconomic models have not fared well in recent years – the models didn’t predict the financial crisis and gave little guidance to policymakers, and I was anxious to hear the laureates discuss what macroeconomists need to do to fix them. So I found the lack of consensus on what caused the crisis distressing. If the very best economists in the profession cannot come to anything close to agreement about why the crisis happened almost four years after the recession began, how can we possibly address the problems? ...
How can some of the best economists in the profession come to such different conclusions? A big part of the problem is that macroeconomists have not settled on a single model of the economy, and the various models often deliver very different, contradictory advice on how to solve economic problems. The basic problem is that economics is not an experimental science. We use historical data rather than experimental data, and it’s possible to construct more than one model that explains the historical data equally well. Time and more data may allow us to settle on a particular model someday – as new data arrives it may favor one model over the other – but as long as this problem is present, macroeconomists will continue to hold opposing views and give conflicting advice.
This problem is not just of concern to macroeconomists; it has contributed to the dysfunction we are seeing in Washington as well. When Republicans need to find support for policies such as deregulation, they can enlist prominent economists – Nobel laureates perhaps – to back them up. Similarly, when Democrats need support for proposals to increase regulation, they can also count noted economists in their camp. If economists were largely unified, it would be harder for differences in Congress to persist, but unfortunately such unanimity is not generally present.
This divide in the profession also increases the possibility that the public will be sold false or misleading ideas intended to promote an ideological or political agenda.  If the experts disagree, how is the public supposed to know what to believe? They often don’t have the expertise to analyze policy initiatives on their own, so they rely on experts to help them. But when the experts disagree at such a fundamental level, the public can no longer trust what it hears, and that leaves it vulnerable to people peddling all sorts of crazy ideas.
When the recession began, I had high hopes that it would help us to sort between competing macroeconomic models. As noted above, it's difficult to choose one model over another because the models do equally well at explaining the past. But this recession is so unlike any event for which there is existing data that it pushes the models into new territory that tests their explanatory power (macroeconomic data does not exist prior to 1947 in most cases, so it does not include the Great Depression). But, disappointingly, even though I believe the data point clearly toward models that emphasize the demand side rather than the supply side as the source of our problems, the crisis has not propelled us toward a particular class of models as would be expected in a data-driven, scientific discipline. Instead, the two sides have dug in their heels and the differences – many of which have been aired in public – have become larger and more contentious than ever.

Finally, on the usefulness of microeconomic models for macroeconomists -- what is known as microfoundations -- see here: The Macroeconomic Foundations of Microeconomics.

Update: See here too: Why Economists Can't Agree, another column of mine from the past, and also Simon Wren-Lewis: What microeconomists think about macroeconomics.

Thursday, June 14, 2012

"Inflation Targeting is Dead"

Jeff Frankel takes up the question of inflation targeting versus nominal GDP targeting, and concludes that nominal GDP targeting has many advantages:

Nominal GDP Targeting Could Take the Place of Inflation Targeting, by Jeff Frankel: In my preceding blogpost, I argued that the developments of the last five years have sharply pointed up the limitations of Inflation Targeting (IT)...   But if IT is dead, what is to take its place as an intermediate target that central banks can use to anchor expectations?
The leading candidate to take the position of preferred nominal anchor is probably Nominal GDP Targeting.  It has gained popularity rather suddenly, over the last year.  But the idea is not new.  It had been a candidate to succeed money targeting in the 1980s, because it did not share the latter’s vulnerability to shifts in money demand.  Under certain conditions, it dominates not only a money target (due to velocity shocks) but also an exchange rate target  (if exchange rate shocks are large) and a price level target (if supply shocks are large).   First proposed by James Meade (1978), it attracted the interest in the 1980s of such eminent economists as Jim Tobin (1983), Charlie Bean (1983), Bob Gordon (1985), Ken West (1986), Martin Feldstein & Jim Stock (1994), Bob Hall & Greg Mankiw (1994), Ben McCallum (1987, 1999), and others.
Nominal GDP targeting was not adopted by any country in the 1980s.  Amazingly, the founders of the European Central Bank in the 1990s never even considered it on their list of possible anchors for euro monetary policy.  ...
But now nominal GDP targeting is back, thanks to enthusiastic blogging by Scott Sumner (at Money Illusion), Lars Christensen (at Market Monetarist), David Beckworth (at Macromarket Musings), Marcus Nunes (at Historinhas) and others.  Indeed, the Economist has held up the successful revival of this idea as an example of the benefits to society of the blogosphere.  Economists at Goldman Sachs have also come out in favor. 
Fans of nominal GDP targeting point out that it would not, like Inflation Targeting, have the problem of excessive tightening in response to adverse supply shocks. ...
In the long term, the advantage of a regime that targets nominal GDP is that it is more robust with respect to shocks than the competitors (gold standard, money target, exchange rate target, or CPI target).   But why has it suddenly gained popularity at this point in history...?  Nominal GDP targeting might also have another advantage in the current unfortunate economic situation that afflicts much of the world:  Its proponents see it as a way of achieving a monetary expansion that is much-needed at the current juncture.
Monetary easing in advanced countries since 2008, though strong, has not been strong enough to bring unemployment down rapidly nor to restore output to potential.  It is hard to get the real interest rate down when the nominal interest rate is already close to zero. This has led some, such as Olivier Blanchard and Paul Krugman, to recommend that central banks announce a higher inflation target: 4 or 5 per cent.  ...  But most economists, and an even higher percentage of central bankers, are loath to give up the anchoring of expected inflation at 2 per cent which they fought so long and hard to achieve in the 1980s and 1990s.  Of course one could declare that the shift from a 2 % target to 4 % would be temporary.  But it is hard to deny that this would damage the long-run credibility of the sacrosanct 2% number.   An attraction of nominal GDP targeting is that one could set a target for nominal GDP that constituted 4 or 5% increase over the coming year - which for a country teetering on the fence between recovery and recession would in effect supply as much monetary ease as a 4% inflation target - and yet one would not be giving up the hard-won emphasis on 2% inflation as the long-run anchor.
Thus nominal GDP targeting could help address our current problems as well as provide a durable monetary regime for the future.

It's hard to figure out how to fix the world if you don't have a reliable model that can explain what went wrong. The optimal money rule in a model depends upon the way in which changes in monetary policy are transmitted to the real economy. Is it because of price rigidities? Wage rigidities? Information problems? Credit frictions and rationing? The best response to a negative shock to the economy varies depending upon what type of model the investigator is using.

Thus, for the moment we need robust rules. Inflation targeting works well in models with Calvo-type price rigidities, and a Taylor-type rule often emerges from models in this general class, but is this the most robust rule in the face of model uncertainty? We don't know the true model of the macroeconomy; that ought to be clear at this point. Does inflation targeting work well when the underlying problem is a breakdown in financial intermediation or other big problems in the financial sector? I'm not at all convinced that it does -- some of the best remedies in this case involve abandoning a strict adherence to an inflation target in the short run.

So, in the best of all worlds I'd prefer to have a model of the economy that works, find the optimal policy rule for that model, and then execute it. In the world we live in, I want robust rules -- rules that work well in a variety of models and in the face of a variety of different types of shocks (or at least recognize that the rule has to change when the source of the problem switches from, say, price rigidities to a breakdown in financial intermediation). One message that comes out of the description of NGDP targeting above is that this approach does appear to be more robust than inflation targeting. It's not always better; in some models a standard Taylor-type rule is the best that can be done. But it's becoming harder and harder to believe that the Great Recession can be adequately described by models of this type, and hence hard to believe that we are well served by policy rules that assume price rigidities are the main source of economic fluctuations.
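
To illustrate the robustness point, here is a back-of-the-envelope sketch of my own; it is not from Frankel's post, the coefficients are the usual textbook Taylor-rule values, and the shock numbers are invented. Facing an adverse supply shock that pushes inflation up while real growth falls, a standard Taylor rule tightens into the slump, while a simple nominal-GDP-growth rule, which nets the inflation and real-growth movements against each other, eases a bit instead.

```python
# A rough comparison (mine, hypothetical numbers) of a Taylor rule and a simple
# nominal GDP growth rule in the face of an adverse supply shock.

def taylor_rate(inflation, output_gap, r_star=2.0, pi_star=2.0, a=0.5, b=0.5):
    """Textbook Taylor-style rule: respond to the inflation gap and the output gap."""
    return r_star + inflation + a * (inflation - pi_star) + b * output_gap

def ngdp_rate(ngdp_growth, target=4.5, r_star=2.0, pi_star=2.0, c=1.0):
    """Simple rule keyed only to the shortfall of nominal GDP growth from its target."""
    return r_star + pi_star + c * (ngdp_growth - target)

# Adverse supply shock (made up): inflation rises to 4%, real growth falls to 0%,
# and the output gap is -1%.  Nominal GDP growth = real growth + inflation = 4%.
inflation, real_growth, output_gap = 4.0, 0.0, -1.0
ngdp_growth = real_growth + inflation

print("Taylor rule:", taylor_rate(inflation, output_gap), "%")  # 6.5% -- tightens into the slump
print("NGDP rule:  ", ngdp_rate(ngdp_growth), "%")              # 3.5% -- eases instead
```

Neither response is "right" in the abstract; which one is appropriate depends on the true model, which is exactly the model-uncertainty problem described above.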

Sunday, May 20, 2012

Did Samuelson and Solow Really Claim that the Phillips Curve was a Structural Relationship?

Like Robert Waldmann, I have always taught that the Phillips curve was initially promoted as a permanent tradeoff between inflation and unemployment. It was thought to be a menu of choices that allowed most any unemployment rate to be achieved so long as we were willing to accept the required inflation rate (a look at scatterplots from the UK and the US made it appear that the relationship was stable).

However, the story goes, Milton Friedman argued this was incorrect in his 1968 presidential address to the AEA. Estimates of the Phillips curve that produced stable looking relationships were based upon data from time periods when inflation expectations were stable and unchanging. Friedman warned that if policymakers tried to exploit this relationship and inflation expectations changed, the Phillips curve would shift in a way that would give policymakers the inflation they were after, but the unemployment rate would be unchanged. There would be costs (higher inflation), but not benefits (lower unemployment). When subsequent data appeared to validate Friedman's prediction, the New Classical, rational expectations, microfoundations view of the world began to gain credibility over the old Keynesian model (though the Keynesians eventually emerged with a New Keynesian model that has microfoundations, rational expectations, etc., and overcomes some of the problems with the New Classical model).
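
Friedman's argument is easy to reproduce with a toy expectations-augmented Phillips curve. The sketch below is mine, not from any of the posts discussed here, and the numbers (a 5% natural rate, adaptive expectations, a slope of 0.5) are invented for illustration: holding unemployment a point below the natural rate requires ever-rising inflation, and at any constant inflation rate the apparent tradeoff disappears once expectations catch up, so the long-run curve is vertical.

```python
# Toy expectations-augmented Phillips curve (my illustration, hypothetical numbers):
#   inflation_t = expected_inflation_t + slope * (u_natural - u_t)
#   expected_inflation_{t+1} = inflation_t          (adaptive expectations)
u_natural = 5.0     # natural rate of unemployment, percent
slope = 0.5         # short-run tradeoff
expected_inflation = 2.0

print("year  unemployment  inflation")
for year in range(1, 9):
    u = 4.0                                   # policy holds u one point below the natural rate
    inflation = expected_inflation + slope * (u_natural - u)
    print(f"{year:>4}  {u:>12.1f}  {inflation:>9.1f}")
    expected_inflation = inflation            # expectations catch up; the curve shifts up
```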

Robert Waldmann argues that the premise of this story -- that Samuelson and Solow thought the Phillips curve represented a permanent, exploitable tradeoff between inflation and unemployment -- is wrong:

The Short and Long-Run Phillips Curves: Did Samuelson and Solow claim that the Phillips Curve was a structural relationship showing a permanent tradeoff between inflation and unemployment? James Forder says no.

Paul Krugman, John Quiggin and others (including me) have argued that the one success of the critics of old Keynesian economics is the prediction that high inflation would become persistent and lead to stagflation. The old Keynesian error was to assume that the reduced form Phillips curve was a structural equation -- an economic law not a coincidence.

Quiggin and many others including me have noted that Keynes did not make this old Keynesian error... The old Keynesian error, if it occurred, was made later. I have claimed (in a lecture to surprised students) that it was made by Samuelson and Solow. Was it ?

This is an important question in the history of economic thought, because the alleged error serves as a demonstration of the necessity of basing macroeconomics on microeconomic foundations. For a decade or two (roughly 1980 through roughly 1990 something) it was widely accepted that, to avoid such errors, macroeconomists had to assume that agents have rational expectations even though we don't.

The pattern of a gross error by two economists with impressive track records and an important success based on an approach which has had difficulty forecasting or even dealing with real events ever since made me suspect that the actual claims of Samuelson and Solow have been distorted by their critics. To be frank, this guess is also based on a strong sense that the approach of Friedman and Lucas to rhetoric and debate is more brilliant than fair.

I am very lazy, so I have been planning to Google some for months. I finally did. ... I googled samuelson solow phillips curve

The third hit is the 2010 paper by Forder which discusses Samuelson and Solow (1960) (which I have never read). ... Forder quotes p 189
'What is most interesting is the strong suggestion that the relation, such as it is, has shifted upward slightly but noticeably in the forties and fifties'
So in the paper which allegedly claimed that the Phillips curve is stable, Solow and Samuelson said it had shifted up. Rather sooner than Friedman and Phelps no ?

So how has it become an accepted fact that Samuelson and Solow said the Phillips curve was stable ? This fact is held to be vitally centrally important to the debate about macroeconomic methodology and it is obviously not a fact at all. How can it be that a claim about what was written in one short clear paper is so central to the debate and that no one checks it ?

They did caption a figure with a Phillips curve "a menu of policy choices" but (OK this is a paraphrase not a quote)
After this they emphasized – again – that these 'guesses' related only to the 'next few years', and suggested that a low-demand policy might either improve the tradeoff by affecting expectations, or worsen it by generating greater structural unemployment. Then, considering the even longer run, they suggest that a low-demand policy might improve the efficiency of allocation and thereby speed growth, or, rather more graphically, that the result might be that it 'produced class warfare and social conflict and depress the level of research and technical progress' with the result that the rate of growth would fall.
So, finally after months of procrastinating, I spent a few minutes (at home without access to JStor) checking the claim that is central to the debate on macroeconomic methodology and found a very convincing argument that it is nonsense.

If that were possible, this experience would lower my opinion of macroeconomists (as always Robert Waldmann explicitly included).

Sunday, April 29, 2012

More INET Videos: Sessions on Complexity and Fiscal Policy

These videos are from the recent INET conference in Berlin:

Taking Stock of Complexity Economics: Which Problems Does It Illuminate?

Moderator

  • Thomas Homer-Dixon, Director, Waterloo Institute for Complexity and Innovation, University of Waterloo [On Farmer Video]

Speakers

Does the Effectiveness of Fiscal Stimulus Depend on the Context? Balance Sheet Overhangs, Open Economy Leakages, and Idle Resources

Moderator

  • Robin Wells, former Research Professor of Economics at Princeton University [On Corsetti Video]

Speakers