Category Archive for: Macroeconomics [Return to Main]

Thursday, August 27, 2015

'The Day Macroeconomics Changed'

Simon Wren-Lewis:

The day macroeconomics changed: It is of course ludicrous, but who cares. The day of the Boston Fed conference in 1978 is fast taking on a symbolic significance. It is the day that Lucas and Sargent changed how macroeconomics was done. Or, if you are Paul Romer, it is the day that the old guard spurned the ideas of the newcomers, and ensured we had a New Classical revolution in macro rather than a New Classical evolution. Or if you are Ray Fair..., who was at the conference, it is the day that macroeconomics started to go wrong.
Ray Fair is a bit of a hero of mine. ...
I agree with Ray Fair that what he calls Cowles Commission (CC) type models, and I call Structural Econometric Model (SEM) type models, together with the single equation econometric estimation that lies behind them, still have a lot to offer, and that academic macro should not have turned its back on them. Having spent the last fifteen years working with DSGE models, I am more positive about their role than Fair is. Unlike Fair, I want “more bells and whistles on DSGE models”. I also disagree about rational expectations...
Three years ago, when Andy Haldane suggested that DSGE models were partly to blame for the financial crisis, I wrote a post that was critical of Haldane. What I thought then, and continue to believe, is that the Bank had the information and resources to know what was happening to bank leverage, and it should not be using DSGE models as an excuse for not being more public about their concerns at the time.
However, if we broaden this out from the Bank to the wider academic community, I think he has a legitimate point. ...
What about the claim that only internally consistent DSGE models can give reliable policy advice? For another project, I have been rereading an AEJ Macro paper written in 2008 by Chari et al, where they argue that New Keynesian models are not yet useful for policy analysis because they are not properly microfounded. They write “One tradition, which we prefer, is to keep the model very simple, keep the number of parameters small and well-motivated by micro facts, and put up with the reality that such a model neither can nor should fit most aspects of the data. Such a model can still be very useful in clarifying how to think about policy.” That is where you end up if you take a purist view about internal consistency, the Lucas critique and all that. It in essence amounts to the following approach: if I cannot understand something, it is best to assume it does not exist.

Wednesday, August 26, 2015

Ray Fair: The Future of Macro

Ray Fair:

The Future of Macro: There is an interesting set of recent blogs--- Paul Romer 1, Paul Romer 2, Brad DeLong, Paul Krugman, Simon Wren-Lewis, and Robert Waldmann---on the history of macro beginning with the 1978 Boston Fed conference, with Lucas and Sargent versus Solow. As Romer notes, I was at this conference and presented a 97-equation model. This model was in the Cowles Commission (CC) tradition, which, as the blogs note, quickly went out of fashion after 1978. (In the blogs, models in the CC tradition are generally called simulation models or structural econometric models or old fashioned models. Below I will call them CC models.)
I will not weigh in on who was responsible for what. Instead, I want to focus on what future direction macro research might take. There is unhappiness in the blogs, to varying degrees, with all three types of models: DSGE, VAR, CC. Also, Wren-Lewis points out that while other areas of economics have become more empirical over time, macroeconomics has become less. The aim is for internal theoretical consistency rather than the ability to track the data.
I am one of the few academics who has continued to work with CC models. They were rejected for basically three reasons: they do not assume rational expectations (RE), they are not identified, and the theory behind them is ad hoc. This sounds serious, but I think it is in fact not. ...

He goes on to explain why. He concludes with:

... What does this imply about the best course for future research? I don't get a sense from the blog discussions that either the DSGE methodology or the VAR methodology is the way to go. Of course, no one seems to like the CC methodology either, but, as I argue above, I think it has been dismissed too easily. I have three recent methodological papers arguing for its use: Has Macro Progressed?, Reflections on Macroeconometric Modeling, and Information Limits of Aggregate Data. I also show in Household Wealth and Macroeconomic Activity: 2008--2013 that CC models can be used to examine a number of important questions about the 2008--2009 recession, questions that are hard to answer using DSGE or VAR models.
So my suggestion for future macro research is not more bells and whistles on DSGE models, but work specifying and estimating stochastic equations in the CC tradition. Alternative theories can be tested and hopefully progress can be made on building models that explain the data well. We have much more data now and better techniques than we did in 1978, and we should be able to make progress and bring macroeconomics back to its empirical roots.
For those who want more detail, I have gathered all of my research in macro in one place: Macroeconometric Modeling, November 11, 2013.

Sunday, August 23, 2015

'Young Economists Feel They Have to be Very Cautious'

From an interview of Paul Romer in the WSJ:

...Q: What kind of feedback have you received from colleagues in the profession?

A: I tried these ideas on a few people, and the reaction I basically got was “don’t make waves.” As people have had time to react, I’ve been hearing a bit more from people who appreciate me bringing these issues to the forefront. The most interesting feedback is from young economists who say that they feel that they have to be very cautious, and they don’t want to get somebody cross at them. There’s a concern by young economists that if they deviate from what’s acceptable, they’ll get in trouble. That also seemed to me to be a sign of something that is really wrong. Young people are the ones who often come in and say, “You all have been thinking about this the wrong way, here’s a better way to think about it.”

Q: Are there any areas where research or refinements in methodology have brought us closer to understanding the economy?

A: There was an interesting [2013] Nobel prize in [economics], where they gave the prize to people who generally came to very different conclusions about how financial markets work. Gene Fama ... got it for the efficient markets hypothesis. Robert Shiller ... for this view that these markets are not efficient...

It was striking because usually when you give a prize, it’s because in the sciences, you’ve converged to a consensus. ...

Friday, August 21, 2015

'Scientists Do Not Demonize Dissenters. Nor Do They Worship Heroes.'

Paul Romer's latest entry on "mathiness" in economics ends with:

Reactions to Solow’s Choice: ...Politics maps directly onto our innate moral machinery. Faced with any disagreement, our moral systems respond by classifying people into our in-group and the out-group. They encourage us to be loyal to members of the in-group and hostile to members of the out-group. The leaders of an in-group demand deference and respect. In selecting leaders, we prize unwavering conviction.
Science can’t function with the personalization of disagreement that these reactions encourage. The question of whether Joan Robinson is someone who is admired and respected as a scientist has to be separated from the question about whether she was right that economists could reason about rates of return in a model that does not have an explicit time dimension.
The only in-group versus out-group distinction that matters in science is the one that distinguishes people who can live by the norms of science from those who cannot. Feynman integrity is the marker of an insider.
In this group, it is flexibility that commands respect, not unwavering conviction. Clearly articulated disagreement is encouraged. Anyone’s claim is subject to challenge. Someone who is right about A can be wrong about B.
Scientists do not demonize dissenters. Nor do they worship heroes.

[The reference to Joan Robinson is clarified in the full text.]

Monday, August 17, 2015

Stiglitz: Towards a General Theory of Deep Downturns

This is the abstract, introduction, and final section of a recent paper by Joe Stiglitz on theoretical models of deep depressions (as he notes, it's "an extension of the Presidential Address to the International Economic Association"):

Towards a General Theory of Deep Downturns, by Joseph E. Stiglitz, NBER Working Paper No. 21444, August 2015: Abstract This paper, an extension of the Presidential Address to the International Economic Association, evaluates alternative strands of macro-economics in terms of the three basic questions posed by deep downturns: What is the source of large perturbations? How can we explain the magnitude of volatility? How do we explain persistence? The paper argues that while real business cycles and New Keynesian theories with nominal rigidities may help explain certain historical episodes, alternative strands of New Keynesian economics focusing on financial market imperfections, credit, and real rigidities provide a more convincing interpretation of deep downturns, such as the Great Depression and the Great Recession, giving a more plausible explanation of the origins of downturns, their depth and duration. Since excessive credit expansions have preceded many deep downturns, particularly important is an understanding of finance, the credit creation process and banking, which in a modern economy are markedly different from the way envisioned in more traditional models.
Introduction The world has been plagued by episodic deep downturns. The crisis that began in 2008 in the United States was the most recent, the deepest and longest in three quarters of a century. It came in spite of alleged “better” knowledge of how our economic system works, and belief among many that we had put economic fluctuations behind us. Our economic leaders touted the achievement of the Great Moderation.[2] As it turned out, belief in those models actually contributed to the crisis. It was the assumption that markets were efficient and self-regulating and that economic actors had the ability and incentives to manage their own risks that had led to the belief that self-regulation was all that was required to ensure that the financial system worked well, and that there was no need to worry about a bubble. The idea that the economy could, through diversification, effectively eliminate risk contributed to complacency — even after it was evident that there had been a bubble. Indeed, even after the bubble broke, Bernanke could boast that the risks were contained.[3] These beliefs were supported by (pre-crisis) DSGE models — models which may have done well in more normal times, but had little to say about crises. Of course, almost any “decent” model would do reasonably well in normal times. And it mattered little if, in normal times, one model did a slightly better job in predicting next quarter’s growth. What matters is predicting — and preventing — crises, episodes in which there is an enormous loss in well-being. These models did not see the crisis coming, and they had given confidence to our policy makers that, so long as inflation was contained — and monetary authorities boasted that they had done this — the economy would perform well. At best, they can be thought of as (borrowing the term from Guzman (2014)) “models of the Great Moderation,” predicting “well” so long as nothing unusual happens.
More generally, the DSGE models have done a poor job explaining the actual frequency of crises.[4]
Of course, deep downturns have marked capitalist economies since the beginning. It took enormous hubris to believe that the economic forces which had given rise to crises in the past were either not present, or had been tamed, through sound monetary and fiscal policy.[5] It took even greater hubris given that in many countries conservatives had succeeded in dismantling the regulatory regimes and automatic stabilizers that had helped prevent crises since the Great Depression. It is noteworthy that my teacher, Charles Kindleberger, in his great study of the booms and panics that afflicted market economies over the past several hundred years, had noted similar hubris exhibited in earlier crises. (Kindleberger, 1978)
Those who attempted to defend the failed economic models and the policies which were derived from them suggested that no model could (or should) predict well a “once in a hundred year flood.” But it was not just a hundred year flood — crises have become common. It was not just something that had happened to the economy. The crisis was man-made — created by the economic system. Clearly, something is wrong with the models.
Studying crises is important, not just to prevent these calamities and to understand how to respond to them — though I do believe that the same inadequate models that failed to predict the crisis also failed in providing adequate responses. (Although those in the US Administration boast about having prevented another Great Depression, I believe the downturn was certainly far longer, and probably far deeper, than it needed to have been.) I also believe understanding the dynamics of crises can provide us insight into the behavior of our economic system in less extreme times.
This lecture consists of three parts. In the first, I will outline the three basic questions posed by deep downturns. In the second, I will sketch the three alternative approaches that have competed with each other over the past three decades, suggesting that one is a far better basis for future research than the other two. The final section will center on one aspect of that third approach that I believe is crucial — credit. I focus on the capitalist economy as a credit economy, and how viewing it in this way changes our understanding of the financial system and monetary policy. ...

He concludes with:

IV. The crisis in economics The 2008 crisis was not only a crisis in the economy, but it was also a crisis for economics — or at least that should have been the case. As we have noted, the standard models didn’t do very well. The criticism is not just that the models did not anticipate or predict the crisis (even shortly before it occurred); they did not contemplate the possibility of a crisis, or at least a crisis of this sort. Because markets were supposed to be efficient, there weren’t supposed to be bubbles. The shocks to the economy were supposed to be exogenous: this one was created by the market itself. Thus, the standard model said the crisis couldn’t or wouldn’t happen; and the standard model had no insights into what generated it.
Not surprisingly, as we again have noted, the standard models provided inadequate guidance on how to respond. Even after the bubble broke, it was argued that diversification of risk meant that the macroeconomic consequences would be limited. The standard theory also has had little to say about why the downturn has been so prolonged: Years after the onset of the crisis, large parts of the world are operating well below their potential. In some countries and in some dimension, the downturn is as bad or worse than the Great Depression. Moreover, there is a risk of significant hysteresis effects from protracted unemployment, especially of youth.
The Real Business Cycle and New Keynesian Theories got off to a bad start. They originated out of work undertaken in the 1970s attempting to reconcile the two seemingly distant branches of economics, macro-economics, centering on explaining the major market failure of unemployment, and microeconomics, the centerpiece of which was the Fundamental Theorems of Welfare Economics, demonstrating the efficiency of markets.[66] Real Business Cycle Theory (and its predecessor, New Classical Economics) took one route: using the assumptions of standard micro-economics to construct an analysis of the aggregative behavior of the economy. In doing so, they left Hamlet out of the play: almost by assumption unemployment and other market failures didn’t exist. The timing of their work couldn’t have been worse: for it was just around the same time that economists developed alternative micro-theories, based on asymmetric information, game theory, and behavioral economics, which provided better explanations of a wide range of micro-behavior than did the traditional theory on which the “new macro-economics” was being constructed. At the same time, Sonnenschein (1972) and Mantel (1974) showed that the standard theory provided essentially no structure for macro-economics — essentially any demand or supply function could have been generated by a set of diverse rational consumers. It was the unrealistic assumption of the representative agent that gave theoretical structure to the macro-economic models that were being developed. (As we noted, New Keynesian DSGE models were but a simple variant of these Real Business Cycles, assuming nominal wage and price rigidities — with explanations, we have suggested, that were hardly persuasive.)
There are alternative models to both Real Business Cycles and the New Keynesian DSGE models that provide better insights into the functioning of the macroeconomy, and are more consistent with micro-behavior, with new developments of micro-economics, with what has happened in this and other deep downturns. While these new models differ from the older ones in a multitude of ways, at the center of these models is a wide variety of financial market imperfections and a deep analysis of the process of credit creation. These models provide alternative (and I believe better) insights into what kinds of macroeconomic policies would restore the economy to prosperity and maintain macro-stability.
This lecture has attempted to sketch some elements of these alternative approaches. There is a rich research agenda ahead.

Tuesday, August 11, 2015

Macroeconomics: The Roads Not Yet Taken

My editor suggested that I might want to write about an article in New Scientist, After the crash, can biologists fix economics?, so I did:

Macroeconomics: The Roads Not Yet Taken: Anyone who is even vaguely familiar with economics knows that modern macroeconomic models did not fare well before and during the Great Recession. For example, when the recession hit many of us reached into the policy response toolkit provided by modern macro models and came up mostly empty.
The problem was that modern models were built to explain the period of mild economic fluctuations known as the Great Moderation, and while the models provided very good policy advice in that setting they had little to offer in response to major economic downturns. That changed to some extent as the recession dragged on and modern models were quickly amended to incorporate important missing elements, but even then the policy advice was far from satisfactory and mostly echoed what we already knew from the “old-fashioned” Keynesian model. (The Keynesian model was built to answer the important policy questions that come with major economic downturns, so it is not surprising that amended modern models reached many of the same conclusions.)
How can we fix modern models? ...

The Macroeconomic Divide

Paul Krugman:

Trash Talk and the Macroeconomic Divide: ... In Lucas and Sargent, much is made of stagflation; the coexistence of inflation and high unemployment is their main, indeed pretty much only, piece of evidence that all of Keynesian economics is useless. That was wrong, but never mind; how did they respond in the face of strong evidence that their own approach didn’t work?
Such evidence wasn’t long in coming. In the early 1980s the Federal Reserve sharply tightened monetary policy; it did so openly, with much public discussion, and anyone who opened a newspaper should have been aware of what was happening. The clear implication of Lucas-type models was that such an announced, well-understood monetary change should have had no real effect, being reflected only in the price level.
In fact, however, there was a very severe recession — and a dramatic recovery once the Fed, again quite openly, shifted toward monetary expansion.
These events definitely showed that Lucas-type models were wrong, and also that anticipated monetary shocks have real effects. But there was no reconsideration on the part of the freshwater economists; my guess is that they were in part trapped by their earlier trash-talking. Instead, they plunged into real business cycle theory (which had no explanation for the obvious real effects of Fed policy) and shut themselves off from outside ideas. ...
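The proposition Krugman says the Volcker disinflation refuted can be put in one line. In Lucas-type models, output responds only to the unanticipated part of a monetary change, so a fully announced tightening should show up in prices alone. A minimal sketch of that logic (the surprise-supply form and the parameter value are illustrative, not taken from the post):

```python
# Lucas-style "surprise supply": output gap y = beta * (m - E[m]),
# so only unanticipated money movements have real effects.
# beta is an assumed, illustrative response parameter.

beta = 0.5

def output_gap(m, expected_m, beta=beta):
    """Output response to money growth m given the expectation E[m]."""
    return beta * (m - expected_m)

# An announced, well-understood tightening: agents expect it fully,
# so the model predicts no real effect -- only the price level adjusts.
anticipated = output_gap(m=-2.0, expected_m=-2.0)

# The same tightening arriving as a surprise does move output.
unanticipated = output_gap(m=-2.0, expected_m=0.0)

print(anticipated)    # 0.0
print(unanticipated)  # -1.0
```

The early-1980s episode behaved like the second case even though the policy change was openly announced, which is the sense in which the anticipated/unanticipated distinction failed the test.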

Tuesday, August 04, 2015

'Sarcasm and Science'

On the road again, so just a couple of quick posts. This is Paul Krugman:

Sarcasm and Science: Paul Romer continues his discussion of the wrong turn of freshwater economics, responding in part to my own entry, and makes a surprising suggestion — that Lucas and his followers were driven into their adversarial style by Robert Solow’s sarcasm...
Now, it’s true that people can get remarkably bent out of shape at the suggestion that they’re being silly and foolish. ...
But Romer’s account of the great wrong turn still sounds much too contingent to me...
At least as I perceived it then — and remember, I was a grad student as much of this was going on — there were two other big factors.
First, there was a political component. Equilibrium business cycle theory denied that fiscal or monetary policy could play a useful role in managing the economy, and this was a very appealing conclusion on one side of the political spectrum. This surely was a big reason the freshwater school immediately declared total victory over Keynes well before its approach had been properly vetted, and why it could not back down when the vetting actually took place and the doctrine was found wanting.
Second — and this may be less apparent to non-economists — there was the toolkit factor. Lucas-type models introduced a new set of modeling and mathematical tools — tools that required a significant investment of time and effort to learn, but which, once learned, let you impress everyone with your technical proficiency. For those who had made that investment, there was a real incentive to insist that models using those tools, and only models using those tools, were the way to go in all future research. ...
And of course at this point all of these factors have been greatly reinforced by the law of diminishing disciples: Lucas’s intellectual grandchildren are utterly unable to consider the possibility that they might be on the wrong track.

Sunday, August 02, 2015

'Freshwater’s Wrong Turn'

Paul Krugman follows up on Paul Romer's latest attack on "mathiness":

Freshwater’s Wrong Turn (Wonkish): Paul Romer has been writing a series of posts on the problem he calls “mathiness”, in which economists write down fairly hard-to-understand mathematical models accompanied by verbal claims that don’t actually match what’s going on in the math. Most recently, he has been recounting the pushback he’s getting from freshwater macro types, who see him as allying himself with evil people like me — whereas he sees them as having turned away from science toward a legalistic, adversarial form of pleading.
You can guess where I stand on this. But in his latest, he notes some of the freshwater types appealing to their glorious past, claiming that Robert Lucas in particular has a record of intellectual transparency that should insulate him from criticism now. PR replies that Lucas once was like that, but no longer, and asks what happened.
Well, I’m pretty sure I know the answer. ...

It's hard to do an extract capturing all the points, so you'll likely want to read the full post, but in summary:

So what happened to freshwater, I’d argue, is that a movement that started by doing interesting work was corrupted by its early hubris; the braggadocio and trash-talking of the 1970s left its leaders unable to confront their intellectual problems, and sent them off on the path Paul now finds so troubling.

Recent tweets, email, etc. in response to posts I've done on mathiness reinforce just how unwilling many are to confront their tribalism. In the past, I've blamed the problems in macro on, in part, the sociology within the profession (leading to a less than scientific approach to problems as each side plays the advocacy game) and nothing that has happened lately has altered that view.

Saturday, August 01, 2015

'Microfoundations 2.0?'

Daniel Little:

Microfoundations 2.0?: The idea that hypotheses about social structures and forces require microfoundations has been around for at least 40 years. Maarten Janssen’s New Palgrave essay on microfoundations documents the history of the concept in economics; link. E. Roy Weintraub was among the first to emphasize the term within economics, with his 1979 Microfoundations: The Compatibility of Microeconomics and Macroeconomics. During the early 1980s the contributors to analytical Marxism used the idea to attempt to give greater grip to some of Marx's key explanations (falling rate of profit, industrial reserve army, tendency towards crisis). Several such strategies are represented in John Roemer's Analytical Marxism. My own The Scientific Marx (1986) and Varieties of Social Explanation (1991) took up the topic in detail and relied on it as a basic tenet of social research strategy. The concept is strongly compatible with Jon Elster's approach to social explanation in Nuts and Bolts for the Social Sciences (1989), though the term itself does not appear in this book or in the 2007 revised edition.

Here is Janssen's description in the New Palgrave of the idea of microfoundations in economics:

The quest to understand microfoundations is an effort to understand aggregate economic phenomena in terms of the behavior of individual economic entities and their interactions. These interactions can involve both market and non-market interactions.
In The Scientific Marx the idea was formulated along these lines:
Marxist social scientists have recently argued, however, that macro-explanations stand in need of microfoundations; detailed accounts of the pathways by which macro-level social patterns come about. (1986: 127)

The requirement of microfoundations is both metaphysical -- our statements about the social world need to admit of microfoundations -- and methodological -- it suggests a research strategy along the lines of Coleman's boat (link). This is a strategy of disaggregation, a "dissecting" strategy, and a non-threatening strategy of reduction. (I am thinking here of the very sensible ideas about the scientific status of reduction advanced in William Wimsatt's "Reductive Explanation: A Functional Account"; link).

The emphasis on the need for microfoundations is a very logical implication of the position of "ontological individualism" -- the idea that social entities and powers depend upon facts about individual actors in social interactions and nothing else. (My own version of this idea is the notion of methodological localism; link.) It is unsupportable to postulate disembodied social entities, powers, or properties for which we cannot imagine an individual-level substrate. So it is natural to infer that claims about social entities need to be accompanied in some fashion by an account of how they are embodied at the individual level; and this is a call for microfoundations. (As noted in an earlier post, Brian Epstein has mounted a very challenging argument against ontological individualism; link.)
Another reason that the microfoundations idea is appealing is that it is a very natural way of formulating a core scientific question about the social world: "How does it work?" To provide microfoundations for a high-level social process or structure (for example, the falling rate of profit), we are looking for a set of mechanisms at the level of a set of actors within a set of social arrangements that result in the observed social-level fact. A call for microfoundations is a call for mechanisms at a lower level, answering the question, "How does this process work?"

In fact, the demand for microfoundations appears to be analogous to the question, why is glass transparent? We want to know what it is about the substrate at the individual level that constitutes the macro-fact of glass transmitting light. Organization type A is prone to normal accidents. What is it about the circumstances and actions of individuals in A-organizations that increases the likelihood of normal accidents?

One reason why the microfoundations concept was specifically appealing in application to Marx's social theories in the 1970s was the fact that great advances were being made in the field of collective action theory. Then-current interpretations of Marx's theories were couched at a highly structural level; but it seemed clear that it was necessary to identify the processes through which class interest, class conflict, ideologies, or states emerged in concrete terms at the individual level. (This is one reason I found E. P. Thompson's The Making of the English Working Class (1966) so enlightening.) Advances in game theory (assurance games, prisoners' dilemmas), Mancur Olson's demonstration of the gap between group interest and individual interest in The Logic of Collective Action: Public Goods and the Theory of Groups (1965), Thomas Schelling's brilliant unpacking of puzzling collective behavior onto underlying individual behavior in Micromotives and Macrobehavior (1978), Russell Hardin's further exposition of collective action problems in Collective Action (1982), and Robert Axelrod's discovery of the underlying individual behaviors that produce cooperation in The Evolution of Cooperation (1984) provided social scientists with new tools for reconstructing complex collective phenomena based on simple assumptions about individual actors. These were very concrete analytical resources that promised to help further explanations of complex social behavior. They provided a degree of confidence that important sociological questions could be addressed using a microfoundations framework.
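The Olson point mentioned above, the gap between group interest and individual interest, is exactly what a one-shot prisoner's dilemma exhibits. A minimal sketch (the payoff numbers are illustrative, not drawn from any of the cited books):

```python
# One-shot prisoner's dilemma: each player's dominant strategy is to
# defect ("D"), yet mutual cooperation ("C", "C") is better for the group.
# Payoffs are illustrative: (row player's payoff, column player's payoff).

PAYOFFS = {
    ("C", "C"): (3, 3),
    ("C", "D"): (0, 5),
    ("D", "C"): (5, 0),
    ("D", "D"): (1, 1),
}

def best_reply(opponent_move):
    """Row player's payoff-maximizing move against a fixed opponent move."""
    return max(["C", "D"], key=lambda my: PAYOFFS[(my, opponent_move)][0])

# Defection is dominant: it is the best reply whatever the other player does...
assert best_reply("C") == "D" and best_reply("D") == "D"

# ...yet the group does strictly better under mutual cooperation.
assert sum(PAYOFFS[("C", "C")]) > sum(PAYOFFS[("D", "D")])
print("individual interest (defect) diverges from group interest (cooperate)")
```

Individually rational play lands both players at (1, 1) even though (3, 3) is available, which is the micro-level mechanism behind macro-level collective action failures.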

There are several important recent challenges to aspects of the microfoundations approach, however.

So what are the recent challenges? First, there is the idea that social properties are sometimes emergent in a strong sense: not derivable from facts about the components. This would seem to imply that microfoundations are not possible for such properties.

Second, there is the idea that some meso entities have stable causal properties that do not require explicit microfoundations in order to be scientifically useful. (An example would be Perrow's claim that certain forms of organizations are more conducive to normal accidents than others.) If we take this idea very seriously, then perhaps microfoundations are not crucial in such theories.

Third, there is the idea that meso entities may sometimes exert downward causation: they may influence events in the substrate which in turn influence other meso states, implying that there will be some meso-level outcomes for which there cannot be microfoundations exclusively located at the substrate level.

All of this implies that we need to take a fresh look at the theory of microfoundations. Is there a role for this concept in a research metaphysics in which only a very weak version of ontological individualism is postulated; where we give some degree of autonomy to meso-level causes; where we countenance either a weak or strong claim of emergence; and where we admit of full downward causation from some meso-level structures to patterns of individual behavior?

In one sense my own thinking about microfoundations has already incorporated some of these concerns; I've arrived at "microfoundations 1.1" in my own formulations. In particular, I have put aside the idea that explanations must incorporate microfoundations and instead embraced the weaker requirement of the availability of microfoundations (link). Essentially I relaxed the requirement to stipulate only that we must be confident that microfoundations exist, without actually producing them. And I've relied on the idea of "relative explanatory autonomy" to excuse the sociologist from the need to reproduce the microfoundations underlying the claim he or she advances (link).

But is this enough? There are weaker positions that could serve to replace the MF thesis. For now, the question is this: does the concept of microfoundations continue to do important work in the meta-theory of the social sciences?

I've talked about this many times, but it's worth making the point about aggregating from individual agents to macroeconomic aggregates once again (for one thing, it deals with the emergent-properties objection above, and it's the reason representative-agent models are used: they seem to avoid the aggregation issue). This is from Kevin Hoover:

... Exact aggregation requires that utility functions be identical and homothetic … Translated into behavioral terms, it requires that every agent subject to aggregation have the same preferences (you must share the same taste for chocolate with Warren Buffett) and those preferences must be the same except for a scale factor (Warren Buffett with an income of $10 billion per year must consume one million times as much chocolate as Warren Buffett with an income of $10,000 per year). This is not the world that we live in. The Sonnenschein-Mantel-Debreu theorem shows theoretically that, in an idealized general-equilibrium model in which each individual agent has a regularly specified preference function, aggregate excess demand functions inherit only a few of the regularity properties of the underlying individual excess demand functions: continuity, homogeneity of degree zero (i.e., the independence of demand from simple rescalings of all prices), Walras’s law (i.e., the sum of the value of all excess demands is zero), and that demand rises as price falls (i.e., that demand curves, ceteris paribus income effects, are downward sloping) … These regularity conditions are very weak and put so few restrictions on aggregate relationships that the theorem is sometimes called “the anything goes theorem.”
The importance of the theorem for the representative-agent model is that it cuts off any facile analogy between even empirically well-established individual preferences and preferences that might be assigned to a representative agent to rationalize observed aggregate demand. The theorem establishes that, even in the most favorable case, there is a conceptual chasm between the microeconomic analysis and the macroeconomic analysis. The reasoning of the representative-agent modelers would be analogous to a physicist attempting to model the macro-behavior of a gas by treating it as a single, room-size molecule. The theorem demonstrates that there is no warrant for the notion that the behavior of the aggregate is just the behavior of the individual writ large: the interactions among the individual agents, even in the most idealized model, shape in an exceedingly complex way the behavior of the aggregate economy. Not only does the representative-agent model fail to provide an analysis of those interactions, but it seems likely that they will defy an analysis that insists on starting with the individual, and it is certain that no one knows at this point how to begin to provide an empirically relevant analysis on that basis.
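To make Hoover's point concrete, here is a minimal numeric sketch (my illustration, not Hoover's) of why exact aggregation demands identical homothetic preferences: with Cobb-Douglas consumers, aggregate demand depends only on total income when everyone has the same expenditure share, but on the income distribution as soon as shares differ, so no representative agent keyed to total income alone can rationalize the aggregate.

```python
# Two goods, Cobb-Douglas consumers (illustrative numbers): agent i
# spends share a_i of income m_i on good 1, so demand is a_i * m_i / p1.

def aggregate_demand_good1(shares, incomes, p1=1.0):
    """Sum of individual Cobb-Douglas demands for good 1."""
    return sum(a * m / p1 for a, m in zip(shares, incomes))

# Identical homothetic preferences: only TOTAL income matters.
d_equal = aggregate_demand_good1([0.5, 0.5], [10_000, 10_000])
d_skewed = aggregate_demand_good1([0.5, 0.5], [19_000, 1_000])
assert d_equal == d_skewed  # both 10000: redistribution changes nothing

# Heterogeneous preferences: the same total income now yields different
# aggregate demand depending on who holds it, so no representative
# agent responding to total income alone can rationalize the aggregate.
h_equal = aggregate_demand_good1([0.9, 0.1], [10_000, 10_000])
h_skewed = aggregate_demand_good1([0.9, 0.1], [19_000, 1_000])
print(round(h_equal), round(h_skewed))  # 10000 17200
```

The same two households, with the same total income of $20,000, generate different aggregate demand once their preferences differ; this is the "conceptual chasm" in miniature.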

Sunday, July 26, 2015

'The F Story about the Great Inflation'

Simon Wren-Lewis:

The F story about the Great Inflation: Here F could stand for folk. The story that is often told by economists to their students goes as follows. After Phillips discovered his curve, which relates inflation to unemployment, Samuelson and Solow in 1960 suggested this implied a trade-off that policymakers could use. They could permanently have a bit less unemployment at the cost of a bit more inflation. Policymakers took up that option, but then could not understand why inflation didn’t just go up a bit, but kept on going up and up. Along came Milton Friedman to the rescue, who in a 1968 presidential address argued that inflation also depended on inflation expectations, which meant the long run Phillips curve was vertical and there was no permanent inflation unemployment trade-off. Policymakers then saw the light, and the steady rise in inflation seen in the 1960s and 1970s came to an end.
This is a neat little story, particularly if you like the idea that all great macroeconomic disasters stem from errors in mainstream macroeconomics. However even a half awake student should spot one small difficulty with this tale. Why did it take over 10 years for Friedman’s wisdom to be adopted by policymakers, while Samuelson and Solow’s alleged mistake seems to have been adopted quickly? Even if you think that the inflation problem only really started in the 1970s that imparts a 10 year lag into the knowledge transmission mechanism, which is a little strange.
However none of that matters, because this folk story is simply untrue. There has been some discussion of this in blogs (by Robert Waldmann in particular - see Mark Thoma here), and the best source on this is another F: James Forder. There are papers (e.g. here), but the most comprehensive source is now his book, which presents an exhaustive study of this folk story. It is, he argues, untrue in every respect. Not only did Samuelson and Solow not argue that there was a permanent inflation unemployment trade-off that policymakers could exploit, policymakers never believed there was such a trade-off. So how did this folk story arise? Quite simply from another F: Friedman himself, in his Nobel Prize lecture in 1977.
Forder discusses much else in his book, including the extent to which Friedman’s 1968 emphasis on the importance of expectations was particularly original (it wasn’t). He also describes how and why he thinks Friedman’s story became so embedded that it became folklore....
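For readers who want the mechanics behind the folk story, here is a textbook sketch (deliberately simplified, and not from Forder's book) of the expectations-augmented Phillips curve, showing why a permanent trade-off disappears once expectations adapt:

```python
# Textbook expectations-augmented Phillips curve (a standard sketch):
#     pi_t = pi_e_t + b * (u_star - u_t)
# Hold unemployment below the natural rate u_star and let expectations
# adapt, and inflation ratchets upward every period instead of settling
# at "a bit more": the long-run trade-off vanishes.

b, u_star = 0.5, 5.0
u_held = 4.0          # unemployment held one point below the natural rate
pi_e, path = 0.0, []
for _ in range(5):
    pi = pi_e + b * (u_star - u_held)  # +0.5 each period
    path.append(pi)
    pi_e = pi                          # adaptive expectations: pi_e = last pi

print(path)  # [0.5, 1.0, 1.5, 2.0, 2.5]: inflation keeps climbing
```

In the long run, a stable inflation rate requires u = u_star, which is exactly the vertical long-run Phillips curve of the folk story.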

Friday, July 24, 2015

Paul Krugman: The M.I.T. Gang

The MIT school of economics:

The M.I.T. Gang, by Paul Krugman, Commentary, NY Times: Goodbye, Chicago boys. Hello, M.I.T. gang.

If you don’t know what I’m talking about, the term “Chicago boys” was originally used to refer to Latin American economists, trained at the University of Chicago, who took radical free-market ideology back to their home countries. The influence of these economists was part of a broader phenomenon: The 1970s and 1980s were an era of ascendancy for laissez-faire economic ideas and the Chicago school...

But that was a long time ago. Now a different school is in the ascendant, and deservedly so.

It’s actually surprising how little media attention has been given to the dominance of M.I.T.-trained economists in policy positions and policy discourse. But it’s quite remarkable. Ben Bernanke has an M.I.T. Ph.D.; so do Mario Draghi, the president of the European Central Bank, and Olivier Blanchard, the enormously influential chief economist of the International Monetary Fund. Mr. Blanchard is retiring, but his replacement, Maurice Obstfeld, is another M.I.T. guy — and another student of Stanley Fischer, who taught at M.I.T. for many years and is now the Fed’s vice chairman. ...

M.I.T.-trained economists, especially Ph.D.s from the 1970s, play an outsized role ... in policy discussion across the Western world. And yes, I’m part of the same gang.

So what distinguishes M.I.T. economics, and why does it matter? ...

At M.I.T..., Keynes never went away. To be sure, stagflation showed that there were limits to what policy can do. But students continued to learn about the imperfections of markets and the role that monetary and fiscal policy can play in boosting a depressed economy. ...

This open-minded, pragmatic approach was overwhelmingly vindicated after crisis struck in 2008. Chicago-school types warned incessantly that responding to the crisis by printing money and running deficits would lead to 70s-type stagflation, with soaring inflation and interest rates. But M.I.T. types predicted, correctly, that inflation and interest rates would stay low in a depressed economy, and that attempts to slash deficits too soon would deepen the slump. ...

Meanwhile, in the United States, Republicans have responded to the utter failure of free-market orthodoxy and the remarkably successful predictions of much-hated Keynesians by digging in even deeper, determined to learn nothing from experience.

In other words, being right isn’t necessarily enough to change the world. But it’s still better to be right than to be wrong, and M.I.T.-style economics, with its pragmatic openness to evidence, has been very right indeed.

Sunday, July 19, 2015

The Rivals (Samuelson and Friedman)

This is by David Warsh:

The Rivals, Economic Principals: When Keynes died, in April 1946, The Times of London gave him the best farewell since Nelson after Trafalgar: “To find an economist of comparable influence one would have to go back to Adam Smith.” A few years later, Alvin Hansen, of Harvard University, Keynes’ leading disciple in the United States, wrote, “It may be a little too early to claim that, along with Darwin’s Origin of Species and Marx’s Capital, The General Theory is one of the most significant books which have appeared in the last hundred years. … But… it continues to gain in importance.”
In fact, the influence of Keynes’ book, as opposed to the vision of “macroeconomics” at the heart of it, and the penumbra of fame surrounding it, already had begun its downward arc. Civilians continued to read the book, more for its often sparkling prose than for the clarity of its argument. Among economists, intermediaries and translators had emerged in various communities to explain the insights the great man had sought to convey. Speaking of the group in Cambridge, Massachusetts, Robert Solow put it this way, many years later: “We learned not as much from it – it was…almost unreadable – as from a number of explanatory articles that appeared on all our graduate school reading lists.”
Instead it was another book that ushered in an era of economics very different from the age before. Foundations of Economic Analysis, by Paul A. Samuelson, important parts of it written as much as ten years before, appeared in 1947. “Mathematics is a Language,” proclaimed its frontispiece; equations dominated nearly every page. “It might be still too early to tell how the discoveries of the 1930s would pan out,” Samuelson wrote delicately in the introduction, but their value could be ascertained only by expressing them in mathematical models whose properties could be thoroughly explored and tested. “The laborious literary working-over of essentially simple mathematical concepts such as is characteristic of much of modern economic theory is not only unrewarding from the standpoint of advancing the science, but involves as well mental gymnastics of a particularly depraved type.”
Foundations had won a prize as a dissertation, so Harvard University was required to publish it as a book. In Samuelson’s telling, the department chairman had to be forced to agree to printing a thousand copies, dragged his feet, and then permitted its laboriously hand-set plates to be melted down for other uses after 887 copies were run off. Thus Foundations couldn’t be revised in subsequent printings, until a humbled Harvard University Press republished an “enlarged edition” with a new introduction and a mathematical appendix in 1983. When Samuelson biographer Roger Backhouse went through the various archival records, he concluded that the delay could be explained by production difficulties and recycling of the lead type by postwar exigencies at the Press.
It didn’t matter. Within the profession, Samuelson soon would win the day.
The “new” economics that he represented – the earliest developments had commenced in the years after World War I – conquered the profession, high and low. The next year Samuelson published an introductory textbook, Economics, to inculcate the young. Macroeconomic theory was to be put to work to damp the business cycle and, especially, avoid the tragedy of another Great Depression. The new approach swiftly attracted a community away from alternative modes of inquiry, in the expectation that it would yield new solutions to the pressing problem of depression-prevention. Alfred Marshall’s Principles of Economics eventually would be swept completely off the table. Foundations was a paradigm in the Kuhnian sense.
At the very zenith of Samuelson’s success, another sort of book appeared, in 1963, A Monetary History of the United States, 1867-1960, by Milton Friedman and Anna Schwartz, published for the National Bureau of Economic Research. At first glance, the two books had nothing to do with one another. A Monetary History harkened back to approaches that had been displaced by Samuelsonian methods – “hypotheses” instead of theorems; charts instead of models; narrative, not econometric analytics. The volume did little to change the language that Samuelson had established. Indeed, economists at the University of Chicago, Friedman’s stronghold, were on the verge of adapting a new, still-higher mathematical style to the general equilibrium approach that Samuelson had pioneered.
Yet one interpretation of the relationship between the price system and the Daedalean wings that A Monetary History contained was sufficiently striking as to reopen a question thought to have been settled. A chapter of their book, “The Great Contraction,” contained an interpretation of the origins of the Great Depression that gradually came to overshadow the rest. As J. Daniel Hammond has written,
The “Great Contraction” marked a watershed in thinking about the greatest economic calamity in modern times. Until Friedman and Schwartz provoked the interest of economists by rehabilitating monetary history and theory, neither economic theorists nor economic historians devoted as much attention to the Depression as historians.
So you could say that some part of the basic agenda of the next fifty years was ordained by the rivalry that began in the hour that Samuelson and Friedman became aware of each other, perhaps in the autumn of 1932, when both turned up at the recently-completed Social Science Research Building of the University of Chicago, at the bottom of the Great Depression. Excellent historians, with access to extensive archives, have been working on both men’s lives and work: Hammond, of Wake Forest University, has largely completed his project on Friedman; Backhouse, of the University of Birmingham, is finishing a volume on Samuelson’s early years. Neither author has yet come across a frank recollection by either man of those first few meetings. Let’s hope one or more second-hand accounts turn up in the papers of the many men and women who knew them then. When I asked Friedman about their relationship in 2005, he deferred to his wife, who, somewhat uncomfortably, mentioned a differential in privilege. I lacked the temerity to ask Samuelson directly the last couple of times we talked; he clearly didn’t enjoy discussing it.
Biography is no substitute for history, much less for theory and history of thought, and journalism is, at best, only a provisional substitute for biography. But one way of understanding what happened in economics in the twentieth century is to view it as an argument between Samuelson and Friedman that lasted nearly eighty years, until one aspect of it, at least, was resolved by the financial crisis of 2008. The departments of economics they founded in Cambridge and Chicago, headquarters in the long wars between the Keynesians and the monetarists, came to be the Athens and Sparta of their day. ...[continue reading]...

[There is much, much more in the full post.]

Saturday, July 04, 2015

'Stability of a Market Economy'

"The macroeconomy is inherently unstable and ... booms and busts arise endogenously as the results of market incentives":

Stability of a market economy, by Paul Beaudry, Dana Galizia, and Franck Portier, Vox EU: There are two polar views about the functioning of a market economy.
  • On the one hand, there is the view that such a system is inherently stable, with market forces tending to direct the economy to a smooth growth path.

According to such a belief, most of the fluctuations in the macroeconomy result from either individually optimal adjustments to changes in the environment or from improper government interventions. In such a case, the role of macroeconomic policy should be to do no harm; if policymakers hold back from actively influencing the economy, market forces would take care of the rest and foster desirable outcomes.

  • On the other hand, there is the view that the market economy is inherently unstable, and that left to itself it will repeatedly go through periods of socially costly booms and busts, with recurrent periods of sustained high levels of unemployment.

According to this view, macroeconomic policy is needed to help stabilize an unruly system.    

Most modern macroeconomic models, such as those used by large central banks and governments, are somewhere in between these two extremes. However, they are by design much closer to the first view than the second, and this is generally not fully appreciated. In fact, most commonly used macroeconomic models have the feature that, in the absence of outside disturbances, the economy is expected to converge to a stable path. In this sense, these models are based on the premise that a decentralized economy is a stable system and that market forces, in and of themselves, do not tend to produce booms and busts. The only reason we see economic cycles in mainstream macroeconomic models is that outside forces perturb an otherwise stable system. We can call such a framework the stable-with-shocks view of the macroeconomy.
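A toy linear model makes the stable-with-shocks premise concrete (my illustration, not from the Vox column): an AR(1) process fluctuates only as long as shocks keep arriving; shut the shocks off and it glides back to its steady state.

```python
import random

# Illustrative stable-with-shocks "economy": a linear AR(1) process
#     y_t = rho * y_{t-1} + e_t,  with |rho| < 1.
# With shocks e_t it fluctuates forever; without them, any displacement
# from the steady state (here 0) simply decays away.

def simulate(rho=0.9, periods=200, shock_sd=1.0, y0=0.0, seed=0):
    random.seed(seed)
    y, path = y0, []
    for _ in range(periods):
        e = random.gauss(0, shock_sd) if shock_sd > 0 else 0.0
        y = rho * y + e
        path.append(y)
    return path

with_shocks = simulate()                    # keeps fluctuating all 200 periods
no_shocks = simulate(shock_sd=0.0, y0=5.0)  # displaced once, then converges
print(round(no_shocks[-1], 6))  # 0.0: the cycle needs shocks to persist
```

In this framework, "business cycles" are entirely the footprint of the outside disturbances; the system itself always pulls toward the stable path.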

Stable-with-shocks view of the macroeconomy

There are many reasons why the economic profession has mostly adopted the stable-with-shocks view of macroeconomic fluctuations.

  • First, if we take a step back, and look at aggregate economic outcomes over long periods of time (say 100 years), the most striking feature is the stable growth path (see Figure 1).

Disregarding the two world wars, although the economy fluctuated, these fluctuations were small in comparison to the growth path. In particular, when looking over such long periods, it becomes clear that the economy looks more like a globally stable system than an unstable system.

  • Secondly, a huge fraction of economic theory suggests that market forces will favor stable outcomes. 
  • Thirdly, the stable-with-shocks framework is very tractable and flexible, allowing one to analyze economic outcomes using linear techniques.

Figure 1. Long-run evolution of GDP per capita


Source: Bolt and van Zanden (2014).

Figure 2. Unemployment rates


Source: FRED, Federal Reserve Bank of St. Louis.

Notwithstanding these attractive features of the stable-with-shocks view of the macroeconomy, the ubiquitous and recurrent nature of cycles in most market economies, as illustrated by the fluctuations of unemployment rates (see Figure 2), strongly suggests that a market economy, by its very nature, may create recurrent booms and busts independently of outside disturbances. This idea is well captured by the statement that “a bust sows the seed of the next boom”. Although such an idea has a long tradition in the economics literature (Kalecki 1937, Kaldor 1940, Hicks 1950, Goodwin 1951), it is not present in most modern macro-models.

Capturing economic fluctuations: New framework

In a companion paper (Beaudry et al. 2015), we have developed and explored an empirical framework that allows one to examine whether economic fluctuations may best be captured by the stable-with-shocks type framework or whether they may be better characterized as reflecting some sort of instability. To examine such an issue, one needs to depart from the preponderant convention in macroeconomics of focusing on linear models to analyze outcomes. A frequent criticism of macro-modelling, mostly from non-mainstream macroeconomists, is that the profession’s focus on linear models may have substantially biased our understanding of how the economy actually functions. As Blanchard (2014) writes, “We in the field did think of the economy as roughly linear, constantly subject to different shocks, constantly fluctuating, but naturally returning to its steady state over time.”

Within a linear set-up, a dynamic system is either stable or unstable. In contrast, in a non-linear setup, a system can be globally stable while simultaneously being locally unstable. It is this latter characteristic that has the potential to be relevant in macroeconomics given that in the long run the economy appears rather stable, while in the short run it exhibits substantial volatility. By looking at the economy through a lens that allows for the possibility of non-linear dynamics, one is de facto permitting an interpretation of the economic fluctuations where endogenous cyclical behavior or even chaos may emerge; both features that are well known to arise in many dynamic environments. In other words, by looking at the economy using non-linear techniques we can ask whether market forces are tending to favor recurrent booms and busts, or whether they favor stability.

Our main finding is that, instead of favoring the conventional stable-with-shocks view for aggregate dynamics, our results suggest that the macroeconomy is inherently unstable and that booms and busts arise endogenously as the results of market incentives.

In fact, we found that for the US economy, market forces tend in and of themselves to generate a cycle that lasts about eight years. However, these cycles are not regular or identical over time. Instead, outside forces play an important role in accelerating, amplifying, and postponing the forces that create cycles.
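A hypothetical non-linear toy model (not the authors' actual model) illustrates what "globally stable but locally unstable" means: the steady state repels nearby trajectories, yet everything is drawn to a bounded limit cycle, so the system cycles forever without any outside shocks.

```python
import math

# Toy limit-cycle system in polar coordinates; the radius follows
#     r_{t+1} = r_t + a * r_t * (1 - r_t**2)
# The steady state r = 0 is locally unstable (small r grows), but every
# trajectory starting away from it converges to the limit cycle r = 1,
# so the system keeps cycling with no outside disturbances at all.

def step(r, theta, a=0.2, omega=2 * math.pi / 8):  # arbitrary 8-period rotation
    return r + a * r * (1 - r ** 2), theta + omega

r, theta = 0.01, 0.0  # start just off the steady state
for _ in range(500):
    r, theta = step(r, theta)

print(round(r, 4))  # 1.0: drawn to the cycle, not back to r = 0
```

A linear model can only be stable or unstable everywhere; it is the non-linearity that lets local instability coexist with the long-run global stability visible in Figure 1.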

What causes business cycles?

So what causes the economy to be unstable and exhibit business cycles? According to our analysis, this results from simple incentives that favour the coordination of behavior across households. In particular, in a market economy where individuals face unemployment risk, households have an incentive to buy housing and durable goods at similar times. The reason for these coordinated purchases is that when others are making large purchases, this reduces unemployment; then when unemployment is low, it is a less risky time to make large purchases since taking on debt is easier. However, let us emphasize that we are not finding that business cycles are driven primarily by animal spirits.

  • Instead, we are arguing that business cycles are driven by individually rational, but socially costly, mass behavior based on fundamentals.
  • In our view, the recovery phase of a cycle starts when the stock of housing and durables has been depleted enough to lead some people to go out and make new purchases even if unemployment is still high.

This incites others to do the same, which eventually sustains the recovery and leads to a boom. Interestingly, the boom does not stop when people have the ‘right’ stock of goods; households instead over-shoot, because the boom period is a good time to buy even knowing that a recession will eventually come. Once households have sufficiently over-accumulated, they will stop purchasing en masse, knowing that others are also stopping and knowing that they can wait out a recession while benefiting for some time from the services of housing and durables bought during the expansion. The expansion therefore ends and a recession begins. Once this stock of goods is again sufficiently depleted, the cycle will restart. Stated this way, business cycles appear very deterministic. However, there are always other developments in the economy that interact with this consumer cycle to create unique features. For example, the consumer cycle generally competes with forces affecting business investment, thereby causing the length and duration of a cycle to be affected by technological developments driving firm investment.

Concluding remarks

But why should we care whether the macroeconomy is locally unstable or locally stable? Society’s understanding of how the economy functions, especially of what creates business cycles, greatly affects how we design stabilization policy.

In the current dominant paradigm, there is a tendency to see monetary policy as the central tool for mitigating the business cycle. This view makes sense if excessive macroeconomic fluctuations reflect mainly the slow adjustment of wages and prices to outside disturbances within an otherwise stable system. 

However, if the system is inherently unstable and exhibits forces that favor recurrent booms and busts at intervals of about seven to ten years, then it is much less likely that monetary policy is the right tool for addressing macroeconomic fluctuations. Instead, in such a case we are likely to need policies aimed at changing the incentives that lead households to bunch their purchasing behavior in the first place.

References

Beaudry, P, D Galizia, and F Portier (2015), “Reviving the Limit Cycle View of Macroeconomic Fluctuations”, CEPR Discussion Paper 10645 and NBER Working Paper 21241.

Blanchard, O J (2014), “Where Danger Lurks”, Finance & Development, 51(3), 28-31.

Bolt, J and J L van Zanden (2014), “The Maddison Project: collaborative research on historical national accounts”, The Economic History Review, 67 (3): 627–651.

Goodwin, R (1951): “The Nonlinear Accelerator and the Persistence of Business Cycles”, Econometrica, 19(1), 1–17.

Hicks, J (1950), A Contribution to the Theory of the Trade Cycle, Clarendon Press, Oxford.

Kaldor, N (1940), “A Model of the Trade Cycle”, The Economic Journal, 50(197), 78–92.

Kalecki, M (1937), “A Theory of the Business Cycle”, The Review of Economic Studies, 4(2), 77–97.

Sunday, June 14, 2015

'What Assumptions Matter for Growth Theory?'

Dietz Vollrath explains the "mathiness" debate (and also Euler's theorem in a part of the post I left out). Glad he's interpreting Romer -- it's very helpful:

What Assumptions Matter for Growth Theory?: The whole “mathiness” debate that Paul Romer started tumbled onwards this week... I was able to get a little clarity in this whole “price-taking” versus “market power” part of the debate. I’ll circle back to the actual “mathiness” issue at the end of the post.
There are really two questions we are dealing with here. First, do inputs to production earn their marginal product? Second, do the owners of non-rival ideas have market power or not? We can answer the first without having to answer the second.
Just to refresh, a production function tells us that output is determined by some combination of non-rival inputs and rival inputs. Non-rival inputs are things like ideas that can be used by many firms or people at once without limiting the use by others. Think of blueprints. Rival inputs are things that can only be used by one person or firm at a time. Think of nails. The income earned by both rival and non-rival inputs has to add up to total output.
Okay, given all that setup, here are three statements that could be true.
  1. Output is constant returns to scale in rival inputs
  2. Non-rival inputs receive some portion of output
  3. Rival inputs receive output equal to their marginal product
Pick two.
Romer’s argument is that (1) and (2) are true. (1) he asserts through replication arguments, like my example of replicating Earth. (2) he takes as an empirical fact. Therefore, (3) cannot be true. If the owners of non-rival inputs are compensated in any way, then it is necessarily true that rival inputs earn less than their marginal product. Notice that I don’t need to say anything about how the non-rival inputs are compensated here. But if they earn anything, then from Romer’s assumptions the rival inputs cannot be earning their marginal product.
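Romer's replication logic leans on Euler's theorem, which a quick Cobb-Douglas check (my sketch, with arbitrary numbers, not from Vollrath's post) makes concrete: if output is constant returns to scale in rival inputs and each rival input is paid its marginal product, those payments exhaust output exactly, leaving nothing for the non-rival input.

```python
# Illustrative Cobb-Douglas check of Euler's theorem: with constant
# returns to scale in the rival inputs K and L, paying each its
# marginal product exhausts output exactly, so statements (1) and (3)
# together force (2) to fail.

A, alpha = 2.0, 0.3   # A plays the role of the non-rival idea stock
K, L = 100.0, 50.0    # rival inputs

Y = A * K ** alpha * L ** (1 - alpha)
MPK = alpha * Y / K        # marginal product of capital
MPL = (1 - alpha) * Y / L  # marginal product of labor

payments_to_rival_inputs = MPK * K + MPL * L
residual_for_ideas = Y - payments_to_rival_inputs
print(abs(residual_for_ideas) < 1e-9)  # True: nothing left for ideas
```

The same arithmetic holds for any constant-returns production function, which is why accepting (1) and (2) forces one to give up (3).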
Different authors have made different choices than Romer. McGrattan and Prescott abandoned (1) in favor of (2) and (3). Boldrin and Levine dropped (2) and accepted (1) and (3). Romer’s issue with these papers is that (1) and (2) are clearly true, so writing down a model that abandons one of these assumptions gives you a model that makes no sense in describing growth. ...
The “mathiness” comes from authors trying to elide the fact that they are abandoning (1) or (2). ...

[There's a lot more in the full post. Also, Romer comments on Vollrath here.]

Tuesday, June 09, 2015

'What is it about German Economics?'

Can you help Simon Wren-Lewis figure this out?:

What is it about German economics?: ...Keynesian ideas are pretty mainstream elsewhere...: why does macroeconomics in Germany seem to be an outlier? Given the damage done by austerity in the Eurozone, and the central role that the views of German policy makers have played in that, this is a question I have asked for many years. The textbooks used to teach macroeconomics in Germany seem to be as Keynesian as elsewhere, yet Peter Bofinger is the only Keynesian on their Council of Economic Experts, and he confirmed to me how much this minority status is typical. [1]
There are two explanations that are popular outside Germany that I now think on their own are inadequate. The first is that Germany is preoccupied by inflation as a result of the hyperinflation of the Weimar republic, and that this spills over into their attitude to government debt. (The recession of the 1930s helped create a more serious disaster, and here is a provocative account of why the memory of hyperinflation dominates.) A second idea is that Germans are culturally debt averse, and people normally note that the German for debt is also their word for guilt. The trouble with both stories is that they imply that German government debt should be much lower than in other countries, but it is not. (In 2000, the German government’s net financial liabilities as a percentage of GDP were at the same level as France, and slightly above the UK and US.) ...
It is as if in some respects economic thinking in Germany has not moved on since the 1970s: Keynesian ideas are still viewed as anti-market rather than correcting market failure...
One of the distinctive characteristics of the German economy, co-determination, appears to be very far from neoliberalism: the importance of workers organisations in management, and more generally the recognition that unions play an important role in the economy. Yet I wonder whether this may have had an unintended consequence: the polarisation and politicisation of economic policy advice. ... If conflict over wages is institutionalised at the national level, perhaps the influence of ideology on economic policy - in so far as it influences that conflict (see footnote [1]) - is bound to be greater. 
As you can see, I remain some way from answering the question posed in the title of this post, but I think I’m a bit further forward than I was.

Saturday, June 06, 2015

'A Crisis at the Edge of Physics'

Seems like much the same can be said about modern macroeconomics (except perhaps the "given the field its credibility" part):

A Crisis at the Edge of Physics, by Adam Frank and Marcelo Gleiser, NY Times: Do physicists need empirical evidence to confirm their theories?
You may think that the answer is an obvious yes, experimental confirmation being the very heart of science. But a growing controversy at the frontiers of physics and cosmology suggests that the situation is not so simple.
A few months ago in the journal Nature, two leading researchers, George Ellis and Joseph Silk, published a controversial piece called “Scientific Method: Defend the Integrity of Physics.” They criticized a newfound willingness among some scientists to explicitly set aside the need for experimental confirmation of today’s most ambitious cosmic theories — so long as those theories are “sufficiently elegant and explanatory.” Despite working at the cutting edge of knowledge, such scientists are, for Professors Ellis and Silk, “breaking with centuries of philosophical tradition of defining scientific knowledge as empirical.”
Whether or not you agree with them, the professors have identified a mounting concern in fundamental physics: Today, our most ambitious science can seem at odds with the empirical methodology that has historically given the field its credibility. ...

'Views Differ on Shape of Macroeconomics'

Paul Krugman:

Views Differ on Shape of Macroeconomics: The doctrine of expansionary austerity ... was immensely popular among policymakers in 2010, as the great turn toward austerity began. But the statistical underpinnings of the doctrine fell apart under scrutiny... So at this point research economists overwhelmingly believe that austerity is contractionary (and that stimulus is expansionary). ...

Nonetheless, Simon Wren-Lewis points us to Robert Peston of the BBC declaring

I am simply pointing out that there is a debate here (though Krugman, Wren-Lewis and Portes are utterly persuaded they’ve won this match – and take the somewhat patronising view that voters who think differently are ignorant sheep led astray by a malign or blinkered media).

Wow. Yes, I suppose that “there is a debate” — there are debates about lots of things, from climate change to evolution to alien spaceships hidden in Area 51. But to suggest that this debate is at all symmetric is just wrong — and deeply misleading to one’s audience.

As for the claim that it’s somehow patronizing to suggest that voters are ill-informed when (a) macroeconomics is a technical subject, and (b) the media have indeed misreported the state of the professional debate — well, this is sort of an economic version of the line that one must not suggest that the Iraq war was launched on false pretenses, because this would be disrespectful to the troops. If you’re being accused of misleading reporting, it’s hardly a defense to say that the public believed your misinformation — more like a self-indictment. ...

Wednesday, June 03, 2015

'Coordination Equilibrium and Price Stickiness'

This is the introduction to a relatively new working paper by Cigdem Gizem Korpeoglu and Stephen Spear (sent in response to my comment that I've been disappointed with the development of new alternatives to the standard NK-DSGE models):

Coordination Equilibrium and Price Stickiness, by Cigdem Gizem Korpeoglu (University College London) and Stephen E. Spear (Carnegie Mellon): 1 Introduction Contemporary macroeconomic theory rests on the three pillars of imperfect competition, nominal price rigidity, and strategic complementarity. Of these three, nominal price rigidity (aka price stickiness) has been the most important. The stickiness of prices is a well-established empirical fact, with early observations about the phenomenon going back to Alfred Marshall. Because the friction of price stickiness cannot occur in markets with perfect competition, modern micro-founded models (New Keynesian or NK models, for short) have been forced to abandon the standard Arrow-Debreu paradigm of perfect competition in favor of models where agents have market power and set market prices for their own goods. Strategic complementarity enters the picture as a mechanism for explaining the kinds of coordination failures that lead to sustained slumps like the Great Depression or the aftermath of the 2008 financial crisis. Early work by Cooper and John laid out the importance of these three features for macroeconomics, and follow-on work by Ball and Romer showed that failure to coordinate on price adjustments could itself generate strategic complementarity, effectively unifying two of the three pillars.
Not surprisingly, the Ball and Romer work was based on earlier work by a number of authors (see Mankiw and Romer's New Keynesian Economics) which used the model of Dixit and Stiglitz of monopolistic competition as the basis for price-setting behavior in a general equilibrium setting, combined with the idea of menu costs -- literally the cost of posting and communicating price changes -- and exogenously-specified adjustment time staggering to provide the friction(s) leading to nominal rigidity. While these models perform well in explaining aspects of the business cycle, they have only recently been subjected to what one would characterize as thorough empirical testing, because of the scarcity of good data on how prices actually change. This has changed in the past decade as new sources of data on price dynamics have become available, and as computational power capable of teasing out what might be called the "fine structure" of these dynamics has emerged. On a different dimension, the overall suitability of monopolistic competition as the appropriate form of market imperfection to use as the foundation of the new macro models has been largely unquestioned, though we believe this is largely due to the tractability of the Dixit-Stiglitz model relative to other models of imperfect competition generated by large fixed costs or increasing returns to scale not due to specialization.
In this paper, we examine both of these underlying assumptions in light of what the new empirics on pricing dynamics have found, and propose a different, and, we believe, better microfoundation for New Keynesian macroeconomics based on the Shapley-Shubik market game.

Krugman vs. DeLong

Krugman vs. DeLong has an outcome that follows DeLong's rule:

Krugman: The Inflationista Puzzle: Martin Feldstein has a new column on what he calls the “inflation puzzle” — the failure of inflation to soar despite the Fed’s large asset purchases, which led to a very large rise in the monetary base. As Tony Yates points out, however, there’s nothing puzzling at all about what happened; it’s exactly what you should expect when interest rates are near zero.
And this isn’t an ex-post rationale, it’s what many of us were saying from the beginning. Traditional IS-LM analysis said that the Fed’s policies would have little effect on inflation; so did the translation of that analysis into a stripped-down New Keynesian framework that I did back in 1998, starting the modern liquidity-trap literature. ...
DeLong: New Economic Thinking, Hicks-Hansen-Wicksell Macro, and Blocking the Back Propagation Induction-Unraveling from the Long Run Omega Point: ... Whatever may be going on in the short run must thus be transitory in duration, moderate in its effects, and limited in the distance it can push the economy away from its proper long run equilibrium. And it certainly cannot keep it there. Not for long.
This is the real critique of Paul Krugman’s “depression economics”. Paul can draw his Hicksian IS-LM diagrams of an economy stuck in a liquidity trap...
He can draw his Wicksellian I=S diagrams of how the zero lower bound forces the market interest rate above the natural interest rate at which planned investment balances savings that would be expected were the economy at full employment...
Paul can show, graphically, that conventional monetary policy is then completely ineffective–swapping two assets that are perfect substitutes for each other. Paul can show, graphically, that expansionary fiscal policy is then immensely powerful and has no downside: it does not generate higher interest rates; it does not crowd out productive private investment; and, because interest rates are zero, it entails no financing burden and thus no required increase in future tax wedges. But all this is constrained and limited by the inescapable and powerful logic of the induction-unraveling propagating itself back through the game tree from the Omega Point that is the long run equilibrium. In the IS-LM diagram, the fact that the long run is out there means that even the contemplation of permanent expansion of the monetary base is rapidly moving the IS curve up and to the right, and thus leading the economy to quickly exit the liquidity trap. In the Wicksellian I=S diagram, the fact that the long run is out there means that even the contemplation of permanent expansion of the monetary base is rapidly moving the I=S curve up so that the zero lower bound will soon no longer constrain the economy away from its full-employment equilibrium.
The “depression economics” equilibrium Paul plots on his graph is a matter for today–a month or two, or a quarter or two, or at most a year or two. ...
Krugman: Backward Induction and Brad DeLong (Wonkish): Brad DeLong is, unusually, unhappy with my analysis in a discussion of the inflationista puzzle — the mystery of why so many economists failed to grasp the implications of a liquidity trap, and still fail to grasp those implications despite 6 years of being wrong. Brad sorta-kinda defends the inflationistas on the basis of backward induction; I find myself somewhat baffled by that defense.

Actually, I find myself baffled both theoretically and empirically. ...

In the end, while the post-2008 slump has gone on much longer than even I expected (thanks in part to terrible fiscal policy), and the downward stickiness of wages and prices has been more marked than I imagined, overall the model those of us who paid attention to Japan deployed has done pretty well — and it’s kind of shocking how few of those who got everything wrong are willing to learn from their failure and our success.
DeLong: Paul Krugman Was Right. I, Ken Rogoff, Marty Feldstein, and Many, Many Others Were Wrong: The question is: Why were we wrong? We had, after all, read, learned, and taught the same Hicks-Hansen-Wicksell-Metzler-Tobin macro that was Paul Krugman’s foundation. ...

I want to highlight one of Brad's points. Theoretical models often act as if there is only one type of demand shock, and the short-run depends upon a single variable, e.g. the time period when inflation expectations are wrong. But the short-run depends upon the type of recession we experience, and the variable that signals the length of the recovery will not be the same in every case. A monetary induced recession will have a much shorter short-run than a balance sheet recession induced by a financial collapse, and a recession caused by an oil price shock will recover differently from both. Early in the Great Recession, policymakers, analysts, and most economists did not fully recognize that this recession truly was different, and hence required a different policy approach from the recessions in recent memory. Krugman, due to his work on Japan, did see this early on, but it took time for the notion of a balance sheet recession to take hold, and we never fully adapted fiscal policy to deal with this fact (e.g. sufficient help with rebuilding household balance sheets). To me this is one of the big lessons of the Great Recession -- we must figure out the type of recession we are experiencing, realize that the "short-run" will depend critically on the type of shock causing the recession, and adapt our policies accordingly. If we can do that, then maybe the short-run won't be a decade long the next time we have a balance sheet recession. And there will be a next time.

Monday, June 01, 2015

'The Case of the Missing Minsky'

Paul Krugman says I'm not upbeat enough about the state of macroeconomics:

The Case of the Missing Minsky: Gavyn Davies has a good summary of the recent IMF conference on rethinking macro; Mark Thoma has further thoughts. Thoma in particular is disappointed that there hasn’t been more of a change, decrying

the arrogance that asserts that we have little to learn about theory or policy from the economists who wrote during and after the Great Depression.

Maybe surprisingly, I’m a bit more upbeat than either. Of course there are economists, and whole departments, that have learned nothing, and remain wholly dominated by mathiness. But it seems to me that economists have done OK on two of the big three questions raised by the economic crisis. What are these three questions? I’m glad you asked. ...[continue]...

Sunday, May 31, 2015

'Has the Rethinking of Macroeconomic Policy Been Successful?'

The beginning of a long discussion from Gavyn Davies:

Has the rethinking of macroeconomic policy been successful?: The great financial crash of 2008 was expected to lead to a fundamental re-thinking of macro-economics, perhaps leading to a profound shift in the mainstream approach to fiscal, monetary and international policy. That is what happened after the 1929 crash and the Great Depression, though it was not until 1936 that the outline of the new orthodoxy appeared in the shape of Keynes’ General Theory. It was another decade or more before a simplified version of Keynes was routinely taught in American university economics classes. The wheels of intellectual change, though profound in retrospect, can grind fairly slowly.
Seven years after the 2008 crash, there is relatively little sign of a major transformation in the mainstream macro-economic theory that is used, for example, by most central banks. The “DSGE” (mainly New Keynesian) framework remains the basic workhorse, even though it singularly failed to predict the crash. Economists have been busy adding a more realistic financial sector to the structure of the model [1], but labour and product markets, the heart of the productive economy, remain largely untouched.
What about macro-economic policy? Here major changes have already been implemented, notably in banking regulation, macro-prudential policy and most importantly the use of the central bank balance sheet as an independent instrument of monetary policy. In these areas, policy-makers have acted well in advance of macro-economic researchers, who have been struggling to catch up. ...

There has been more progress on the theoretical front than I expected, particularly in adding financial sector frictions to the NK-DSGE framework and in overcoming the restrictions imposed by the representative agent model. At the same time, there has been less progress than I expected in developing alternatives to the standard models. As far as I can tell, a serious challenge to the standard model has not yet appeared. My biggest disappointment is how much resistance there has been to the idea that we need to even try to find alternative modeling structures that might do better than those in use now, and the arrogance that asserts that we have little to learn about theory or policy from the economists who wrote during and after the Great Depression.

Sunday, May 17, 2015

'Blaming Keynes'

Simon Wren-Lewis:

Blaming Keynes: A few people have asked me to respond to this FT piece from Niall Ferguson. I was reluctant to, because it is really just a bit of triumphalist Tory tosh. That such things get published in the Financial Times is unfortunate but I’m afraid not surprising in this case. However I want to write later about something else that made reference to it, so saying a few things here first might be useful.
The most important point concerns style. This is not the kind of thing an academic should want to write. It makes no attempt to be true to evidence, and just cherry picks numbers to support its argument. I know a small number of academics think they can drop their normal standards when it comes to writing political propaganda, but I think they are wrong to do so. ...

'Ed Prescott is No Robert Solow, No Gary Becker'

Paul Romer continues his assault on "mathiness":

Ed Prescott is No Robert Solow, No Gary Becker: In his comment on my Mathiness paper, Noah Smith asks for more evidence that the theory in the McGrattan-Prescott paper that I cite is any worse than the theory I compare it to by Robert Solow and Gary Becker. I agree with Brad DeLong’s defense of the Solow model. I’ll elaborate, by using the familiar analogy that theory is to the world as a map is to terrain.

There is no such thing as the perfect map. This does not mean that the incoherent scribblings of McGrattan and Prescott are on a par with the coherent, low-resolution Solow map that is so simple that all economists have memorized it. Nor with the Becker map that has become part of the everyday mental model of people inside and outside of economics.

Noah also notes that I go into more detail about the problems in the Lucas and Moll (2014) paper. Just to be clear, this is not because it is worse than the papers by McGrattan and Prescott or Boldrin and Levine. Honestly, I’d be hard pressed to say which is the worst. They all display the sloppy mixture of words and symbols that I’m calling mathiness. Each is awful in its own special way.

What should worry economists is the pattern, not any one of these papers. And our response. Why do we seem resigned to tolerating papers like this? What cumulative harm are they doing?

The resignation is why I conjectured that we are stuck in a lemons equilibrium in the market for mathematical theory. Noah’s jaded question–Is the theory of McGrattan-Prescott really any worse than the theory of Solow and Becker?–may be indicative of what many economists feel after years of being bullied by bad theory. And as I note in the paper, this resignation may be why empirically minded economists like Piketty and Zucman stay as far away from theory as possible. ...

[He goes on to give more details using examples from the papers.]

Friday, May 15, 2015

'Mathiness in the Theory of Economic Growth'

Paul Romer:

My Paper “Mathiness in the Theory of Economic Growth”: I have a new paper in the Papers and Proceedings Volume of the AER that is out in print and on the AER website. A short version of the supporting appendix is available here. It should eventually be available on the AER website but has not been posted yet. A longer version with more details behind the calculations is available here.

The point of the paper is that if we want economics to be a science, we have to recognize that it is not ok for macroeconomists to hole up in separate camps, one that supports its version of the geocentric model of the solar system and another that supports the heliocentric model. As scientists, we have to hold ourselves to a standard that requires us to reach a consensus about which model is right, and then to move on to other questions.

The alternative to science is academic politics, where persistent disagreement is encouraged as a way to create distinctive sub-group identities.

The usual way to protect a scientific discussion from the factionalism of academic politics is to exclude people who opt out of the norms of science. The challenge lies in knowing how to identify them.

From my paper:

The style that I am calling mathiness lets academic politics masquerade as science. Like mathematical theory, mathiness uses a mixture of words and symbols, but instead of making tight links, it leaves ample room for slippage between statements in natural versus formal language and between statements with theoretical as opposed to empirical content.

Persistent disagreement is a sign that some of the participants in a discussion are not committed to the norms of science. Mathiness is a symptom of this deeper problem, but one that is particularly damaging because it can generate a broad backlash against the genuine mathematical theory that it mimics. If the participants in a discussion are committed to science, mathematical theory can encourage a unique clarity and precision in both reasoning and communication. It would be a serious setback for our discipline if economists lose their commitment to careful mathematical reasoning.

I focus on mathiness in growth models because growth is the field I know best, one that gave me a chance to observe closely the behavior I describe. ...

The goal in starting this discussion is to ensure that economics is a science that makes progress toward truth. ... Science is the most important human accomplishment. An investment in science can offer a higher social rate of return than any other investment a person can make. It would be tragic if economists did not stay current on the periodic maintenance needed to protect our shared norms of science from infection by the norms of politics.

[I cut quite a bit -- see the full post for more.]

Saturday, May 02, 2015

'Needed: New Economic Frameworks for a Disappointing New Normal'

Brad DeLong ends a post on the need for "New Economic Frameworks for a Disappointing New Normal" with:

... Our government, here in the U.S. at least, has been starved of proper funding for infrastructure of all kinds since the election of Ronald Reagan. Our confidence in our institutions’ ability to manage aggregate demand properly is in shreds–and for the good reason of demonstrated incompetence and large-scale failure. Our political system now has a bias toward austerity and idle potential workers rather than toward expansion and inflation. Our political system now has a bias away from desirable borrow-and-invest. And the equity return premium is back to immediate post-Great Depression levels–and we also have an enormous and costly hypertrophy of the financial sector that is, as best as we can tell, delivering no social value in exchange for its extra size.
We badly need a new framework for thinking about policy-relevant macroeconomics given that our new normal is as different from the late-1970s as that era’s normal was different from the 1920s, and as that era’s normal was different from the 1870s.
But I do not have one to offer.

Friday, April 24, 2015

'Unit Roots, Redux'

John Cochrane weighs in on the discussion of unit roots:

Unit roots, redux: Arnold Kling's askblog and Roger Farmer have a little exchange on GDP and unit roots. My two cents here.
I did a lot of work on this topic a long time ago, in "How Big is the Random Walk in GNP?" (the first one), "Permanent and Transitory Components of GNP and Stock Prices" (the last, and I think best one), "Multivariate estimates" with Argia Sbordone, and "A critique of the application of unit root tests," particularly appropriate to Roger's battery of tests.
The conclusions, which I still think hold up today:
Log GDP has both random walk and stationary components. Consumption is a pretty good indicator of the random walk component. This is also what the standard stochastic growth model predicts: a random walk technology shock induces a random walk component in output but there are transitory dynamics around that value.
A linear trend in GDP is only visible ex-post, like a "bull" or "bear" market.  It's not "wrong" to detrend GDP, but it is wrong to forecast that GDP will return to the linear trend or to take too seriously correlations of linearly detrended series, as Arnold mentions. Treating macro series as cointegrated with one common trend is a better idea.
Log stock prices have random walk and stationary components. Dividends are a pretty good indicator of the random walk component. (Most recently, here.) ...
Both Arnold and Roger claim that unemployment has a unit root. Guys, you must be kidding. ...

He goes on to explain.
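Cochrane's first two points can be illustrated with a short simulation. The sketch below is illustrative only (the AR(1) coefficient, shock variances, and sample length are invented for the example, not taken from any of the papers above): it builds a series from a random-walk component plus a stationary component, then fits a linear trend ex post. The residuals look mean-reverting in-sample by construction, but the fitted line has no forecasting content, because the random-walk component never returns to it.

```python
import numpy as np

rng = np.random.default_rng(0)
T = 400

# Random-walk ("permanent") component: shocks accumulate forever.
permanent = np.cumsum(rng.normal(0.0, 1.0, T))

# Stationary ("transitory") component: AR(1) dynamics that die out.
transitory = np.zeros(T)
for t in range(1, T):
    transitory[t] = 0.8 * transitory[t - 1] + rng.normal(0.0, 1.0)

log_gdp = permanent + transitory

# Fit a linear trend ex post and detrend.
t_idx = np.arange(T)
slope, intercept = np.polyfit(t_idx, log_gdp, 1)
detrended = log_gdp - (intercept + slope * t_idx)

# The detrended series has zero mean in-sample by construction, so it
# "looks" mean-reverting; but out of sample the best forecast is the
# current level of the random walk, not a return to the fitted line.
print(detrended.mean(), detrended.std())
```

The in-sample trend here is exactly the "only visible ex-post" object Cochrane describes: regressing on time always produces zero-mean residuals, whatever the true process.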

Tuesday, April 21, 2015

'Rethinking Macroeconomic Policy'

Olivier Blanchard at Vox EU:

Rethinking macroeconomic policy: Introduction, by Olivier Blanchard: On 15 and 16 April 2015, the IMF hosted the third conference on “Rethinking Macroeconomic Policy”. I had initially chosen as the title and subtitle “Rethinking Macroeconomic Policy III. Down in the trenches”. I thought of the first conference in 2011 as having identified the main failings of previous policies, the second conference in 2013 as having identified general directions, and this conference as a progress report.
My subtitle was rejected by one of the co-organisers, namely Larry Summers. He argued that I was far too optimistic, that we were nowhere close to knowing where we were going. Arguing with Larry is tough, so I chose an agnostic title, and shifted to “Rethinking Macro Policy III. Progress or confusion?”
Where do I think we are today? I think both Larry and I are right. I do not say this for diplomatic reasons. We are indeed proceeding in the trenches. But where the trenches are eventually going remains unclear. This is the theme I shall develop in my remarks, focusing on macroprudential tools, monetary policy, and fiscal policy.

Continue reading "'Rethinking Macroeconomic Policy'" »

Saturday, April 18, 2015

NBER Annual Conference on Macroeconomics: Abstracts for Day Two

First paper:

Declining Desire to Work and Downward Trends in Unemployment and Participation, by Regis Barnichon and Andrew Figura: Abstract The US labor market has witnessed two apparently unrelated trends in the last 30 years: a decline in unemployment between the early 1980s and the early 2000s, and a decline in labor force participation since the early 2000s. We show that a substantial factor behind both trends is a decline in desire to work among individuals outside the labor force, with a particularly strong decline during the second half of the 90s. A decline in desire to work lowers both the unemployment rate and the participation rate, because a nonparticipant who wants to work has a high probability to join the unemployment pool in the future, while a nonparticipant who does not want to work has a low probability to ever enter the labor force. We use cross-sectional variation to estimate a model of non-participants' propensity to want a job, and we find that changes in the provision of welfare and social insurance, possibly linked to the mid-90s welfare reforms, explain about 50 percent of the decline in desire to work.
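The mechanism in the abstract (a nonparticipant who wants a job is far more likely to enter unemployment than one who does not) can be sketched as a four-state Markov chain. Every transition probability below is invented for illustration, not an estimate from the paper; the point is only directional: shifting nonparticipants from "wants a job" to "does not want a job" lowers the steady-state unemployment rate and the participation rate at the same time.

```python
import numpy as np

def steady_state(p_gain_desire, p_lose_desire):
    """Stationary distribution over [E, U, Nw, Nn]: employed, unemployed,
    out of the labor force but wants a job, out and does not want a job.
    All transition probabilities are illustrative."""
    P = np.array([
        # to:  E     U     Nw                    Nn
        [0.96, 0.02, 0.01,                 0.01],                 # from E
        [0.30, 0.55, 0.10,                 0.05],                 # from U
        [0.05, 0.25, 0.70 - p_lose_desire, p_lose_desire],        # from Nw
        [0.01, 0.01, p_gain_desire,        0.98 - p_gain_desire], # from Nn
    ])
    pi = np.full(4, 0.25)
    for _ in range(2000):          # power iteration to the stationary dist.
        pi = pi @ P
    e, u, nw, nn = pi
    return u / (e + u), e + u      # unemployment rate, participation rate

# More desire to work vs. a decline in desire to work.
u_hi, part_hi = steady_state(p_gain_desire=0.10, p_lose_desire=0.05)
u_lo, part_lo = steady_state(p_gain_desire=0.02, p_lose_desire=0.10)

print(u_hi, part_hi)  # higher-desire economy
print(u_lo, part_lo)  # lower-desire economy: both rates fall
```

The direction comes from composition: the "wants a job" pool feeds unemployment heavily, so shrinking it cuts the inflow to unemployment more than the inflow to employment.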

Second paper:

External and Public Debt Crises, by Cristina Arellano, Andrew Atkeson, and Mark Wright: Abstract In recent years, the members of two advanced monetary and economic unions -- the nations of the Eurozone and the states of the United States of America -- experienced debt crises with spreads on government borrowing rising dramatically. Despite the similar behavior of spreads on public debt, these crises were fundamentally different in nature. In Europe, the crisis occurred after a period of significant increases in government indebtedness from levels that were already substantial, whereas in the USA state government borrowing was limited and remained roughly unchanged. Moreover, whereas the most troubled nations of Europe experienced a sudden stop in private capital flows and private sector borrowers also faced large rises in spreads, there is little evidence that private borrowing in US states was differentially affected by the creditworthiness of state governments. In this sense, we can say that the US States experienced a public debt crisis, whereas the nations of Europe experienced an external debt crisis affecting both public and private borrowers. Why did Europe experience an external debt crisis and the US States only a public debt crisis? And, why did the members of other economic unions, such as the provinces of Canada, not experience a debt crisis at all despite high and rising provincial public debt levels? In this paper, we construct a model of default on domestic and external public debt and interference in private external debt contracts and use it to argue that these different debt experiences result from the interplay of differences in the ability of governments to interfere in the private external debt contracts of their citizens, with differences in the flexibility of state fiscal institutions.
We also assemble a range of empirical evidence that suggests that the US States are less fiscally flexible but more constrained in their ability to interfere in private contracts than the members of other economic unions, which simultaneously exposes the states to public debt crises while insulating them from an external debt crisis affecting private sector borrowers within the state. In contrast, Eurozone nations are more fiscally flexible but have a greater ability to interfere with the contracts, which together allow for more public borrowing at the cost of a joint public and private external debt crisis. Lastly, Canadian provincial governments are both fiscally flexible and limited in their ability to interfere, which allows both for more public borrowing and limits the likelihood of either a public or external debt crisis occurring. We draw lessons from these findings for the future design of Eurozone economic and legal institutions.

Friday, April 17, 2015

NBER Annual Conference on Macroeconomics: Abstracts for Day One

First paper at the NBER Annual Conference on Macroeconomics:

Expectations and Investment, by Nicola Gennaioli, Yueran Ma, and Andrei Shleifer: Abstract Using micro data from the Duke University quarterly survey of Chief Financial Officers, we show that corporate investment plans as well as actual investment are well explained by CFOs’ expectations of earnings growth. The information in expectations data is not subsumed by traditional variables, such as Tobin’s Q or discount rates. We also show that errors in CFO expectations of earnings growth are predictable from past earnings and other data, pointing to an extrapolative structure of expectations and suggesting that expectations may not be rational. This evidence, like earlier findings in finance, points to the usefulness of data on actual expectations for understanding economic behavior.

Second paper:

Trends and Cycles in China's Macroeconomy, by Chun Chang, Kaiji Chen, Daniel Waggoner, and Tao Zha: Abstract We make three contributions in this paper. First, we provide a core of macroeconomic time series usable for systematic research on China. Second, we document, through various empirical methods, the robust findings about striking patterns of trend and cycle. Third, we build a theoretical model that accounts for these facts. The model's mechanism and assumptions are corroborated by institutional details, disaggregated data, and banking time series, all of which are distinctive of Chinese characteristics. The departure of our theoretical model from standard ones offers a constructive framework for studying China's macroeconomy.

Third paper:

Demystifying the Chinese Housing Boom, by Hanming Fang, Quanlin Gu, Wei Xiong, and Li-An Zhou: Abstract We construct housing price indices for 120 major cities in China in 2003-2013 based on sequential sales of new homes within the same housing developments. By using these indices and detailed information on mortgage borrowers across these cities, we find enormous housing price appreciation during the decade, which was accompanied by equally impressive growth in household income, except in a few first-tier cities. Housing market participation by households from the low-income fraction of the urban population remained steady. Nevertheless, bottom-income mortgage borrowers endured severe financial burdens by using price-to-income ratios over eight to buy homes, which reflected their expectations of persistently high income growth into the future. Such future income expectations could contract substantially in the event of a sudden stop in the Chinese economy and present an important source of risk to the housing market.

Fourth paper:

Networks and the Macroeconomy: An Empirical Exploration, by Daron Acemoglu, Ufuk Akcigit, and William Kerr: Abstract The propagation of macroeconomic shocks through input-output and geographic networks can be a powerful driver of macroeconomic fluctuations. We first exposit that in the presence of Cobb-Douglas production functions and consumer preferences, there is a specific pattern of economic transmission whereby demand-side shocks propagate upstream (to input supplying industries) and supply-side shocks propagate downstream (to customer industries) and that there is a tight relationship between the direct impact of a shock and the magnitudes of the downstream and the upstream indirect effects. We then investigate the short-run propagation of four different types of industry-level shocks: two demand-side ones (the exogenous component of the variation in industry imports from China and changes in federal spending) and two supply-side ones (TFP shocks and variation in knowledge/ideas coming from foreign patenting). In each case, we find substantial propagation of these shocks through the input-output network, with a pattern broadly consistent with theory. Quantitatively, the network-based propagation is larger than the direct effects of the shocks, sometimes by several fold. We also show quantitatively large effects from the geographic network, capturing the fact that the local propagation of a shock to an industry will fall more heavily on other industries that tend to collocate with it across local markets. Our results suggest that the transmission of various different types of shocks through economic networks and industry inter-linkages could have first-order implications for the macroeconomy.
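The downstream propagation the abstract describes can be sketched with a stylized Leontief calculation; the three-sector input-output matrix and the 1% shock below are invented numbers, not the paper's estimates. With Cobb-Douglas technologies, a supply-side shock to one sector benefits its customers in proportion to cost shares, those customers' customers in turn, and so on; the total effect is the direct shock passed through the Leontief inverse, and the indirect network component can rival the direct effect.

```python
import numpy as np

# Stylized 3-sector economy: A[i, j] is the share of sector i's costs
# spent on inputs from sector j (invented numbers for illustration).
A = np.array([
    [0.0, 0.3, 0.2],
    [0.2, 0.0, 0.3],
    [0.3, 0.2, 0.0],
])

# A 1% supply-side (TFP) shock hits sector 0 directly. Each round of
# propagation adds A times the previous round's gains, so the total
# effect solves x = d + A @ x, i.e. x = (I - A)^{-1} d.
d = np.array([0.01, 0.0, 0.0])
total = np.linalg.solve(np.eye(3) - A, d)

direct = d.sum()
network = total.sum() - direct  # indirect, network-mediated part
print(total, direct, network)
```

With these illustrative input shares the indirect effect is roughly as large as the direct one, which is the qualitative pattern the authors report (in their data it is sometimes several times larger).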

Thursday, April 16, 2015

Video: Rethinking Macro Policy

Rethinking Macro Policy III: Session 3. Monetary Policy in the Future
Chair: José Viñals; Panelists: Ben Bernanke, Gill Marcus, John Taylor


Rethinking Macro Policy: Session 4. Fiscal Policy in the Future
Chair: Vitor Gaspar; Panelists: Marco Buti, Martin Feldstein, Brad DeLong

Monday, April 13, 2015

In Defense of Modern Macroeconomic Theory

A small part of a much longer post from David Andolfatto (followed by some comments of my own):

In defense of modern macro theory: The 2008 financial crisis was a traumatic event. Like all social trauma, it invoked a variety of emotional responses, including the natural (if unbecoming) human desire to find someone or something to blame. Some of the blame has been directed at segments of the economic profession. It is the nature of some of these criticisms that I'd like to talk about today. ...
The dynamic general equilibrium (DGE) approach is the dominant methodology in macro today. I think this is so because of its power to organize thinking in a logically consistent manner, its ability to generate reasonable conditional forecasts, as well as its great flexibility--a property that permits economists of all political persuasions to make use of the apparatus. ...

The point I want to make here is not that the DGE approach is the only way to go. I am not saying this at all. In fact, I personally believe in the coexistence of many different methodologies. The science of economics is not settled, after all. The point I am trying to make is that the DGE approach is not insensible (despite the claims of many critics who, I think, are sometimes driven by non-scientific concerns). ...

Once again (lest I be misunderstood, which I'm afraid seems unavoidable these days) I am not claiming that DGE is the be-all and end-all of macroeconomic theory. There is still a lot we do not know and I think it would be a good thing to draw on the insights offered by alternative approaches. I do not, however, buy into the accusation that there is "too much math" in modern theory. Math is just a language. Most people do not understand this language and so they have a natural distrust of arguments written in it. ... Before criticizing, either learn the language or appeal to reliable translations...

As for the teaching of macroeconomics, if the crisis has led more professors to pay more attention to financial market frictions, then this is a welcome development. I also fall in the camp that stresses the desirability of teaching more economic history and placing greater emphasis on matching theory with data. ... Thus, one could reasonably expect a curriculum to be modified to include more history, history of thought, heterodox approaches, etc. But this is a far cry from calling for the abandonment of DGE theory. Do not blame the tools for how they were (or were not) used.

I've said a lot of what David says about modern macroeconomic models at one time or another in the past; for example, it's not the tools of macroeconomics, it's how they are used. But I do think he leaves out one important factor: the need to ask the right question (and why we didn't prior to the crisis). This is from August, 2009:

In The Economist, Robert Lucas responds to recent criticism of macroeconomics ("In Defense of the Dismal Science"). Here's my entry at Free Exchange in response to his essay:

Lucas roundtable: Ask the right questions, by Mark Thoma: In his essay, Robert Lucas defends macroeconomics against the charge that it is "valueless, even harmful", and that the tools economists use are "spectacularly useless".

I agree that the analytical tools economists use are not the problem. We cannot fully understand how the economy works without employing models of some sort, and we cannot build coherent models without using analytic tools such as mathematics. Some of these tools are very complex, but there is nothing wrong with sophistication so long as sophistication itself does not become the main goal, and sophistication is not used as a barrier to entry into the theorist's club rather than an analytical device to understand the world.

But all the tools in the world are useless if we lack the imagination needed to build the right models. We ... have to ask the right questions before we can build the right models.

The problem wasn't the tools that macroeconomists use, it was the questions that we asked. The major debates in macroeconomics had nothing to do with the possibility of bubbles causing a financial system meltdown. That's not to say that there weren't models here and there that touched upon these questions, but the main focus of macroeconomic research was elsewhere. ...

The interesting question to me, then, is why we failed to ask the right questions. ...

Why did we, for the most part, fail to ask the right questions? Was it lack of imagination, was it the sociology within the profession, the concentration of power over what research gets highlighted, the inadequacy of the tools we brought to the problem, the fact that nobody will ever be able to predict these types of events, or something else?

It wasn't the tools, and it wasn't lack of imagination. As Brad DeLong points out, the voices were there—he points to Michael Mussa for one—but those voices were not heard. Nobody listened even though some people did see it coming. So I am more inclined to cite the sociology within the profession or the concentration of power as the main factors that caused us to dismiss these voices. ...

I don't know for sure the extent to which the ability of a small number of people in the field to control the academic discourse led to a concentration of power that stood in the way of alternative lines of investigation, or the extent to which the ideology that market prices always tend to move toward their long-run equilibrium values caused us to ignore voices that foresaw the developing bubble and coming crisis. But something caused most of us to ask the wrong questions, and to dismiss the people who got it right, and I think one of our first orders of business is to understand how and why that happened.

Here's an interesting quote from Thomas Sargent along the same lines:

The criticism of real business cycle models and their close cousins, the so-called New Keynesian models, is misdirected and reflects a misunderstanding of the purpose for which those models were devised. These models were designed to describe aggregate economic fluctuations during normal times when markets can bring borrowers and lenders together in orderly ways, not during financial crises and market breakdowns.

Which to me is another way of saying we didn't foresee the need to ask questions (and build models) that would be useful in a financial crisis -- we were focused on models that would explain "normal times" (which is connected to the fact that we thought the Great Moderation would continue, an arrogance on the part of economists that led to the belief that modern policy tools, particularly the Fed's, would prevent major meltdowns, financial or otherwise). That is happening now, so we'll be much more prepared if history repeats itself, but I have to wonder what other questions we should be asking, but aren't.

Let me add one more thing (a few excerpts from a post in 2010) about the sociology within economics:

I want to follow up on the post highlighting attempts to attack the messengers -- attempts to discredit Brad DeLong and Paul Krugman on macroeconomic policy in particular -- rather than engage academically with the message they are delivering (Krugman's response). ...
One of the objections often raised is that Krugman and DeLong are not, strictly speaking, macroeconomists. But if Krugman, DeLong, and others are expressing the theoretical and empirical results concerning macroeconomic policy accurately, does it really matter if we can strictly classify them as macroeconomists? Why is that important except as an attempt to discredit the message they are delivering? ... Attacking people rather than discussing ideas avoids even engaging on the issues. And when it comes to the ideas -- here I am talking most about fiscal policy -- as I've already noted in the previous post, the types of policies Krugman, DeLong, and others have advocated (and I should include myself as well) can be fully supported using modern macroeconomic models. ...
So, in answer to those who objected to my defending modern macro, you are partly right. I do think the tools and techniques macroeconomists use have value, and that the standard macro model in use today represents progress. But I also think the standard macro model used for policy analysis, the New Keynesian model, is unsatisfactory in many ways and I'm not sure it can be fixed. Maybe it can, but that's not at all clear to me. In any case, in my opinion the people who have strong, knee-jerk reactions whenever someone challenges the standard model in use today are the ones standing in the way of progress. It's fine to respond academically, a contest between the old and the new is exactly what we need to have, but the debate needs to be over ideas rather than an attack on the people issuing the challenges.

Tuesday, April 07, 2015

In Search of Better Macroeconomic Models

I have a new column:

In Search of Better Macroeconomic Models: Modern macroeconomic models did not perform well during the Great Recession. What needs to be done to fix them? Can the existing models be patched up, or are brand new models needed? ...

It's mostly about the recent debate on whether we need microfoundations in macroeconomics.

Saturday, April 04, 2015

'Do not Underestimate the Power of Microfoundations'

Simon Wren-Lewis takes a shot at answering Brad DeLong's question about microfoundations:

Do not underestimate the power of microfoundations: Brad DeLong asks why the New Keynesian (NK) model, which was originally put forth as simply a means of demonstrating how sticky prices within an RBC framework could produce Keynesian effects, has managed to become the workhorse of modern macro, despite its many empirical deficiencies. ... Brad says his question is closely related to the “question of why models that are microfounded in ways we know to be wrong are preferable in the discourse to models that try to get the aggregate emergent properties right.”...
Why are microfounded models so dominant? From my perspective this is a methodological question, about the relative importance of ‘internal’ (theoretical) versus ‘external’ (empirical) consistency. ...
 I would argue that the New Classical (counter) revolution was essentially a methodological revolution. However..., it will be a struggle to get macroeconomists below a certain age to admit this is a methodological issue. Instead they view microfoundations as just putting right inadequacies with what went before.
So, for example, you will be told that internal consistency is clearly an essential feature of any model, even if it is achieved by abandoning external consistency. ... In essence, many macroeconomists today are blind to the fact that adopting microfoundations is a methodological choice, rather than simply a means of correcting the errors of the past.
I think this has two implications for those who want to question the microfoundations hegemony. The first is that the discussion needs to be about methodology, rather than individual models. Deficiencies with particular microfounded models, like the NK model, are generally well understood, and from a microfoundations point of view simply provide an agenda for more research. Second, lack of familiarity with methodology means that this discussion cannot presume knowledge that is not there. ... That makes discussion difficult, but I’m not sure it makes it impossible.

Saturday, March 28, 2015

'Unreal Keynesians'

Paul Krugman:

Unreal Keynesians: Brad DeLong points me to Lars Syll declaring that I am not a “real Keynesian”, because I use equilibrium models and don’t emphasize the instability of expectations. ...
I don’t care whether Hicksian IS-LM is Keynesian in the sense that Keynes himself would have approved of it, and neither should you. What you should ask is whether that approach has proved useful — and whether the critics have something better to offer.
And as I have often argued, these past 6 or 7 years have in fact been a triumph for IS-LM. Those of us using IS-LM made predictions about the quiescence of interest rates and inflation that were ridiculed by many on the right, but have been completely borne out in practice. We also predicted much bigger adverse effects from austerity than usual because of the zero lower bound, and that has also come true. ...

Wednesday, March 25, 2015

'Anti-Keynesian Delusions'

Paul Krugman continues the discussion on the use of the Keynesian model:

Anti-Keynesian Delusions: I forgot to congratulate Mark Thoma on his tenth blogoversary, so let me do that now. ...
Today Mark includes a link to one of his own columns, a characteristically polite and cool-headed response to the latest salvo from David K. Levine. Brad DeLong has also weighed in, less politely.
I’d like to weigh in with a more general piece of impoliteness, and note a strong empirical regularity in this whole area. Namely, whenever someone steps up to declare that Keynesian economics is logically and empirically flawed, has been proved wrong and refuted, you know what comes next: a series of logical and empirical howlers — crude errors of reasoning, assertions of fact that can be checked and rejected in a minute or two.
Levine doesn’t disappoint. ...

He goes on to explain in detail.

Update: Brad DeLong also comments.

Tuesday, March 24, 2015

'Macro Wars: The Attack of the Anti-Keynesians'

I have a new column:

Macro Wars: The Attack of the Anti-Keynesians, by Mark Thoma: The ongoing war between the Keynesians and the anti-Keynesians appears to be heating up again. The catalyst for this round of fighting is The Keynesian Illusion by David K. Levine, which elicited responses such as this and this from Brad DeLong and Nick Rowe.
The debate is about the source of economic fluctuations and the government’s ability to counteract them with monetary and fiscal policy. One of the issues is the use of “old fashioned” Keynesian models – models that have supposedly been rejected by macroeconomists in favor of modern macroeconomic models – to explain and understand the Great Recession and to make monetary and fiscal policy recommendations. As Levine says, “Robert Lucas, Edward Prescott, and Thomas Sargent … rejected Keynesianism because it doesn't work… As it happens we have developed much better theories…”
I believe the use of “old-fashioned” Keynesian models to analyze the Great Recession can be defended. ...

Monday, March 23, 2015

Paul Krugman: This Snookered Isle

Mediamacro:

This Snookered Isle, by Paul Krugman, Commentary, NY Times: The 2016 election is still 19 mind-numbing, soul-killing months away. There is, however, another important election in just six weeks, as Britain goes to the polls. And many of the same issues are on the table.
Unfortunately, economic discourse in Britain is dominated by a misleading fixation on budget deficits. Worse, this bogus narrative has infected supposedly objective reporting; media organizations routinely present as fact propositions that are contentious if not just plain wrong.
Needless to say, Britain isn’t the only place where things like this happen. A few years ago, at the height of our own deficit fetishism, the American news media showed some of the same vices. ... Reporters would drop all pretense of neutrality and cheer on proposals for entitlement cuts.
In the United States, however, we seem to have gotten past that. Britain hasn’t.
The narrative I’m talking about goes like this: In the years before the financial crisis, the British government borrowed irresponsibly... As a result, by 2010 Britain was at imminent risk of a Greek-style crisis; austerity policies, slashing spending in particular, were essential. And this turn to austerity is vindicated by Britain’s low borrowing costs, coupled with the fact that the economy, after several rough years, is now growing quite quickly.
Simon Wren-Lewis of Oxford University has dubbed this narrative “mediamacro.” As his coinage suggests, this is what you hear all the time on TV and read in British newspapers, presented not as the view of one side of the political debate but as simple fact.
Yet none of it is true. ...
Given all this, you might wonder how mediamacro gained such a hold on British discourse. Don’t blame economists. ... This media orthodoxy has become entrenched despite, not because of, what serious economists had to say.
Still, you can say the same of Bowles-Simpsonism in the United States... It was all about posturing, about influential people believing that pontificating about the need to make sacrifices — or, actually, for other people to make sacrifices — is how you sound wise and serious. ...
As I said, in the United States we have mainly gotten past that, for a variety of reasons — among them, I suspect, the rise of analytical journalism, in places like The Times’s The Upshot. But Britain hasn’t; an election that should be about real problems will, all too likely, be dominated by mediamacro fantasies.

Wednesday, March 18, 2015

'Is the Walrasian Auctioneer Microfounded?'

Simon Wren-Lewis (he says this is "For macroeconomists"):

Is the Walrasian Auctioneer microfounded?: I found this broadside against Keynesian economics by David K. Levine interesting. It is clear at the end that he is child of the New Classical revolution. Before this revolution he was far from ignorant of Keynesian ideas. He adds: “Knowledge of Keynesianism and Keynesian models is even deeper for the great Nobel Prize winners who pioneered modern macroeconomics - a macroeconomics with people who buy and sell things, who save and invest - Robert Lucas, Edward Prescott, and Thomas Sargent among others. They also grew up with Keynesian theory as orthodoxy - more so than I. And we rejected Keynesianism because it doesn't work not because of some aesthetic sense that the theory is insufficiently elegant.”
The idea is familiar: New Classical economists do things properly, by founding their analysis in the microeconomics of individual production, savings and investment decisions. [2] It is no surprise therefore that many of today’s exponents of this tradition view their endeavour as a natural extension of the Walrasian General Equilibrium approach associated with Arrow, Debreu and McKenzie. But there is one agent in that tradition that is as far from microfoundations as you can get: the Walrasian auctioneer. It is this auctioneer, and not people, who typically sets prices. ...
Now your basic New Keynesian model contains a huge number of things that remain unrealistic or are just absent. However I have always found it extraordinary that some New Classical economists declare such models as lacking firm microfoundations, when these models at least try to make up for one area where RBC models lack any microfoundations at all, which is price setting. A clear case of the pot calling the kettle black! I have never understood why New Keynesians can be so defensive about their modelling of price setting. Their response every time should be ‘well at least it’s better than assuming an intertemporal auctioneer’.[1] ...
As to the last sentence in the quote from Levine above, I have talked before about the assertion that Keynesian economics did not work, and the implication that RBC models work better. He does not talk about central banks, or monetary policy. If he had, he would have to explain why most of the people working for them seem to believe that New Keynesian type models are helpful in their job of managing the economy. Perhaps these things are not mentioned because it is so much easier to stay living in the 1980s, in those glorious days (for some) when it appeared as if Keynesian economics had been defeated for good.

'Arezki, Ramey, and Sheng on News Shocks'

I was at this conference as well. This paper was very well received (it has been difficult to find evidence that news generates business cycles, in part because it's been difficult to find a "clean" shock):

Arezki, Ramey, and Sheng on news shocks: I attended the NBER EFG (economic fluctuations and growth) meeting a few weeks ago, and saw a very nice paper by Rabah Arezki, Valerie Ramey, and Liugang Sheng, "News Shocks in Open Economies: Evidence from Giant Oil Discoveries." (There were a lot of nice papers, but this one is more bloggable.)

They look at what happens to economies that discover they have a lot of oil. ... An oil discovery is a well identified "news shock."

Standard productivity shocks are a bit nebulous, and they alter two things at once: they give greater productivity, and hence a greater incentive to work today, and they also carry news about more income in the future.

An oil discovery is well publicized. It incentivizes a small investment in oil drilling, but mostly is pure news of an income flow in the future. It does not affect overall labor productivity or other changes to preferences or technology.
Rabah, Valerie, and Liugang then construct a straightforward macro model of such an event. ...[describes model and results]...

Valerie, presenting the paper, was a bit discouraged. This "news shock" doesn't generate a pattern that looks like standard recessions, because GDP and employment go in the opposite direction.

I am much more encouraged. Here are macroeconomies behaving exactly as they should, in response to a shock where for once we really know what the shock is. And in response to a shock with a nice dynamic pattern, which we also really understand.

My comment was something to the effect of "this paper is much more important than you think. You match the dynamic response of economies to this large and very well identified shock with a standard, transparent and intuitive neoclassical model. Here's a list of some of the ingredients you didn't need: Sticky prices, sticky wages, money, monetary policy (i.e., interest rates that respond via a policy rule to output and inflation or zero bounds that stop them from doing so), home bias, segmented financial markets, credit constraints, liquidity constraints, hand-to-mouth consumers, financial intermediation, liquidity spirals, fire sales, leverage, sudden stops, hot money, collateral constraints, incomplete markets, idiosyncratic risks, strange preferences including habits, nonexpected utility, ambiguity aversion, and so forth, behavioral biases, or rare disasters. If those ingredients are really there, they ought to matter for explaining the response to your shocks too. After all, there is only one economic structure, which is hit by many shocks. So your paper calls into question just how many of those ingredients are really there at all."

Thomas Philippon, whose previous paper had a pretty masterful collection of a lot of those ingredients, quickly pointed out my overstatement. One does not need every ingredient to understand every shock. Constraint variables are inequalities. A positive news shock may not cause credit constraints etc. to bind, while a negative shock may reveal them.

Good point. And really, the proof is in the pudding. If those ingredients are not necessary, then I should produce a model without them that produces events like 2008. But we've been debating the ingredients and shock necessary to explain 1932 for 82 years, so that approach, though correct, might take a while.

In the meantime, we can still cheer successful simple models and well identified shocks on the few occasions that they appear and fit data so nicely. Note to graduate students: this paper is a really nice example to follow for its integration of clear theory and excellent empirical work.
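The mechanism behind the consumption-up, hours-down response can be sketched with a bare-bones permanent-income calculation. This is only an illustration of the wealth-effect logic, not the authors' model; every number below (discount factor, horizon, delay, labor-supply elasticity) is invented.

```python
# News at t = 0 that an oil income flow will arrive starting at t = delay.
# Consumption jumps immediately to the new permanent income; a simple
# wealth effect then lowers labor supply before any oil is produced.
beta = 0.96            # discount factor; r set so that beta * (1 + r) = 1
r = 1 / beta - 1
T = 20                 # planning horizon (periods)
delay = 5              # production starts 5 periods after the discovery
oil = 1.0              # oil income per period once production starts

# Present value at t = 0 of the announced oil income stream
pv = sum(oil / (1 + r) ** t for t in range(delay, T))

# Constant (annuity) consumption increase over t = 0..T-1 with the same PV
dc = pv * r / ((1 + r) * (1 - (1 + r) ** -T))

# Assumed wealth-effect elasticity of labor supply (0.5 is an invented number)
dn = -0.5 * dc

print(f"consumption rises by {dc:.3f} per period on impact")
print(f"hours fall by {abs(dn):.3f} on impact, before any oil arrives")
```

Consumption rises and hours fall immediately on the news, before any oil income arrives, so non-oil output and employment move opposite to consumption, which is exactly the pattern that looks unlike a standard recession.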

Saturday, March 14, 2015

'John and Maynard’s Excellent Adventure'

Paul Krugman defends IS-LM analysis (I'd make one qualification: models are built to answer specific questions; we do not have one grand unifying model to use for all questions. IS-LM models were built to answer exactly the kinds of questions we encountered during the Great Recession, and the IS-LM model provided good answers (especially if one remembers where the model encounters difficulties). DSGE models were built to address other issues, and it's not surprising they didn't do very well when they were pushed to address questions they weren't designed to answer. The best model to use depends upon the question one is asking):

John and Maynard’s Excellent Adventure: When I tell people that macroeconomic analysis has been triumphantly successful in recent years, I tend to get strange looks. After all, wasn’t everyone predicting lots of inflation? Didn’t policymakers get it all wrong? Haven’t the academic economists been squabbling nonstop?
Well, as a card-carrying economist I disavow any responsibility for Rick Santelli and Larry Kudlow; I similarly declare that Paul Ryan and Olli Rehn aren’t my fault. As for the economists’ disputes, well, let me get to that in a bit.
I stand by my claim, however. The basic macroeconomic framework that we all should have turned to, the framework that is still there in most textbooks, performed spectacularly well: it made strong predictions that people who didn’t know that framework found completely implausible, and those predictions were vindicated. And the framework in question – basically John Hicks’s interpretation of John Maynard Keynes – was very much the natural way to think about the issues facing advanced countries after 2008. ...
I call this a huge success story – one of the best examples in the history of economics of getting things right in an unprecedented environment.
The sad thing, of course, is that this incredibly successful analysis didn’t have much favorable impact on actual policy. Mainly that’s because the Very Serious People are too serious to play around with little models; they prefer to rely on their sense of what markets demand, which they continue to consider infallible despite having been wrong about everything. But it also didn’t help that so many economists also rejected what should have been obvious.
Why? Many never learned simple macro models – if it doesn’t involve microfoundations and rational expectations, preferably with difficult math, it must be nonsense. (Curiously, economists in that camp have also proved extremely prone to basic errors of logic, probably because they have never learned to work through simple stories.) Others, for what looks like political reasons, seemed determined to come up with some reason, any reason, to be against expansionary monetary and fiscal policy.
But that’s their problem. From where I sit, the past six years have been hugely reassuring from an intellectual point of view. The basic model works; we really do know what we’re talking about.

[The original is quite a bit longer.]
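For readers who want the mechanics behind the zero-lower-bound claims above, here is a toy IS-LM calculation. The linear functional forms and all parameter values are invented for illustration; this is a sketch of the textbook logic, not a calibrated model.

```python
def solve_islm(M, G, a=10.0, b=40.0, k=0.5, h=100.0, c=0.5):
    """Solve a linear IS-LM system with a zero lower bound on r.
    IS: (1 - c) * Y + b * r = a + G    LM: k * Y - h * r = M, with r >= 0."""
    det = (1 - c) * h + b * k
    Y = (h * (a + G) + b * M) / det
    r = (k * (a + G) - (1 - c) * M) / det
    if r < 0:                      # zero lower bound binds
        r = 0.0
        Y = (a + G) / (1 - c)      # the IS curve alone pins down output
    return Y, r

Y0, r0 = solve_islm(M=10, G=5)     # normal times: money moves r and Y
Y1, r1 = solve_islm(M=50, G=5)     # large money expansion: ZLB binds
Y2, r2 = solve_islm(M=50, G=10)    # fiscal expansion at the ZLB
print(f"normal times     : Y = {Y0:.1f}, r = {r0:.3f}")
print(f"at the ZLB       : Y = {Y1:.1f}, r = {r1:.3f}")
print(f"fiscal expansion : Y = {Y2:.1f}, r = {r2:.3f}")
```

At the zero lower bound, further increases in M leave Y and r unchanged (no pressure on interest rates from "printing money"), while raising G moves Y by the full multiplier 1/(1 - c). Those are the kinds of predictions about quiescent interest rates and potent fiscal policy that the post says were borne out.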

Thursday, March 05, 2015

'Economists' Biggest Failure'

Noah Smith:

Economists' Biggest Failure: One of the biggest things that economists get grief about is their failure to predict big events like recessions. ... 
Pointing this out usually leads to the eternal (and eternally fun) debate over whether economics is a real science. The profession's detractors say that if you don’t make successful predictions, you aren’t a science. Economists will respond that seismologists can’t forecast earthquakes, and meteorologists can’t forecast hurricanes, and who cares what’s really a “science” anyway. 
The debate, however, misses the point. Forecasts aren’t the only kind of predictions a science can make. In fact, they’re not even the most important kind. 
Take physics for example. Sometimes physicists do make forecasts -- for example, eclipses. But those are the exception. Usually, when you make a new physics theory, you use it to predict some new phenomenon... For example, quantum mechanics has gained a lot of support from predicting strange new things like quantum tunneling or quantum teleportation.
Other times, a theory will predict things we have seen before, but will describe them in terms of other things that we thought were totally separate, unrelated phenomena. This is called unification, and it’s a key part of what philosophers think science does. For example, the theory of electromagnetism says that light, electric current, magnetism, and radio waves are all really the same phenomenon. Pretty neat! ...
So that’s physics. What about economics? Actually, econ has a number of these successes too. When Dan McFadden used his Random Utility Model to predict how many people would ride San Francisco's Bay Area Rapid Transit system..., he got it right. And he got many other things right with the same theory -- it wasn’t developed to explain only train ridership.
Unfortunately, though, this kind of success isn't very highly regarded in the economics world... Maybe now, with the ascendance of empirical economics and a decline in theory, we’ll see a focus on producing fewer but better theories, more unification, and more attempts to make novel predictions. Someday, maybe macroeconomists will even be able to make forecasts! But let’s not get our hopes up.
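The McFadden example above has a simple core: a random-utility model with extreme-value errors yields multinomial logit choice probabilities. A sketch with invented utilities and mode names (the real BART study estimated utilities from data on travel times, costs, and traveler characteristics; nothing below is from it):

```python
import math

def logit_shares(utilities):
    """Random-utility choice shares: P(i) = exp(V_i) / sum_j exp(V_j)."""
    m = max(utilities.values())                  # subtract max for stability
    expV = {mode: math.exp(v - m) for mode, v in utilities.items()}
    total = sum(expV.values())
    return {mode: e / total for mode, e in expV.items()}

# Hypothetical systematic utilities for three commute modes
V = {"car": 1.0, "bus": 0.2, "bart": 0.6}
shares = logit_shares(V)

commuters = 100_000                              # hypothetical market size
forecast = {mode: round(s * commuters) for mode, s in shares.items()}
print(forecast)
```

The same estimated utilities can then forecast ridership for a mode that did not exist when the data were collected, which is what made the BART prediction a genuine out-of-sample test.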

I've addressed this question many times, e.g. in 2009, and to me the distinction is between forecasting the future, and understanding why certain phenomena occur (re-reading, it's a bit repetitive):

Are Macroeconomic Models Useful?: There has been no shortage of effort devoted to predicting earthquakes, yet we still can't see them coming far enough in advance to move people to safety. When a big earthquake hits, it is a surprise. We may be able to look at the data after the fact and see that certain stresses were building, so it looks like we should have known an earthquake was going to occur at any moment, but these sorts of retrospective analyses have not allowed us to predict the next one. The exact timing and location is always a surprise.
Does that mean that science has failed? Should we criticize the models as useless?
No. There are two uses of models. One is to understand how the world works, another is to make predictions about the future. We may never be able to predict earthquakes far enough in advance and with enough specificity to allow us time to move to safety before they occur, but that doesn't prevent us from understanding the science underlying earthquakes. Perhaps as our understanding increases prediction will be possible, and for that reason scientists shouldn't give up trying to improve their models, but for now we simply cannot predict the arrival of earthquakes.
However, even though earthquakes cannot be predicted, at least not yet, it would be wrong to conclude that science has nothing to offer. First, understanding how earthquakes occur can help us design buildings and make other changes to limit the damage even if we don't know exactly when an earthquake will occur. Second, if an earthquake happens and, despite our best efforts to insulate against it, there are still substantial consequences, science can help us to offset and limit the damage. To name just one example, the science surrounding disease transmission helps us avoid contaminated water supplies after a disaster, something that often compounds tragedy when this science is not available. But there are lots of other things we can do as well, including using the models to determine where help is most needed.
So even if we cannot predict earthquakes, and we can't, the models are still useful for understanding how earthquakes happen. This understanding is valuable because it helps us to prepare for disasters in advance, and to determine policies that will minimize their impact after they happen.
All of this can be applied to macroeconomics. Whether or not we should have predicted the financial earthquake is a question that has been debated extensively, so I am going to set that aside. One side says financial market price changes, like earthquakes, are inherently unpredictable -- we will never predict them no matter how good our models get (the efficient markets types). The other side says the stresses that were building were obvious. Like the stresses that build when tectonic plates moving in opposite directions rub against each other, it was only a question of when, not if. (But even when increasing stress between two plates is observable, scientists cannot tell you for sure if a series of small earthquakes will relieve the stress and do little harm, or if there will be one big adjustment that relieves the stress all at once.)
With respect to the financial crisis, economists expected lots of little, small harm-causing adjustments; instead we got the "big one," and the "buildings and other structures" we thought could withstand the shock all came crumbling down. On prediction in economics, perhaps someday improved models will allow us to do better than we have so far at predicting the exact timing of crises, and I think that earthquakes provide some guidance here. You have to ask first if stress is building in a particular sector, and then ask if action needs to be taken because the stress has reached dangerous levels, levels that might result in a big crash rather than a series of small stress-relieving adjustments. I don't think our models are very good at detecting accumulating stress...
Whether the financial crisis should have been predicted or not, the fact that it wasn't predicted does not mean that macroeconomic models are useless any more than the failure to predict earthquakes implies that earthquake science is useless. As with earthquakes, even when prediction is not possible (or missed), the models can still help us to understand how these shocks occur. That understanding is useful for getting ready for the next shock, or even preventing it, and for minimizing the consequences of shocks that do occur. 
But we have done much better at dealing with the consequences of unexpected shocks ex post than we have at getting ready for them ex ante. Our equivalent of getting buildings ready for an earthquake before it happens is to use changes in institutions and regulations to insulate the financial sector and the larger economy from the negative consequences of financial and other shocks. Here I think economists made mistakes -- our "buildings" were not strong enough to withstand the earthquake that hit. We could argue that the shock was so big that no amount of reasonable advance preparation would have stopped the "building" from collapsing, but I think it's more the case that enough time had passed since the last big financial earthquake that we forgot what we needed to do. We allowed new buildings to be constructed without the proper safeguards.
However, that doesn't mean the models themselves were useless. The models were there and could have provided guidance, but the implied "building codes" were ignored. Greenspan and others assumed no private builder would ever construct a building that couldn't withstand an earthquake -- the market would force them to take this into account. But they were wrong about that, and even Greenspan now admits that government building codes are necessary. It wasn't the models, it was how they were used (or rather not used) that prevented us from putting safeguards into place.
We haven't failed at this entirely, though. For example, we have had some success at putting safeguards into place before shocks occur: automatic stabilizers have done a lot to insulate against the negative consequences of the recession (though they could have been larger to stop the building from swaying as much as it has). So it's not proper to say that our models have not helped us to prepare in advance at all; the insulation that social insurance programs provide is extremely important to recognize. But it is the case that we could have and should have done better at preparing before the shock hit.
I'd argue that our most successful use of models has been in cleaning up after shocks rather than predicting, preventing, or insulating against them through pre-crisis preparation. When, despite our best efforts to prevent a recession or to minimize its impact ex ante, we get one anyway, we can use our models as a guide to monetary, fiscal, and other policies that help to reduce the consequences of the shock (this is the equivalent of, after a disaster hits, making sure that the water is safe to drink, people have food to eat, there is a plan for rebuilding quickly and efficiently, etc.). As noted above, we haven't done a very good job at predicting big crises, and we could have done a much better job at implementing regulatory and institutional changes that prevent or limit the impact of shocks. But we do a pretty good job of stepping in with policy actions that minimize the impact of shocks after they occur. This recession was bad, but it wasn't another Great Depression, as it might have been without policy intervention.
Whether or not we will ever be able to predict recessions reliably, it's important to recognize that our models still provide considerable guidance for actions we can take before and after large shocks that minimize their impact and maybe even prevent them altogether (though we will have to do a better job of listening to what the models have to say). Prediction is important, but it's not the only use of models.

Monday, January 26, 2015

'Does Monopoly Power Cause Inflation? (1968 and all that)'

Nick Rowe:

Does monopoly power cause inflation? (1968 and all that): Here's a question for you: Suppose there is a permanent increase in monopoly power across the economy (either firms having more monopoly power in output markets, or unions having more monopoly power in labour markets). Would that permanent increase in monopoly power cause a permanent increase in the inflation rate?
Most economists today would answer "no" to that question. It might maybe cause a temporary once-and-for-all rise in the price level, but it would not cause a permanent increase in the inflation rate. The question just sounds strange to modern economists' ears. They would much prefer to discuss whether a permanent increase in monopoly power caused a permanent reduction in real output and employment. What has monopoly power got to do with inflation?
To economists 40 or 50 years ago, the question would not have sounded strange at all. Many (maybe most?) economists would have answered "yes" to that question. ...

 

Saturday, January 10, 2015

'Orthodoxy, Heterodoxy, and Ideology'

Paul Krugman:

Orthodoxy, Heterodoxy, and Ideology: Many economists responded badly to the economic crisis. And there’s a lot wrong with mainstream economic analysis. But how closely are these two assertions related? Not as much as you might think. So I’m very much in accord with Simon Wren-Lewis on the remarkable unhelpfulness of recent heterodox assaults on the field. Not that there’s anything wrong with being heterodox in general; but a lot of what we’ve been seeing misidentifies the problem, and if anything gives aid and comfort to the wrong people.
The point is that standard macroeconomics does NOT justify the attacks on fiscal stimulus and the embrace of austerity. On these issues, people like Simon and myself have been following well-established models and analyses, while the austerians have been making up new stuff and/or rediscovering old fallacies to justify the policies they want. Formal modeling and quantitative analysis doesn’t justify the austerian position; on the contrary, austerians had to throw out the models and abandon statistical principles to justify their claims.
Let’s look at several examples. ...

See also Chris Dillow: Heterodox economics & the left.

It's remarkable how many people rejected the conclusions of *modern* macroeconomic models (or invented nonsense) in order to oppose fiscal policy. It seemed to have more to do with ideology (the government can't possibly help no matter what the model says...) and identification (I'm a serious macroeconomist, don't lump me in with all those old-fashioned Keynesian hippie types) than with standard macroeconomic analysis.

On this point, see Simon Wren-Lewis: Faith based macroeconomics.

Thursday, December 18, 2014

'What’s the Matter with Economics?': An Exchange

Arnold Packer and Jeff Madrick respond to Alan Blinder in the NYRB, and he replies:

‘What’s the Matter with Economics?’: An Exchange: In response to: What’s the Matter with Economics? from the December 18, 2014 issue ...
To the Editors:
Alan Blinder is one of the finest mainstream economists around. But to read his review of my book, you’d think that nothing was wrong with economics in recent decades except as it is practiced by a few right-wingers.
This is of course not the case. ...
Jeff Madrick
New York City
Alan S. Blinder replies:
According to both Jeff Madrick and Arnie Packer, I claim “that except for some right-wingers outside the ‘mainstream’…little is the matter” with economics. (These are Packer’s words; Madrick’s are similar.) But it’s not true. I think there is lots wrong with mainstream economics.
For starters, my review explicitly agreed with Madrick that (a) ideological predispositions infect economists’ conclusions far too much; (b) economics has drifted to the right (along with the American body politic); and (c) some economists got carried away by the allure of the efficient markets hypothesis. I also added a few indictments of my own: that we economists have failed to convey even the most basic economic principles to the public; and that some of our students turned Adam Smith’s invisible hand into Gordon Gekko’s “greed is good.” ...
Yet Madrick still insists that “economists rely on a fairly pure version of the invisible hand most of the time.” Not us mainstreamers. I’m a member of the tribe, I live among these people every day, and—trust me—we really don’t apply the “pure version” to the real world. For example, many of us see reasons for a minimum wage, mandatory Social Security, progressive taxation, carbon taxes, and a whole variety of financial regulations—to name just a few. ...

[Hard to summarize this one with a few excerpts -- I left a lot out...]

Sunday, December 14, 2014

Real Business Cycle Theory

Roger Farmer:

Real business cycle theory and the high school Olympics: I have lost count of the number of times I have heard students and faculty repeat the idea in seminars, that “all models are wrong”. This aphorism, attributed to George Box, is the battle cry of the Minnesota calibrator, a breed of macroeconomist, inspired by Ed Prescott, one of the most important and influential economists of the last century.
Of course all models are wrong. That is trivially true: it is the definition of a model. But the cry has been used for three decades to poke fun at attempts to use serious econometric methods to analyze time series data. Time series methods were inconvenient to the nascent Real Business Cycle Program that Ed pioneered because the models that he favored were, and still are, overwhelmingly rejected by the facts. That is inconvenient. Ed’s response was pure genius. If the model and the data are in conflict, the data must be wrong. ...

After explaining, he concludes:

We don't have to play by Ed's rules. We can use the methods developed by Rob Engle and Clive Granger as I have done here. Once we allow aggregate demand to influence permanently the unemployment rate, the data do not look kindly on either real business cycle models or on the new-Keynesian approach. It's time to get serious about macroeconomic science...

Thursday, November 27, 2014

MarkSpeaks

Simon Wren-Lewis:

As Mark Thoma often says, the problem is with macroeconomists rather than macroeconomics.

Much, much more here.

Saturday, November 15, 2014

'The Unwisdom of Crowding Out'

Here's Paul Krugman's response to the Vox EU piece by Peter Temin and David Vines that I posted yesterday:

The Unwisdom of Crowding Out (Wonkish): I am, to my own surprise, not too happy with the defense of Keynes by Peter Temin and David Vines in VoxEU. Peter and David are of course right that Keynes has a lot to teach us, and are also right that the anti-Keynesians aren’t just making really bad arguments; they’re making the very same really bad arguments Keynes refuted 80 years ago.
But the Temin-Vines piece seems to conflate several different bad arguments under the heading of “Ricardian equivalence”, and in so doing understates the badness.
The anti-Keynesian proposition is that government spending to boost a depressed economy will fail, because it will lead to an equal or greater fall in private spending — it will crowd out investment and maybe consumption, and therefore accomplish nothing except a shift in who spends. But why do the AKs claim this will happen? I actually see five arguments out there — two (including the actual Ricardian equivalence argument) completely and embarrassingly wrong on logical grounds, three more that aren’t logical nonsense but fly in the face of the evidence.
Here they are...[explains all five]...

He ends with:

My point is that you do a disservice to the debate by calling all of these things Ricardian equivalence; and the nature of that disservice is that you end up making the really, really bad arguments sound more respectable than they are. We do not want to lose sight of the fact that many influential people, including economists with impressive CVs, responded to macroeconomic crisis with crude logical fallacies that reflected not just sloppy thinking but ignorance of history.

Tuesday, October 28, 2014

Are Economists Ready for Income Redistribution?

I have a new column:

Are Economists Ready for Income Redistribution?: When the Great Recession hit and it became clear that monetary policy alone would not be enough to prevent a severe, prolonged downturn, fiscal policy measures – a combination of tax cuts and new spending – were used to try to limit the damage to the economy. Unfortunately, research on fiscal policy was all but absent from the macroeconomics literature and, for the most part, policymakers were operating in the dark, basing decisions on what they believed to be true rather than on solid theoretical and empirical evidence.
Fiscal policy will be needed again in the future, either in a severe downturn or perhaps to address the problem of growing inequality, and macroeconomists must do a better job of providing the advice that policymakers need to make informed fiscal policy decisions. ...

The question of redistribution is coming, and we need to be ready when it does.

Tuesday, October 14, 2014

'The Mythical Phillips Curve?'

An entry in the ongoing debate over the Phillips curve:

The mythical Phillips curve?, by Simon Wren-Lewis, mainly macro: Suppose you had just an hour to teach the basics of macroeconomics, what relationship would you be sure to include? My answer would be the Phillips curve. With the Phillips curve you can go a long way to understanding what monetary policy is all about.
My faith in the Phillips curve comes from simple but highly plausible ideas. In a boom, demand is strong relative to the economy’s capacity to produce, so prices and wages tend to rise faster than in an economic downturn. However workers do not normally suffer from money illusion: in a boom they want higher real wages to go with increasing labour supply. Equally firms are interested in profit margins, so if costs rise, so will prices. As firms do not change prices every day, they will think about future as well as current costs. That means that inflation depends on expected inflation as well as some indicator of excess demand, like unemployment.
Microfoundations confirm this logic, but add a crucial point that is not immediately obvious. Inflation today will depend on expectations about inflation in the future, not expectations about current inflation. That is the major contribution of New Keynesian theory to macroeconomics. ...[turns to evidence]...

Is it this data which makes me believe in the Phillips curve? To be honest, no. Instead it is the basic theory that I discussed at the beginning of this post. It may also be because I’m old enough to remember the 1970s when there were still economists around who denied that lower unemployment would lead to higher inflation, or who thought that the influence of expectations on inflation was weak, or who thought any relationship could be negated by direct controls on wages and prices, with disastrous results. But given how ‘noisy’ macro data normally is, I find the data I have shown here pretty consistent with my beliefs.
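To make the distinction Wren-Lewis draws concrete, the two formulations he contrasts are usually written roughly as follows (the notation here is a generic sketch, not taken from his post):

```latex
% Traditional expectations-augmented Phillips curve:
% inflation depends on expectations of *current* inflation
% and on slack (the unemployment gap).
\pi_t = \pi_t^{e} - \alpha \,(u_t - u^{*}), \qquad \alpha > 0

% New Keynesian Phillips curve: because firms change prices
% infrequently, inflation today depends on expectations of
% *future* inflation, plus the same measure of excess demand.
\pi_t = \beta \,\mathbb{E}_t \pi_{t+1} - \kappa \,(u_t - u^{*}), \qquad 0 < \beta \le 1,\; \kappa > 0
```

The shift from \(\pi_t^{e}\) (expected current inflation) to \(\mathbb{E}_t \pi_{t+1}\) (expected future inflation) is the "crucial point that is not immediately obvious" that Wren-Lewis credits to New Keynesian theory.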