Category Archive for: Methodology

Thursday, August 27, 2015

'The Day Macroeconomics Changed'

Simon Wren-Lewis:

The day macroeconomics changed: It is of course ludicrous, but who cares. The day of the Boston Fed conference in 1978 is fast taking on a symbolic significance. It is the day that Lucas and Sargent changed how macroeconomics was done. Or, if you are Paul Romer, it is the day that the old guard spurned the ideas of the newcomers, and ensured we had a New Classical revolution in macro rather than a New Classical evolution. Or if you are Ray Fair..., who was at the conference, it is the day that macroeconomics started to go wrong.
Ray Fair is a bit of a hero of mine. ...
I agree with Ray Fair that what he calls Cowles Commission (CC) type models, and I call Structural Econometric Model (SEM) type models, together with the single equation econometric estimation that lies behind them, still have a lot to offer, and that academic macro should not have turned its back on them. Having spent the last fifteen years working with DSGE models, I am more positive about their role than Fair is. Unlike Fair, I want “more bells and whistles on DSGE models”. I also disagree about rational expectations...
Three years ago, when Andy Haldane suggested that DSGE models were partly to blame for the financial crisis, I wrote a post that was critical of Haldane. What I thought then, and continue to believe, is that the Bank had the information and resources to know what was happening to bank leverage, and it should not be using DSGE models as an excuse for not being more public about their concerns at the time.
However, if we broaden this out from the Bank to the wider academic community, I think he has a legitimate point. ...
What about the claim that only internally consistent DSGE models can give reliable policy advice? For another project, I have been rereading an AEJ Macro paper written in 2008 by Chari et al, where they argue that New Keynesian models are not yet useful for policy analysis because they are not properly microfounded. They write “One tradition, which we prefer, is to keep the model very simple, keep the number of parameters small and well-motivated by micro facts, and put up with the reality that such a model neither can nor should fit most aspects of the data. Such a model can still be very useful in clarifying how to think about policy.” That is where you end up if you take a purist view about internal consistency, the Lucas critique and all that. It in essence amounts to the following approach: if I cannot understand something, it is best to assume it does not exist.

Wednesday, August 26, 2015

Ray Fair: The Future of Macro

Ray Fair:

The Future of Macro: There is an interesting set of recent blogs--- Paul Romer 1, Paul Romer 2, Brad DeLong, Paul Krugman, Simon Wren-Lewis, and Robert Waldmann---on the history of macro beginning with the 1978 Boston Fed conference, with Lucas and Sargent versus Solow. As Romer notes, I was at this conference and presented a 97-equation model. This model was in the Cowles Commission (CC) tradition, which, as the blogs note, quickly went out of fashion after 1978. (In the blogs, models in the CC tradition are generally called simulation models or structural econometric models or old fashioned models. Below I will call them CC models.)
I will not weigh in on who was responsible for what. Instead, I want to focus on what future direction macro research might take. There is unhappiness in the blogs, to varying degrees, with all three types of models: DSGE, VAR, CC. Also, Wren-Lewis points out that while other areas of economics have become more empirical over time, macroeconomics has become less. The aim is for internal theoretical consistency rather than the ability to track the data.
I am one of the few academics who has continued to work with CC models. They were rejected for basically three reasons: they do not assume rational expectations (RE), they are not identified, and the theory behind them is ad hoc. This sounds serious, but I think it is in fact not. ...

He goes on to explain why. He concludes with:

... What does this imply about the best course for future research? I don't get a sense from the blog discussions that either the DSGE methodology or the VAR methodology is the way to go. Of course, no one seems to like the CC methodology either, but, as I argue above, I think it has been dismissed too easily. I have three recent methodological papers arguing for its use: Has Macro Progressed?, Reflections on Macroeconometric Modeling, and Information Limits of Aggregate Data. I also show in Household Wealth and Macroeconomic Activity: 2008--2013 that CC models can be used to examine a number of important questions about the 2008--2009 recession, questions that are hard to answer using DSGE or VAR models.
So my suggestion for future macro research is not more bells and whistles on DSGE models, but work specifying and estimating stochastic equations in the CC tradition. Alternative theories can be tested and hopefully progress can be made on building models that explain the data well. We have much more data now and better techniques than we did in 1978, and we should be able to make progress and bring macroeconomics back to its empirical roots.
For those who want more detail, I have gathered all of my research in macro in one place: Macroeconometric Modeling, November 11, 2013.

Sunday, August 23, 2015

'Young Economists Feel They Have to be Very Cautious'

From an interview of Paul Romer in the WSJ:

...Q: What kind of feedback have you received from colleagues in the profession?

A: I tried these ideas on a few people, and the reaction I basically got was “don’t make waves.” As people have had time to react, I’ve been hearing a bit more from people who appreciate me bringing these issues to the forefront. The most interesting feedback is from young economists who say that they feel that they have to be very cautious, and they don’t want to get somebody cross at them. There’s a concern by young economists that if they deviate from what’s acceptable, they’ll get in trouble. That also seemed to me to be a sign of something that is really wrong. Young people are the ones who often come in and say, “You all have been thinking about this the wrong way, here’s a better way to think about it.”

Q: Are there any areas where research or refinements in methodology have brought us closer to understanding the economy?

A: There was an interesting [2013] Nobel prize in [economics], where they gave the prize to people who generally came to very different conclusions about how financial markets work. Gene Fama ... got it for the efficient markets hypothesis. Robert Shiller ... for this view that these markets are not efficient...

It was striking because usually when you give a prize, it’s because in the sciences, you’ve converged to a consensus. ...

Friday, August 21, 2015

'Scientists Do Not Demonize Dissenters. Nor Do They Worship Heroes.'

Paul Romer's latest entry on "mathiness" in economics ends with:

Reactions to Solow’s Choice: ...Politics maps directly onto our innate moral machinery. Faced with any disagreement, our moral systems respond by classifying people into our in-group and the out-group. They encourage us to be loyal to members of the in-group and hostile to members of the out-group. The leaders of an in-group demand deference and respect. In selecting leaders, we prize unwavering conviction.
Science can’t function with the personalization of disagreement that these reactions encourage. The question of whether Joan Robinson is someone who is admired and respected as a scientist has to be separated from the question about whether she was right that economists could reason about rates of return in a model that does not have an explicit time dimension.
The only in-group versus out-group distinction that matters in science is the one that distinguishes people who can live by the norms of science from those who cannot. Feynman integrity is the marker of an insider.
In this group, it is flexibility that commands respect, not unwavering conviction. Clearly articulated disagreement is encouraged. Anyone’s claim is subject to challenge. Someone who is right about A can be wrong about B.
Scientists do not demonize dissenters. Nor do they worship heroes.

[The reference to Joan Robinson is clarified in the full text.]

Monday, August 17, 2015

Stiglitz: Towards a General Theory of Deep Downturns

This is the abstract, introduction, and final section of a recent paper by Joe Stiglitz on theoretical models of deep depressions (as he notes, it's "an extension of the Presidential Address to the International Economic Association"):

Towards a General Theory of Deep Downturns, by Joseph E. Stiglitz, NBER Working Paper No. 21444, August 2015: Abstract This paper, an extension of the Presidential Address to the International Economic Association, evaluates alternative strands of macro-economics in terms of the three basic questions posed by deep downturns: What is the source of large perturbations? How can we explain the magnitude of volatility? How do we explain persistence? The paper argues that while real business cycles and New Keynesian theories with nominal rigidities may help explain certain historical episodes, alternative strands of New Keynesian economics focusing on financial market imperfections, credit, and real rigidities provides a more convincing interpretation of deep downturns, such as the Great Depression and the Great Recession, giving a more plausible explanation of the origins of downturns, their depth and duration. Since excessive credit expansions have preceded many deep downturns, particularly important is an understanding of finance, the credit creation process and banking, which in a modern economy are markedly different from the way envisioned in more traditional models.
Introduction
The world has been plagued by episodic deep downturns. The crisis that began in 2008 in the United States was the most recent, the deepest and longest in three quarters of a century. It came in spite of alleged “better” knowledge of how our economic system works, and belief among many that we had put economic fluctuations behind us. Our economic leaders touted the achievement of the Great Moderation.[2] As it turned out, belief in those models actually contributed to the crisis. It was the assumption that markets were efficient and self-regulating and that economic actors had the ability and incentives to manage their own risks that had led to the belief that self-regulation was all that was required to ensure that the financial system worked well, and that there was no need to worry about a bubble. The idea that the economy could, through diversification, effectively eliminate risk contributed to complacency — even after it was evident that there had been a bubble. Indeed, even after the bubble broke, Bernanke could boast that the risks were contained.[3] These beliefs were supported by (pre-crisis) DSGE models — models which may have done well in more normal times, but had little to say about crises. Of course, almost any “decent” model would do reasonably well in normal times. And it mattered little if, in normal times, one model did a slightly better job in predicting next quarter’s growth. What matters is predicting — and preventing — crises, episodes in which there is an enormous loss in well-being. These models did not see the crisis coming, and they had given confidence to our policy makers that, so long as inflation was contained — and monetary authorities boasted that they had done this — the economy would perform well. At best, they can be thought of as (borrowing the term from Guzman (2014)) “models of the Great Moderation,” predicting “well” so long as nothing unusual happens. More generally, the DSGE models have done a poor job explaining the actual frequency of crises.[4]
Of course, deep downturns have marked capitalist economies since the beginning. It took enormous hubris to believe that the economic forces which had given rise to crises in the past were either not present, or had been tamed, through sound monetary and fiscal policy.[5] It took even greater hubris given that in many countries conservatives had succeeded in dismantling the regulatory regimes and automatic stabilizers that had helped prevent crises since the Great Depression. It is noteworthy that my teacher, Charles Kindleberger, in his great study of the booms and panics that afflicted market economies over the past several hundred years had noted similar hubris exhibited in earlier crises. (Kindleberger, 1978)
Those who attempted to defend the failed economic models and the policies which were derived from them suggested that no model could (or should) predict well a “once in a hundred year flood.” But it was not just a hundred year flood — crises have become common. It was not just something that had happened to the economy. The crisis was man-made — created by the economic system. Clearly, something is wrong with the models.
Studying crises is important, not just to prevent these calamities and to understand how to respond to them — though I do believe that the same inadequate models that failed to predict the crisis also failed in providing adequate responses. (Although those in the US Administration boast about having prevented another Great Depression, I believe the downturn was certainly far longer, and probably far deeper, than it needed to have been.) I also believe understanding the dynamics of crises can provide us insight into the behavior of our economic system in less extreme times.
This lecture consists of three parts. In the first, I will outline the three basic questions posed by deep downturns. In the second, I will sketch the three alternative approaches that have competed with each other over the past three decades, suggesting that one is a far better basis for future research than the other two. The final section will center on one aspect of that third approach that I believe is crucial — credit. I focus on the capitalist economy as a credit economy, and how viewing it in this way changes our understanding of the financial system and monetary policy. ...

He concludes with:

IV. The crisis in economics
The 2008 crisis was not only a crisis in the economy, but it was also a crisis for economics — or at least that should have been the case. As we have noted, the standard models didn’t do very well. The criticism is not just that the models did not anticipate or predict the crisis (even shortly before it occurred); they did not contemplate the possibility of a crisis, or at least a crisis of this sort. Because markets were supposed to be efficient, there weren’t supposed to be bubbles. The shocks to the economy were supposed to be exogenous: this one was created by the market itself. Thus, the standard model said the crisis couldn’t or wouldn’t happen; and the standard model had no insights into what generated it.
Not surprisingly, as we again have noted, the standard models provided inadequate guidance on how to respond. Even after the bubble broke, it was argued that diversification of risk meant that the macroeconomic consequences would be limited. The standard theory also has had little to say about why the downturn has been so prolonged: Years after the onset of the crisis, large parts of the world are operating well below their potential. In some countries and in some dimension, the downturn is as bad or worse than the Great Depression. Moreover, there is a risk of significant hysteresis effects from protracted unemployment, especially of youth.
The Real Business Cycle and New Keynesian Theories got off to a bad start. They originated out of work undertaken in the 1970s attempting to reconcile the two seemingly distant branches of economics, macro-economics, centering on explaining the major market failure of unemployment, and microeconomics, the centerpiece of which was the Fundamental Theorems of Welfare Economics, demonstrating the efficiency of markets.[66] Real Business Cycle Theory (and its predecessor, New Classical Economics) took one route: using the assumptions of standard micro-economics to construct an analysis of the aggregative behavior of the economy. In doing so, they left Hamlet out of the play: almost by assumption unemployment and other market failures didn’t exist. The timing of their work couldn’t have been worse: for it was just around the same time that economists developed alternative micro-theories, based on asymmetric information, game theory, and behavioral economics, which provided better explanations of a wide range of micro-behavior than did the traditional theory on which the “new macro-economics” was being constructed. At the same time, Sonnenschein (1972) and Mantel (1974) showed that the standard theory provided essentially no structure for macro-economics — essentially any demand or supply function could have been generated by a set of diverse rational consumers. It was the unrealistic assumption of the representative agent that gave theoretical structure to the macro-economic models that were being developed. (As we noted, New Keynesian DSGE models were but a simple variant of these Real Business Cycles, assuming nominal wage and price rigidities — with explanations, we have suggested, that were hardly persuasive.)
There are alternative models to both Real Business Cycles and the New Keynesian DSGE models that provide better insights into the functioning of the macroeconomy, and are more consistent with micro-behavior, with new developments of micro-economics, with what has happened in this and other deep downturns. While these new models differ from the older ones in a multitude of ways, at the center of these models is a wide variety of financial market imperfections and a deep analysis of the process of credit creation. These models provide alternative (and I believe better) insights into what kinds of macroeconomic policies would restore the economy to prosperity and maintain macro-stability.
This lecture has attempted to sketch some elements of these alternative approaches. There is a rich research agenda ahead.

Tuesday, August 11, 2015

Macroeconomics: The Roads Not Yet Taken

My editor suggested that I might want to write about an article in New Scientist, After the crash, can biologists fix economics?, so I did:

Macroeconomics: The Roads Not Yet Taken: Anyone who is even vaguely familiar with economics knows that modern macroeconomic models did not fare well before and during the Great Recession. For example, when the recession hit many of us reached into the policy response toolkit provided by modern macro models and came up mostly empty.
The problem was that modern models were built to explain a period of mild economic fluctuations known as the Great Moderation, and while the models provided very good policy advice in that setting they had little to offer in response to major economic downturns. That changed to some extent as the recession dragged on and modern models were quickly amended to incorporate important missing elements, but even then the policy advice was far from satisfactory and mostly echoed what we already knew from the “old-fashioned” Keynesian model. (The Keynesian model was built to answer the important policy questions that come with major economic downturns, so it is not surprising that amended modern models reached many of the same conclusions.)
How can we fix modern models? ...

The Macroeconomic Divide

Paul Krugman:

Trash Talk and the Macroeconomic Divide: ... In Lucas and Sargent, much is made of stagflation; the coexistence of inflation and high unemployment is their main, indeed pretty much only, piece of evidence that all of Keynesian economics is useless. That was wrong, but never mind; how did they respond in the face of strong evidence that their own approach didn’t work?
Such evidence wasn’t long in coming. In the early 1980s the Federal Reserve sharply tightened monetary policy; it did so openly, with much public discussion, and anyone who opened a newspaper should have been aware of what was happening. The clear implication of Lucas-type models was that such an announced, well-understood monetary change should have had no real effect, being reflected only in the price level.
In fact, however, there was a very severe recession — and a dramatic recovery once the Fed, again quite openly, shifted toward monetary expansion.
These events definitely showed that Lucas-type models were wrong, and also that anticipated monetary shocks have real effects. But there was no reconsideration on the part of the freshwater economists; my guess is that they were in part trapped by their earlier trash-talking. Instead, they plunged into real business cycle theory (which had no explanation for the obvious real effects of Fed policy) and shut themselves off from outside ideas. ...

Tuesday, August 04, 2015

'Sarcasm and Science'

On the road again, so just a couple of quick posts. This is Paul Krugman:

Sarcasm and Science: Paul Romer continues his discussion of the wrong turn of freshwater economics, responding in part to my own entry, and makes a surprising suggestion — that Lucas and his followers were driven into their adversarial style by Robert Solow’s sarcasm...
Now, it’s true that people can get remarkably bent out of shape at the suggestion that they’re being silly and foolish. ...
But Romer’s account of the great wrong turn still sounds much too contingent to me...
At least as I perceived it then — and remember, I was a grad student as much of this was going on — there were two other big factors.
First, there was a political component. Equilibrium business cycle theory denied that fiscal or monetary policy could play a useful role in managing the economy, and this was a very appealing conclusion on one side of the political spectrum. This surely was a big reason the freshwater school immediately declared total victory over Keynes well before its approach had been properly vetted, and why it could not back down when the vetting actually took place and the doctrine was found wanting.
Second — and this may be less apparent to non-economists — there was the toolkit factor. Lucas-type models introduced a new set of modeling and mathematical tools — tools that required a significant investment of time and effort to learn, but which, once learned, let you impress everyone with your technical proficiency. For those who had made that investment, there was a real incentive to insist that models using those tools, and only models using those tools, were the way to go in all future research. ...
And of course at this point all of these factors have been greatly reinforced by the law of diminishing disciples: Lucas’s intellectual grandchildren are utterly unable to consider the possibility that they might be on the wrong track.

Sunday, August 02, 2015

'Freshwater’s Wrong Turn'

Paul Krugman follows up on Paul Romer's latest attack on "mathiness":

Freshwater’s Wrong Turn (Wonkish): Paul Romer has been writing a series of posts on the problem he calls “mathiness”, in which economists write down fairly hard-to-understand mathematical models accompanied by verbal claims that don’t actually match what’s going on in the math. Most recently, he has been recounting the pushback he’s getting from freshwater macro types, who see him as allying himself with evil people like me — whereas he sees them as having turned away from science toward a legalistic, adversarial form of pleading.
You can guess where I stand on this. But in his latest, he notes some of the freshwater types appealing to their glorious past, claiming that Robert Lucas in particular has a record of intellectual transparency that should insulate him from criticism now. PR replies that Lucas once was like that, but no longer, and asks what happened.
Well, I’m pretty sure I know the answer. ...

It's hard to do an extract capturing all the points, so you'll likely want to read the full post, but in summary:

So what happened to freshwater, I’d argue, is that a movement that started by doing interesting work was corrupted by its early hubris; the braggadocio and trash-talking of the 1970s left its leaders unable to confront their intellectual problems, and sent them off on the path Paul now finds so troubling.

Recent tweets, email, etc. in response to posts I've done on mathiness reinforce just how unwilling many are to confront their tribalism. In the past, I've blamed the problems in macro on, in part, the sociology within the profession (leading to a less than scientific approach to problems as each side plays the advocacy game) and nothing that has happened lately has altered that view.

Saturday, August 01, 2015

'Microfoundations 2.0?'

Daniel Little:

Microfoundations 2.0?: The idea that hypotheses about social structures and forces require microfoundations has been around for at least 40 years. Maarten Janssen’s New Palgrave essay on microfoundations documents the history of the concept in economics; link. E. Roy Weintraub was among the first to emphasize the term within economics, with his 1979 Microfoundations: The Compatibility of Microeconomics and Macroeconomics. During the early 1980s the contributors to analytical Marxism used the idea to attempt to give greater grip to some of Marx's key explanations (falling rate of profit, industrial reserve army, tendency towards crisis). Several such strategies are represented in John Roemer's Analytical Marxism. My own The Scientific Marx (1986) and Varieties of Social Explanation (1991) took up the topic in detail and relied on it as a basic tenet of social research strategy. The concept is strongly compatible with Jon Elster's approach to social explanation in Nuts and Bolts for the Social Sciences (1989), though the term itself does not appear in this book or in the 2007 revised edition.

Here is Janssen's description in the New Palgrave of the idea of microfoundations in economics:

The quest to understand microfoundations is an effort to understand aggregate economic phenomena in terms of the behavior of individual economic entities and their interactions. These interactions can involve both market and non-market interactions.
In The Scientific Marx the idea was formulated along these lines:
Marxist social scientists have recently argued, however, that macro-explanations stand in need of microfoundations; detailed accounts of the pathways by which macro-level social patterns come about. (1986: 127)

The requirement of microfoundations is both metaphysical -- our statements about the social world need to admit of microfoundations -- and methodological -- it suggests a research strategy along the lines of Coleman's boat (link). This is a strategy of disaggregation, a "dissecting" strategy, and a non-threatening strategy of reduction. (I am thinking here of the very sensible ideas about the scientific status of reduction advanced in William Wimsatt's "Reductive Explanation: A Functional Account"; link).

The emphasis on the need for microfoundations is a very logical implication of the position of "ontological individualism" -- the idea that social entities and powers depend upon facts about individual actors in social interactions and nothing else. (My own version of this idea is the notion of methodological localism; link.) It is unsupportable to postulate disembodied social entities, powers, or properties for which we cannot imagine an individual-level substrate. So it is natural to infer that claims about social entities need to be accompanied in some fashion by an account of how they are embodied at the individual level; and this is a call for microfoundations. (As noted in an earlier post, Brian Epstein has mounted a very challenging argument against ontological individualism; link.)
Another reason that the microfoundations idea is appealing is that it is a very natural way of formulating a core scientific question about the social world: "How does it work?" To provide microfoundations for a high-level social process or structure (for example, the falling rate of profit), we are looking for a set of mechanisms at the level of a set of actors within a set of social arrangements that result in the observed social-level fact. A call for microfoundations is a call for mechanisms at a lower level, answering the question, "How does this process work?"

In fact, the demand for microfoundations appears to be analogous to the question, why is glass transparent? We want to know what it is about the substrate at the individual level that constitutes the macro-fact of glass transmitting light. Organization type A is prone to normal accidents. What is it about the circumstances and actions of individuals in A-organizations that increases the likelihood of normal accidents?

One reason why the microfoundations concept was specifically appealing in application to Marx's social theories in the 1970s was the fact that great advances were being made in the field of collective action theory. Then-current interpretations of Marx's theories were couched at a highly structural level; but it seemed clear that it was necessary to identify the processes through which class interest, class conflict, ideologies, or states emerged in concrete terms at the individual level. (This is one reason I found E. P. Thompson's The Making of the English Working Class (1966) so enlightening.) Advances in game theory (assurance games, prisoners' dilemmas), Mancur Olson's demonstration of the gap between group interest and individual interest in The Logic of Collective Action: Public Goods and the Theory of Groups (1965), Thomas Schelling's brilliant unpacking of puzzling collective behavior onto underlying individual behavior in Micromotives and Macrobehavior (1978), Russell Hardin's further exposition of collective action problems in Collective Action (1982), and Robert Axelrod's discovery of the underlying individual behaviors that produce cooperation in The Evolution of Cooperation (1984) provided social scientists with new tools for reconstructing complex collective phenomena based on simple assumptions about individual actors. These were very concrete analytical resources that promised to help further explanations of complex social behavior. They provided a degree of confidence that important sociological questions could be addressed using a microfoundations framework.

There are several important recent challenges to aspects of the microfoundations approach, however.

So what are the recent challenges? First, there is the idea that social properties are sometimes emergent in a strong sense: not derivable from facts about the components. This would seem to imply that microfoundations are not possible for such properties.

Second, there is the idea that some meso entities have stable causal properties that do not require explicit microfoundations in order to be scientifically useful. (An example would be Perrow's claim that certain forms of organizations are more conducive to normal accidents than others.) If we take this idea very seriously, then perhaps microfoundations are not crucial in such theories.

Third, there is the idea that meso entities may sometimes exert downward causation: they may influence events in the substrate which in turn influence other meso states, implying that there will be some meso-level outcomes for which there cannot be microfoundations exclusively located at the substrate level.

All of this implies that we need to take a fresh look at the theory of microfoundations. Is there a role for this concept in a research metaphysics in which only a very weak version of ontological individualism is postulated; where we give some degree of autonomy to meso-level causes; where we countenance either a weak or strong claim of emergence; and where we admit of full downward causation from some meso-level structures to patterns of individual behavior?

In one sense my own thinking about microfoundations has already incorporated some of these concerns; I've arrived at "microfoundations 1.1" in my own formulations. In particular, I have put aside the idea that explanations must incorporate microfoundations and instead embraced the weaker requirement of availability of microfoundations (link). Essentially I relaxed the requirement to stipulate only that we must be confident that microfoundations exist, without actually producing them. And I've relied on the idea of "relative explanatory autonomy" to excuse the sociologist from the need to reproduce the microfoundations underlying the claim he or she advances (link).

But is this enough? There are weaker positions that could serve to replace the MF thesis. For now, the question is this: does the concept of microfoundations continue to do important work in the meta-theory of the social sciences?

I've talked about this many times, but it's worth making this point about aggregating from individual agents to macroeconomic aggregates once again (it deals, for one, with the emergent properties objection above -- the aggregation problem is the reason representative agent models are used, since they seem to avoid the issue). This is from Kevin Hoover:

... Exact aggregation requires that utility functions be identical and homothetic … Translated into behavioral terms, it requires that every agent subject to aggregation have the same preferences (you must share the same taste for chocolate with Warren Buffett) and those preferences must be the same except for a scale factor (Warren Buffett with an income of $10 billion per year must consume one million times as much chocolate as Warren Buffett with an income of $10,000 per year). This is not the world that we live in. The Sonnenschein-Mantel-Debreu theorem shows theoretically that, in an idealized general-equilibrium model in which each individual agent has a regularly specified preference function, aggregate excess demand functions inherit only a few of the regularity properties of the underlying individual excess demand functions: continuity, homogeneity of degree zero (i.e., the independence of demand from simple rescalings of all prices), Walras’s law (i.e., the sum of the value of all excess demands is zero), and that demand rises as price falls (i.e., that demand curves, ceteris paribus income effects, are downward sloping) … These regularity conditions are very weak and put so few restrictions on aggregate relationships that the theorem is sometimes called “the anything goes theorem.”
The importance of the theorem for the representative-agent model is that it cuts off any facile analogy between even empirically well-established individual preferences and preferences that might be assigned to a representative agent to rationalize observed aggregate demand. The theorem establishes that, even in the most favorable case, there is a conceptual chasm between the microeconomic analysis and the macroeconomic analysis. The reasoning of the representative-agent modelers would be analogous to a physicist attempting to model the macro-behavior of a gas by treating it as a single, room-size molecule. The theorem demonstrates that there is no warrant for the notion that the behavior of the aggregate is just the behavior of the individual writ large: the interactions among the individual agents, even in the most idealized model, shape in an exceedingly complex way the behavior of the aggregate economy. Not only does the representative-agent model fail to provide an analysis of those interactions, but it seems likely that they will defy an analysis that insists on starting with the individual, and it is certain that no one knows at this point how to begin to provide an empirically relevant analysis on that basis.
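To make Hoover's aggregation point concrete, here is a minimal sketch in standard notation (mine, not his). Identical homothetic preferences make each agent's demand linear in own income, so only total income matters for aggregate demand:

\[
x_{ij}(p, m_i) = a_j(p)\, m_i \qquad\Longrightarrow\qquad X_j(p) = \sum_i a_j(p)\, m_i = a_j(p)\, M .
\]

Aggregate demand then depends only on total income M, not its distribution, and behaves like the demand of a single "representative" consumer. Drop either assumption and individual demands are no longer linear in income with common coefficients, the sum does not collapse, and the distribution of income enters aggregate demand -- which is exactly where the Sonnenschein-Mantel-Debreu result bites.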

Friday, July 31, 2015

Paul Romer: Freshwater Feedback on Mathiness

More from Paul Romer:

Freshwater Feedback Part 1: “Everybody does it”: You can boil my claim about mathiness down to two assertions:

1. Economist N did X.
2. X is wrong because it undermines the scientific method.

#1 is a positive assertion, a statement about “what is …” #2 is a normative assertion, a statement about “what ought …” As you would expect from an economist, the normative assertion in #2 is based on what I thought would be a shared premise: that the scientific method is a better way to determine what is true about economic activity than any alternative method, and that knowing what is true is valuable.

In conversations with economists who are sympathetic to the freshwater economists I singled out for criticism in my AEA paper on mathiness, it has become clear that freshwater economists do not share this premise. What I did not anticipate was their assertion that economists do not follow the scientific method, so it is not realistic or relevant to make normative statements of the form “we ought to behave like scientists.”

In a series of three posts that summarize what I have learned since publishing that paper, I will try to stick to positive assertions, that is assertions about the facts, concerning this difference between the premises that freshwater economists take for granted and the premises that I and other economists take for granted.

In my conversations, the freshwater sympathizers generally have not disagreed with my characterization of the facts in assertion #1–that specific freshwater economists did X. In their response, two themes recur:

a) Yes, but everybody does X; that is how the adversarial method works.
b) By selectively expressing disapproval of this behavior by the freshwater economists that you name, you, Paul, are doing something wrong because you are helping “those guys.”

In the rest of this post, I’ll address response a). In a subsequent post, I’ll address response b). Then in a third post, I’ll observe that in my AEA paper, I also criticized a paper by Piketty and Zucman, who are not freshwater economists. The response I heard back from them was very different from the response from the freshwater economists. In short, Piketty and Zucman disagreed with my statement that they did X, but they did not dispute my assertion that X would be wrong because it would be a violation of the scientific method.

Together, the evidence I summarize in these three posts suggests that freshwater economists differ sharply from other economists. This evidence strengthens my belief that the fundamental divide here is between the norms of political discourse and the norms of scientific discourse. Lawyers and politicians both engage in a version of the adversarial method, but they differ in another crucial way. In the suggestive terminology introduced by Jon Haidt in his book The Righteous Mind, lawyers are selfish, but politicians are groupish. What is distinctive about the freshwater economists is that their groupishness depends on a narrow definition of group that sharply separates them from all other economists. One unfortunate result of this narrow groupishness may be that the freshwater economists do not know the facts about how most economists actually behave. ...[continue]...

Wednesday, July 29, 2015

'Using Math to Obfuscate — Observations from Finance'

More from Paul Romer on "mathiness" -- this time the use of math in finance to obfuscate communication with regulators:

Using Math to Obfuscate — Observations from Finance: The usual narrative suggests that the new mathematical tools of modern finance were like the wings that Daedalus gave Icarus. The people who put these tools to work soared too high and crashed.
In two posts, here and here, Tim Johnson notes that two government investigations (one in the UK, the other in the US) tell a different tale. People in finance used math to hide what they were doing.
One of the premises I used to take for granted was that an argument presented using math would be more precise than the corresponding argument presented using words. Under this model, words from natural language are more flexible than math. They let us refer to concepts we do not yet fully understand. They are like rough prototypes. Then as our understanding grows, we use math to give words more precise definitions and meanings. ...
I assumed that because I was trying to use math to reason more precisely and to communicate more clearly, everyone would use it the same way. I knew that math, like words, could be used to confuse a reader, but I assumed that all of us who used math operated in a reputational equilibrium where obfuscating would be costly. I expected that in this equilibrium, we would see only the use of math to clarify and lend precision.
Unfortunately, I was wrong even about the equilibrium in the academic world, where mathiness is in fact used to obfuscate. In the world of for-profit finance, the return to obfuscation in communication with regulators is much higher, so there is every reason to expect that mathiness would be used liberally, particularly in mandated disclosures. ...
We should expect that there will be mistakes in math, just as there are mistakes in computer code. We should also expect some inaccuracies in the verbal claims about what the math says. A small number of errors of either type should not be a cause for alarm, particularly if the math is presented transparently so that readers can check the math itself and check whether it aligns with the words. In contrast, either opaque math or ambiguous verbal statements about the math should be grounds for suspicion. ...
Mathiness–exposition characterized by a systematic divergence between what the words say and what the math implies–should be rejected outright.

Sunday, July 19, 2015

The Rivals (Samuelson and Friedman)

This is by David Warsh:

The Rivals, Economic Principals: When Keynes died, in April 1946, The Times of London gave him the best farewell since Nelson after Trafalgar: “To find an economist of comparable influence one would have to go back to Adam Smith.” A few years later, Alvin Hansen, of Harvard University, Keynes’ leading disciple in the United States, wrote, “It may be a little too early to claim that, along with Darwin’s Origin of Species and Marx’s Capital, The General Theory is one of the most significant books which have appeared in the last hundred years. … But… it continues to gain in importance.”
In fact, the influence of Keynes’ book, as opposed to the vision of “macroeconomics” at the heart of it, and the penumbra of fame surrounding it, already had begun its downward arc. Civilians continued to read the book, more for its often sparkling prose than for the clarity of its argument. Among economists, intermediaries and translators had emerged in various communities to explain the insights the great man had sought to convey. Speaking of the group in Cambridge, Massachusetts, Robert Solow put it this way, many years later: “We learned not as much from it – it was…almost unreadable – as from a number of explanatory articles that appeared on all our graduate school reading lists.”
Instead it was another book that ushered in an era of economics very different from the age before. Foundations of Economic Analysis, by Paul A. Samuelson, important parts of it written as much as ten years before, appeared in 1947. “Mathematics is a Language,” proclaimed its frontispiece; equations dominated nearly every page. “It might be still too early to tell how the discoveries of the 1930s would pan out,” Samuelson wrote delicately in the introduction, but their value could be ascertained only by expressing them in mathematical models whose properties could be thoroughly explored and tested. “The laborious literary working-over of essentially simple mathematical concepts such as is characteristic of much of modern economic theory is not only unrewarding from the standpoint of advancing the science, but involves as well mental gymnastics of a particularly depraved type.”
Foundations had won a prize as a dissertation, so Harvard University was required to publish it as a book. In Samuelson’s telling, the department chairman had to be forced to agree to printing a thousand copies, dragged his feet, and then permitted its laboriously hand-set plates to be melted down for other uses after 887 copies were run off. Thus Foundations couldn’t be revised in subsequent printings, until a humbled Harvard University Press republished an “enlarged edition” with a new introduction and a mathematical appendix in 1983. When Samuelson biographer Roger Backhouse went through the various archival records, he concluded that the delay could be explained by production difficulties and the recycling of the lead type amid postwar exigencies at the Press.
It didn’t matter. Within the profession, Samuelson soon would win the day.
The “new” economics that he represented – the earliest developments had commenced in the years after World War I – conquered the profession, high and low. The next year Samuelson published an introductory textbook, Economics, to inculcate the young. Macroeconomic theory was to be put to work to damp the business cycle and, especially, avoid the tragedy of another Great Depression. The new approach swiftly attracted a community away from alternative modes of inquiry, in the expectation that it would yield new solutions to the pressing problem of depression-prevention. Alfred Marshall’s Principles of Economics eventually would be swept completely off the table. Foundations was a paradigm in the Kuhnian sense.
At the very zenith of Samuelson’s success, another sort of book appeared, in 1962, A Monetary History of the United States, 1869-1960, by Milton Friedman and Anna Schwartz, published by the National Bureau of Economic Research. At first glance, the two books had nothing to do with one another. A Monetary History harkened back to approaches that had been displaced by Samuelsonian methods – “hypotheses” instead of theorems; charts instead of models; narrative, not econometric analytics. The volume did little to change the language that Samuelson had established. Indeed, economists at the University of Chicago, Friedman’s stronghold, were on the verge of adapting a new, still-higher mathematical style to the general equilibrium approach that Samuelson had pioneered.
Yet one interpretation of the relationship between the price system and the Daedalean wings that A Monetary History contained was sufficiently striking as to reopen a question thought to have been settled. A chapter of their book, “The Great Contraction,” contained an interpretation of the origins of the Great Depression that gradually came to overshadow the rest. As J. Daniel Hammond has written,
The “Great Contraction” marked a watershed in thinking about the greatest economic calamity in modern times. Until Friedman and Schwartz provoked the interest of economists by rehabilitating monetary history and theory, neither economic theorists nor economic historians devoted as much attention to the Depression as historians.
So you could say that some part of the basic agenda of the next fifty years was ordained by the rivalry that began in the hour that Samuelson and Friedman became aware of each other, perhaps in the autumn of 1932, when both turned up at the recently-completed Social Science Research Building of the University of Chicago, at the bottom of the Great Depression. Excellent historians, with access to extensive archives, have been working on both men’s lives and work: Hammond, of Wake Forest University, has largely completed his project on Friedman; Backhouse, of the University of Birmingham, is finishing a volume on Samuelson’s early years. Neither author has yet come across a frank recollection by either man of those first few meetings. Let’s hope one or more second-hand accounts turn up in the papers of the many men and women who knew them then. When I asked Friedman about their relationship in 2005, he deferred to his wife, who, somewhat uncomfortably, mentioned a differential in privilege. I lacked the temerity to ask Samuelson directly the last couple of times we talked; he clearly didn’t enjoy discussing it.
Biography is no substitute for history, much less for theory and history of thought, and journalism is, at best, only a provisional substitute for biography. But one way of understanding what happened in economics in the twentieth century is to view it as an argument between Samuelson and Friedman that lasted nearly eighty years, until one aspect of it, at least, was resolved by the financial crisis of 2008. The departments of economics they founded in Cambridge and Chicago, headquarters in the long wars between the Keynesians and the monetarists, came to be the Athens and Sparta of their day. ...[continue reading]...

[There is much, much more in the full post.]

Tuesday, July 07, 2015

'Why Germany Wants Rid of Greece'

Simon Wren-Lewis:

Why Germany wants rid of Greece: When I recently visited Berlin, it quickly became clear the extent to which Germany had created a fantasy story about Greece. It was an image of Greeks as a privileged and lazy people, who kept on taking ‘bailouts’ while refusing to do anything to correct their situation. I heard this fantasy from talking to people who were otherwise well informed and knowledgeable about economics.
So powerful has this fantasy become, it is now driving German policy (and policy in a few other countries as well) in totally irrational ways. ... What is driving Germany’s desperate need to rid itself of the Greek problem? ...
 Germany is a country where the ideas of Keynes, and therefore mainstream macroeconomics in the rest of the world, are considered profoundly wrong and are described as ‘Anglo-Saxon economics’. Greece then becomes a kind of experiment to see which is right: the German view, or ‘Anglo-Saxon economics’.
The results of the experiment are not to Germany’s liking. ... Confronting this reality has been too much for Germany. So instead it has created its fantasy, a fantasy that allows it to cast its failed experiment to one side, blaming the character of the patient.
The only thing particularly German about this process is the minority status of Keynesian economics within German economic policy advice. In the past I have drawn parallels between what is going on here and the much more universal tendency for poverty to be explained in terms of the personal failings of the poor. These attempts to deflect criticism of economic systems are encouraged by political interests and a media that supports them, as we are currently seeing in the UK. So much easier to pretend that the problems of Greece lie with its people, or culture, or politicians, or its resistance to particular ‘structural reforms’, than to admit that Greece’s real problem is of your making.

Friday, July 03, 2015

'Behavioral Economics is Rational After All'

Roger Farmer:

Behavioral Economics is Rational After All: There are some deep and interesting issues involved in the debate over behavioral economics. ...
My point here is that neoclassical economics can absorb the criticisms of the behaviourists without a major shift in its underlying assumptions. The 'anomalies' pointed out by psychologists are completely consistent with maximizing behaviour, as long as we do not impose any assumptions on the form of the utility function defined over goods that are dated and indexed by state of nature.
There is a deeper, more fundamental critique. If we assert that the form of the utility function is influenced by 'persuasion', then we lose the intellectual foundation for much of welfare economics. That is a much more interesting project that requires us to rethink what we mean by individualism. ...

Sunday, June 14, 2015

'What Assumptions Matter for Growth Theory?'

Dietz Vollrath explains the "mathiness" debate (and also Euler's theorem in a part of the post I left out). Glad he's interpreting Romer -- it's very helpful:

What Assumptions Matter for Growth Theory?: The whole “mathiness” debate that Paul Romer started tumbled onwards this week... I was able to get a little clarity in this whole “price-taking” versus “market power” part of the debate. I’ll circle back to the actual “mathiness” issue at the end of the post.
There are really two questions we are dealing with here. First, do inputs to production earn their marginal product? Second, do the owners of non-rival ideas have market power or not? We can answer the first without having to answer the second.
Just to refresh, a production function tells us that output is determined by some combination of non-rival inputs and rival inputs. Non-rival inputs are things like ideas that can be used by many firms or people at once without limiting the use by others. Think of blueprints. Rival inputs are things that can only be used by one person or firm at a time. Think of nails. The income earned by both rival and non-rival inputs has to add up to total output.
Okay, given all that setup, here are three statements that could be true.
  1. Output is constant returns to scale in rival inputs
  2. Non-rival inputs receive some portion of output
  3. Rival inputs receive output equal to their marginal product
Pick two.
Romer’s argument is that (1) and (2) are true. (1) he asserts through replication arguments, like my example of replicating Earth. (2) he takes as an empirical fact. Therefore, (3) cannot be true. If the owners of non-rival inputs are compensated in any way, then it is necessarily true that rival inputs earn less than their marginal product. Notice that I don’t need to say anything about how the non-rival inputs are compensated here. But if they earn anything, then from Romer’s assumptions the rival inputs cannot be earning their marginal product.
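The step that links these statements, which Vollrath works through in the part of the post omitted here, is Euler's theorem for functions that are homogeneous of degree one. A minimal statement in standard notation (my summary, not a quote from the post): if output is constant returns to scale in the rival inputs $X_1, \dots, X_n$ given the non-rival input $A$,

\[
F(A, \lambda X_1, \dots, \lambda X_n) = \lambda F(A, X_1, \dots, X_n) \quad \text{for all } \lambda > 0,
\]

then differentiating with respect to $\lambda$ and evaluating at $\lambda = 1$ gives

\[
\sum_{i=1}^{n} X_i \, \frac{\partial F}{\partial X_i} = F(A, X_1, \dots, X_n) = Y .
\]

So if every rival input is paid its marginal product, those payments exhaust all of output, leaving nothing to compensate the non-rival input A; that is why (1), (2), and (3) cannot all hold at once.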
Different authors have made different choices than Romer. McGrattan and Prescott abandoned (1) in favor of (2) and (3). Boldrin and Levine dropped (2) and accepted (1) and (3). Romer’s issue with these papers is that (1) and (2) are clearly true, so writing down a model that abandons one of these assumptions gives you a model that makes no sense in describing growth. ...
The “mathiness” comes from authors trying to elide the fact that they are abandoning (1) or (2). ...

[There's a lot more in the full post. Also, Romer comments on Vollrath here.]

Saturday, June 06, 2015

'A Crisis at the Edge of Physics'

Seems like much the same can be said about modern macroeconomics (except perhaps the "given the field its credibility" part):

A Crisis at the Edge of Physics, by Adam Frank and Marcelo Gleiser, NY Times: Do physicists need empirical evidence to confirm their theories?
You may think that the answer is an obvious yes, experimental confirmation being the very heart of science. But a growing controversy at the frontiers of physics and cosmology suggests that the situation is not so simple.
A few months ago in the journal Nature, two leading researchers, George Ellis and Joseph Silk, published a controversial piece called “Scientific Method: Defend the Integrity of Physics.” They criticized a newfound willingness among some scientists to explicitly set aside the need for experimental confirmation of today’s most ambitious cosmic theories — so long as those theories are “sufficiently elegant and explanatory.” Despite working at the cutting edge of knowledge, such scientists are, for Professors Ellis and Silk, “breaking with centuries of philosophical tradition of defining scientific knowledge as empirical.”
Whether or not you agree with them, the professors have identified a mounting concern in fundamental physics: Today, our most ambitious science can seem at odds with the empirical methodology that has historically given the field its credibility. ...

Wednesday, June 03, 2015

'Coordination Equilibrium and Price Stickiness'

This is the introduction to a relatively new working paper by Cigdem Gizem Korpeoglu and Stephen Spear (sent in response to my comment that I've been disappointed with the development of new alternatives to the standard NK-DSGE models):

Coordination Equilibrium and Price Stickiness, by Cigdem Gizem Korpeoglu (University College London) and Stephen E. Spear (Carnegie Mellon):
1 Introduction
Contemporary macroeconomic theory rests on the three pillars of imperfect competition, nominal price rigidity, and strategic complementarity. Of these three, nominal price rigidity (aka price stickiness) has been the most important. The stickiness of prices is a well-established empirical fact, with early observations about the phenomenon going back to Alfred Marshall. Because the friction of price stickiness cannot occur in markets with perfect competition, modern micro-founded models (New Keynesian or NK models, for short) have been forced to abandon the standard Arrow-Debreu paradigm of perfect competition in favor of models where agents have market power and set market prices for their own goods. Strategic complementarity enters the picture as a mechanism for explaining the kinds of coordination failures that lead to sustained slumps like the Great Depression or the aftermath of the 2008 financial crisis. Early work by Cooper and John laid out the importance of these three features for macroeconomics, and follow-on work by Ball and Romer showed that failure to coordinate on price adjustments could itself generate strategic complementarity, effectively unifying two of the three pillars.
Not surprisingly, the Ball and Romer work was based on earlier work by a number of authors (see Mankiw and Romer's New Keynesian Economics) which used the model of Dixit and Stiglitz of monopolistic competition as the basis for price-setting behavior in a general equilibrium setting, combined with the idea of menu costs -- literally the cost of posting and communicating price changes -- and exogenously-specified adjustment time staggering to provide the friction(s) leading to nominal rigidity. While these models perform well in explaining aspects of the business cycle, they have only recently been subjected to what one would characterize as thorough empirical testing, because of the scarcity of good data on how prices actually change. This has changed in the past decade as new sources of data on price dynamics have become available, and as computational power capable of teasing out what might be called the "fine structure" of these dynamics has emerged. On a different dimension, the overall suitability of monopolistic competition as the appropriate form of market imperfection to use as the foundation of the new macro models has been largely unquestioned, though we believe this is largely due to the tractability of the Dixit-Stiglitz model relative to other models of imperfect competition generated by large fixed costs or increasing returns to scale not due to specialization.
In this paper, we examine both of these underlying assumptions in light of what the new empirics on pricing dynamics has found, and propose a different, and we believe, better microfoundation for New Keynesian macroeconomics based on the Shapley-Shubik market game.
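For readers who want to see the structure the authors are referring to, here is the textbook Dixit-Stiglitz setup that underlies most NK price-setting models (a standard sketch of my own, not taken from the Korpeoglu-Spear paper): households aggregate a continuum of varieties with a CES index, so each monopolistically competitive firm faces a constant-elasticity demand curve and, absent pricing frictions, charges a constant markup over marginal cost.

```latex
% Textbook Dixit-Stiglitz monopolistic competition (illustrative sketch, not from the paper).
% CES consumption aggregator over varieties i in [0,1], elasticity of substitution \varepsilon > 1:
C = \left( \int_0^1 c_i^{\frac{\varepsilon - 1}{\varepsilon}} \, di \right)^{\frac{\varepsilon}{\varepsilon - 1}}
% Expenditure minimization gives the demand curve each price-setter faces, and the price index:
c_i = \left( \frac{p_i}{P} \right)^{-\varepsilon} C ,
\qquad
P = \left( \int_0^1 p_i^{1-\varepsilon} \, di \right)^{\frac{1}{1-\varepsilon}}
% With flexible prices, profit maximization implies a constant markup over marginal cost:
p_i = \frac{\varepsilon}{\varepsilon - 1} \, MC_i
% Menu costs or staggered adjustment are what keep p_i from moving every period,
% which is the nominal rigidity discussed above.
```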

Monday, June 01, 2015

'Q&A: Richard Thaler'

Some snippets from a Justin Fox interview of Richard Thaler:

Question: What are the most valuable things that you got out of studying economics in graduate school?
Answer: The main thing that you learn in grad school, or should learn, is how to think like an economist. The rest is just math. You learn a bunch of models and you learn econometrics -- it’s tools. Some people who are so inclined concentrate on having sharper tools than anybody else, but my tools were kind of dull. So the main thing was just, “How does an economist think about a problem?” And then in my case it was, “That’s really how they think about a problem?” ...
Q: One thing that’s striking -- Kahneman calls it “theory-induced blindness” -- is this sense that if you have a really nice theory, even when someone throws up evidence that the theory doesn’t hold there’s this strong disposition to laugh at the evidence and not even present counterevidence.
A: Because they know you’re wrong. This has happened to me. It still happens. Not as often, but people have what economists would call strong priors. Even our football paper, which may be my favorite of all my papers, we had a hell of a time getting that published,... people saying, “That can’t be right, firms wouldn’t leave that much money on the table.” Economists can think that because none of them have ever worked in firms. Anybody who’s ever been in a large organization realizes that optimizing is not a word that would often be used to describe any large organization. The reason is that it’s full of people, who are complicated. ...
Q: The whole idea of nudging, generally it’s been very popular, but there’s a subgroup of people who react so allergically to it.
A:  I think most of the criticism at least here comes from the right. These are choice-preserving policies. Now you’d think they would like those. I have yet to read a criticism that really gets the point that we probably make on the fourth page of the book, which is that there’s no avoiding nudging. Like in a cafeteria: You have to arrange the food somehow. You can’t arrange it at random. That would be a chaotic cafeteria. ...

'The Case of the Missing Minsky'

Paul Krugman says I'm not upbeat enough about the state of macroeconomics:

The Case of the Missing Minsky: Gavyn Davies has a good summary of the recent IMF conference on rethinking macro; Mark Thoma has further thoughts. Thoma in particular is disappointed that there hasn’t been more of a change, decrying

the arrogance that asserts that we have little to learn about theory or policy from the economists who wrote during and after the Great Depression.

Maybe surprisingly, I’m a bit more upbeat than either. Of course there are economists, and whole departments, that have learned nothing, and remain wholly dominated by mathiness. But it seems to me that economists have done OK on two of the big three questions raised by the economic crisis. What are these three questions? I’m glad you asked. ...[continue]...

Sunday, May 31, 2015

'Has the Rethinking of Macroeconomic Policy Been Successful?'

The beginning of a long discussion from Gavyn Davies:

Has the rethinking of macroeconomic policy been successful?: The great financial crash of 2008 was expected to lead to a fundamental re-thinking of macro-economics, perhaps leading to a profound shift in the mainstream approach to fiscal, monetary and international policy. That is what happened after the 1929 crash and the Great Depression, though it was not until 1936 that the outline of the new orthodoxy appeared in the shape of Keynes’ General Theory. It was another decade or more before a simplified version of Keynes was routinely taught in American university economics classes. The wheels of intellectual change, though profound in retrospect, can grind fairly slowly.
Seven years after the 2008 crash, there is relatively little sign of a major transformation in the mainstream macro-economic theory that is used, for example, by most central banks. The “DSGE” (mainly New Keynesian) framework remains the basic workhorse, even though it singularly failed to predict the crash. Economists have been busy adding a more realistic financial sector to the structure of the model [1], but labour and product markets, the heart of the productive economy, remain largely untouched.
What about macro-economic policy? Here major changes have already been implemented, notably in banking regulation, macro-prudential policy and most importantly the use of the central bank balance sheet as an independent instrument of monetary policy. In these areas, policy-makers have acted well in advance of macro-economic researchers, who have been struggling to catch up. ...

There has been more progress on the theoretical front than I expected, particularly in adding financial sector frictions to the NK-DSGE framework and in overcoming the restrictions imposed by the representative agent model. At the same time, there has been less progress than I expected in developing alternatives to the standard models. As far as I can tell, a serious challenge to the standard model has not yet appeared. My biggest disappointment is how much resistance there has been to the idea that we need to even try to find alternative modeling structures that might do better than those in use now, and the arrogance that asserts that we have little to learn about theory or policy from the economists who wrote during and after the Great Depression.

Tuesday, May 19, 2015

'The Most Misleading Definition in Economics'

John Quiggin:

The most misleading definition in economics (draft excerpt from Economics in Two Lessons), by  John Quiggin: After a couple of preliminary posts, here goes with my first draft excerpt from my planned book on Economics in Two Lessons. They won’t be in any particular order, just tossed up for comment when I think I have something that might interest readers here. To remind you, the core idea of the book is that of discussing all of economic policy in terms of “opportunity cost”. My first snippet is about
Pareto optimality
The situation where there is no way to make some people better off without making anyone worse off is often referred to as “Pareto optimal” after the Italian economist and political theorist Vilfredo Pareto, who developed the underlying concept. “Pareto optimal” is arguably the most misleading term in economics (and there are plenty of contenders). ...

Describing a situation as “optimal” implies that it is the unique best outcome. As we shall see this is not the case. Pareto, and followers like Hazlitt, seek to claim unique social desirability for market outcomes by definition rather than demonstration. ...

If that were true, then only the market outcome associated with the existing distribution of property rights would be Pareto optimal. Hazlitt, like many subsequent free market advocates, implicitly assumes that this is the case. In reality, though, there are infinitely many possible allocations of property rights, and infinitely many allocations of goods and services that meet the definition of “Pareto optimality”. A highly egalitarian allocation can be Pareto optimal. So can any allocation where one person has all the wealth and everyone else is reduced to a bare subsistence. ...
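To make Quiggin's point concrete, here is a minimal illustration of my own (not from the draft excerpt): in a two-person economy dividing one unit of a good, every complete division is Pareto optimal, from the even split to the allocation that gives one person everything.

```python
# Illustrative sketch (not from Quiggin's text): in a two-person economy that
# divides one unit of a good, every complete division is Pareto optimal,
# including the most unequal ones.

def pareto_optimal(allocation, candidates):
    """An allocation is Pareto optimal if no candidate makes someone better
    off without making someone else worse off."""
    return not any(
        all(c[i] >= allocation[i] for i in range(len(allocation)))
        and any(c[i] > allocation[i] for i in range(len(allocation)))
        for c in candidates
    )

# All feasible splits of one unit between two people, on a grid.
grid = [round(0.01 * k, 2) for k in range(101)]
feasible = [(x, round(1 - x, 2)) for x in grid]

egalitarian = (0.5, 0.5)
extreme = (1.0, 0.0)

print(pareto_optimal(egalitarian, feasible))  # True
print(pareto_optimal(extreme, feasible))      # True
```

Both checks print True: neither allocation can be improved for one person without taking something from the other, which is all that "Pareto optimal" asserts.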

Sunday, May 17, 2015

'Blaming Keynes'

Simon Wren-Lewis:

Blaming Keynes: A few people have asked me to respond to this FT piece from Niall Ferguson. I was reluctant to, because it is really just a bit of triumphalist Tory tosh. That such things get published in the Financial Times is unfortunate but I’m afraid not surprising in this case. However I want to write later about something else that made reference to it, so saying a few things here first might be useful.
The most important point concerns style. This is not the kind of thing an academic should want to write. It makes no attempt to be true to evidence, and just cherry picks numbers to support its argument. I know a small number of academics think they can drop their normal standards when it comes to writing political propaganda, but I think they are wrong to do so. ...

'Ed Prescott is No Robert Solow, No Gary Becker'

Paul Romer continues his assault on "mathiness":

Ed Prescott is No Robert Solow, No Gary Becker: In his comment on my Mathiness paper, Noah Smith asks for more evidence that the theory in the McGrattan-Prescott paper that I cite is any worse than the theory I compare it to by Robert Solow and Gary Becker. I agree with Brad DeLong’s defense of the Solow model. I’ll elaborate, by using the familiar analogy that theory is to the world as a map is to terrain.

There is no such thing as the perfect map. This does not mean that the incoherent scribblings of McGrattan and Prescott are on a par with the coherent, low-resolution Solow map that is so simple that all economists have memorized it. Nor with the Becker map that has become part of the everyday mental model of people inside and outside of economics.

Noah also notes that I go into more detail about the problems in the Lucas and Moll (2014) paper. Just to be clear, this is not because it is worse than the papers by McGrattan and Prescott or Boldrin and Levine. Honestly, I’d be hard pressed to say which is the worst. They all display the sloppy mixture of words and symbols that I’m calling mathiness. Each is awful in its own special way.

What should worry economists is the pattern, not any one of these papers. And our response. Why do we seem resigned to tolerating papers like this? What cumulative harm are they doing?

The resignation is why I conjectured that we are stuck in a lemons equilibrium in the market for mathematical theory. Noah’s jaded question–Is the theory of McGrattan-Prescott really any worse than the theory of Solow and Becker?–may be indicative of what many economists feel after years of being bullied by bad theory. And as I note in the paper, this resignation may be why empirically minded economists like Piketty and Zucman stay as far away from theory as possible. ...

[He goes on to give more details using examples from the papers.]

Friday, May 15, 2015

'Mathiness in the Theory of Economic Growth'

Paul Romer:

My Paper “Mathiness in the Theory of Economic Growth”: I have a new paper in the Papers and Proceedings Volume of the AER that is out in print and on the AER website. A short version of the supporting appendix is available here. It should eventually be available on the AER website but has not been posted yet. A longer version with more details behind the calculations is available here.

The point of the paper is that if we want economics to be a science, we have to recognize that it is not ok for macroeconomists to hole up in separate camps, one that supports its version of the geocentric model of the solar system and another that supports the heliocentric model. As scientists, we have to hold ourselves to a standard that requires us to reach a consensus about which model is right, and then to move on to other questions.

The alternative to science is academic politics, where persistent disagreement is encouraged as a way to create distinctive sub-group identities.

The usual way to protect a scientific discussion from the factionalism of academic politics is to exclude people who opt out of the norms of science. The challenge lies in knowing how to identify them.

From my paper:

The style that I am calling mathiness lets academic politics masquerade as science. Like mathematical theory, mathiness uses a mixture of words and symbols, but instead of making tight links, it leaves ample room for slippage between statements in natural versus formal language and between statements with theoretical as opposed to empirical content.

Persistent disagreement is a sign that some of the participants in a discussion are not committed to the norms of science. Mathiness is a symptom of this deeper problem, but one that is particularly damaging because it can generate a broad backlash against the genuine mathematical theory that it mimics. If the participants in a discussion are committed to science, mathematical theory can encourage a unique clarity and precision in both reasoning and communication. It would be a serious setback for our discipline if economists lose their commitment to careful mathematical reasoning.

I focus on mathiness in growth models because growth is the field I know best, one that gave me a chance to observe closely the behavior I describe. ...

The goal in starting this discussion is to ensure that economics is a science that makes progress toward truth. ... Science is the most important human accomplishment. An investment in science can offer a higher social rate of return than any other investment a person can make. It would be tragic if economists did not stay current on the periodic maintenance needed to protect our shared norms of science from infection by the norms of politics.

[I cut quite a bit -- see the full post for more.]

Wednesday, May 06, 2015

'Richard Thaler Misbehaves–or, Rather, Behaves'

Brad DeLong:

Richard Thaler Misbehaves–or, Rather, Behaves: A good review by Jonathan Knee of the extremely-sharp Richard Thaler’s truly excellent new book, Misbehaving. The intellectual evolution of the Chicago School is very interesting indeed. Back in 1950 Milton Friedman would argue that economists should reason as if people were rational optimizers as long as such reasoning produced predictions about economic variables–prices and quantities–that fit the data. He left to one side the consideration that even if the prices and quantities were right, the assessments of societal well-being would be wrong.

By the time I entered the profession 30 years later, however, the Chicago School–but not Milton Friedman–had evolved so that it no longer cared whether its models actually fit the data or not. The canonical Chicago empirical paper seized the high ground of the null hypothesis for the efficient market thesis and then carefully restricted the range and type of evidence allowed into the room to achieve the goal of failing to reject the null at 0.05 confidence. The canonical Chicago theoretical paper became an explanation of why a population of rational optimizing agents could route around and neutralize the impact of any specified market failure.

Note that Friedman and to a lesser degree Stigler had little patience with these lines of reasoning. Friedman increasingly based his policy recommendations on the moral value of free choice to live one’s life as one thought best–thinking that for people to be told or even nudged what to do undermined that value–and on the inability of voters to have even a chance of curbing government failures arising out of bureaucracy, machine corruption, plutocratic corruption, and simply the poorly-informed do-gooder “there oughta be a law!” impulse. Stigler tended to focus on the incoherence and complexity of government policy in, for example, antitrust: arising out of a combination of scholastic autonomous legal doctrine development and of legislatures that at different times had sought to curb monopoly, empower small-scale entrepreneurs, protect large-scale intellectual and other property interests, and promote economies of scale. At the intellectual level, making the point that the result was incoherent and substantially self-neutralizing policy was easy–but it was not Stigler but rather later generations who were eager to jump to the unwarranted conclusion that we would be better off with the entire edifice razed to the ground.

As I say often, doing real economics is very very hard. You have to start with how people actually behave, with what the institutions are that curb or amplify their behavioral and calculation successes or failures at choosing rational actions, and with what emergent regularities we see in the aggregates. And I have been often struck by Chicago-School baron Robert Lucas’s declaration that we cannot hope to do real economics–that all we can do is grind out papers on how the economy would behave if institutions were transparent and all humans were rational optimizers, for both actual institutions and actual human psychology remain beyond our grasp:

Economics tries to… make predictions about the way… 280 million people are going to respond if you change something…. Kahnemann and Tversky… can’t even tell us anything interesting about how a couple that’s been married for ten years splits or makes decisions about what city to live in–let alone 250 million…. We’re not going to build up useful economics… starting from individuals…. Behavioral economics should be on the reading list…. But… as an alternative to what macroeconomics or public finance people are doing… it’s not going to come from behavioral economics… at least in my lifetime…

Yet it is not impossible to do real economics, and thus to be a good economist.

But it does mean that, as John Maynard Keynes wrote in his 1924 obituary for his teacher Alfred Marshall, while:

the study of economics does not seem to require any specialised gifts of an unusually high order…. Is it not… a very easy subject compared with the higher branches of philosophy and pure science? Yet good, or even competent, economists are the rarest of birds.

And Keynes continues:

An easy subject, at which very few excel! The paradox finds its explanation, perhaps, in that the master-economist must possess a rare combination of gifts… mathematician, historian, statesman, philosopher… understand symbols and speak in words… contemplate the particular in terms of the general… touch abstract and concrete in the same flight of thought… study the present in the light of the past for the purposes of the future. No part of man’s nature or his institutions must lie entirely outside his regard…. Much, but not all, of this ideal many-sidedness Marshall possessed…

John Maynard Keynes would see Richard Thaler as a very good economist indeed. ...

Sunday, April 26, 2015

On Microfoundations

Via Diane Coyle, a quote from Alfred Marshall’s Elements of the Economics of Industry:

...He wrote that earlier economists:
“Paid almost exclusive attention to the motives of individual action. But it must not be forgotten that economists, like all other students of social science, are concerned with individuals chiefly as members of the social organism. As a cathedral is something more than the stones of which it is built, as a person is more than a series of thoughts and feelings, so the life of society is something more than the sum of the lives of its individual members. It is true that the action of the whole is made up of that of its constituent parts; and that in most economic problems the best starting point is to be found in the motives that affect the individual… but it is also true that economics has a great and increasing concern in motives connected with the collective ownership of property and the collective pursuit of important aims.”

Tuesday, April 21, 2015

'Rethinking Macroeconomic Policy'

Olivier Blanchard at Vox EU:

Rethinking macroeconomic policy: Introduction, by Olivier Blanchard: On 15 and 16 April 2015, the IMF hosted the third conference on “Rethinking Macroeconomic Policy”. I had initially chosen as the title and subtitle “Rethinking Macroeconomic Policy III. Down in the trenches”. I thought of the first conference in 2011 as having identified the main failings of previous policies, the second conference in 2013 as having identified general directions, and this conference as a progress report.
My subtitle was rejected by one of the co-organisers, namely Larry Summers. He argued that I was far too optimistic, that we were nowhere close to knowing where we were going. Arguing with Larry is tough, so I chose an agnostic title, and shifted to “Rethinking Macro Policy III. Progress or confusion?”
Where do I think we are today? I think both Larry and I are right. I do not say this for diplomatic reasons. We are indeed proceeding in the trenches. But where the trenches are eventually going remains unclear. This is the theme I shall develop in my remarks, focusing on macroprudential tools, monetary policy, and fiscal policy.

Continue reading "'Rethinking Macroeconomic Policy'" »

Monday, April 13, 2015

In Defense of Modern Macroeconomic Theory

A small part of a much longer post from David Andolfatto (followed by some comments of my own):

In defense of modern macro theory: The 2008 financial crisis was a traumatic event. Like all social trauma, it invoked a variety of emotional responses, including the natural (if unbecoming) human desire to find someone or something to blame. Some of the blame has been directed at segments of the economic profession. It is the nature of some of these criticisms that I'd like to talk about today. ...
The dynamic general equilibrium (DGE) approach is the dominant methodology in macro today. I think this is so because of its power to organize thinking in a logically consistent manner, its ability to generate reasonable conditional forecasts, as well as its great flexibility--a property that permits economists of all political persuasions to make use of the apparatus. ...

The point I want to make here is not that the DGE approach is the only way to go. I am not saying this at all. In fact, I personally believe in the coexistence of many different methodologies. The science of economics is not settled, after all. The point I am trying to make is that the DGE approach is not insensible (despite the claims of many critics who, I think, are sometimes driven by non-scientific concerns). ...

Once again (lest I be misunderstood, which I'm afraid seems unavoidable these days) I am not claiming that DGE is the be-all and end-all of macroeconomic theory. There is still a lot we do not know and I think it would be a good thing to draw on the insights offered by alternative approaches. I do not, however, buy into the accusation that there is "too much math" in modern theory. Math is just a language. Most people do not understand this language and so they have a natural distrust of arguments written in it. ... Before criticizing, either learn the language or appeal to reliable translations...

As for the teaching of macroeconomics, if the crisis has led more professors to pay more attention to financial market frictions, then this is a welcome development. I also fall in the camp that stresses the desirability of teaching more economic history and placing greater emphasis on matching theory with data. ... Thus, one could reasonably expect a curriculum to be modified to include more history, history of thought, heterodox approaches, etc. But this is a far cry from calling for the abandonment of DGE theory. Do not blame the tools for how they were (or were not) used.

I've said a lot of what David says about modern macroeconomic models at one time or another in the past, for example, that it's not the tools of macroeconomics, it's how they are used. But I do think he leaves out one important factor, the need to ask the right question (and why we didn't prior to the crisis). This is from August, 2009:

In The Economist, Robert Lucas responds to recent criticism of macroeconomics ("In Defense of the Dismal Science"). Here's my entry at Free Exchange in response to his essay:

Lucas roundtable: Ask the right questions, by Mark Thoma: In his essay, Robert Lucas defends macroeconomics against the charge that it is "valueless, even harmful", and that the tools economists use are "spectacularly useless".

I agree that the analytical tools economists use are not the problem. We cannot fully understand how the economy works without employing models of some sort, and we cannot build coherent models without using analytic tools such as mathematics. Some of these tools are very complex, but there is nothing wrong with sophistication so long as sophistication itself does not become the main goal, and sophistication is not used as a barrier to entry into the theorist's club rather than an analytical device to understand the world.

But all the tools in the world are useless if we lack the imagination needed to build the right models. We ... have to ask the right questions before we can build the right models.

The problem wasn't the tools that macroeconomists use, it was the questions that we asked. The major debates in macroeconomics had nothing to do with the possibility of bubbles causing a financial system meltdown. That's not to say that there weren't models here and there that touched upon these questions, but the main focus of macroeconomic research was elsewhere. ...

The interesting question to me, then, is why we failed to ask the right questions. ...

Why did we, for the most part, fail to ask the right questions? Was it lack of imagination, was it the sociology within the profession, the concentration of power over what research gets highlighted, the inadequacy of the tools we brought to the problem, the fact that nobody will ever be able to predict these types of events, or something else?

It wasn't the tools, and it wasn't lack of imagination. As Brad DeLong points out, the voices were there—he points to Michael Mussa for one—but those voices were not heard. Nobody listened even though some people did see it coming. So I am more inclined to cite the sociology within the profession or the concentration of power as the main factors that caused us to dismiss these voices. ...

I don't know for sure the extent to which the ability of a small number of people in the field to control the academic discourse led to a concentration of power that stood in the way of alternative lines of investigation, or the extent to which the ideology that market prices always tend to move toward their long-run equilibrium values caused us to ignore voices that foresaw the developing bubble and coming crisis. But something caused most of us to ask the wrong questions, and to dismiss the people who got it right, and I think one of our first orders of business is to understand how and why that happened.

Here's an interesting quote from Thomas Sargent along the same lines:

The criticism of real business cycle models and their close cousins, the so-called New Keynesian models, is misdirected and reflects a misunderstanding of the purpose for which those models were devised. These models were designed to describe aggregate economic fluctuations during normal times when markets can bring borrowers and lenders together in orderly ways, not during financial crises and market breakdowns.

Which to me is another way of saying we didn't foresee the need to ask questions (and build models) that would be useful in a financial crisis -- we were focused on models that would explain "normal times" (which is connected to the fact that we thought the Great Moderation would continue due to arrogance on the part of economists leading to the belief that modern policy tools, particularly from the Fed, would prevent major meltdowns, financial or otherwise). That work is happening now, so we'll be much more prepared if history repeats itself, but I have to wonder what other questions we should be asking, but aren't.

Let me add one more thing (a few excerpts from a post in 2010) about the sociology within economics:

I want to follow up on the post highlighting attempts to attack the messengers -- attempts to discredit Brad DeLong and Paul Krugman on macroeconomic policy in particular -- rather than engage academically with the message they are delivering (Krugman's response). ...
One of the objections often raised is that Krugman and DeLong are not, strictly speaking, macroeconomists. But if Krugman, DeLong, and others are expressing the theoretical and empirical results concerning macroeconomic policy accurately, does it really matter if we can strictly classify them as macroeconomists? Why is that important except as an attempt to discredit the message they are delivering? ... Attacking people rather than discussing ideas avoids even engaging on the issues. And when it comes to the ideas -- here I am talking most about fiscal policy -- as I've already noted in the previous post, the types of policies Krugman, DeLong, and others have advocated (and I should include myself as well) can be fully supported using modern macroeconomic models. ...
So, in answer to those who objected to my defending modern macro, you are partly right. I do think the tools and techniques macroeconomists use have value, and that the standard macro model in use today represents progress. But I also think the standard macro model used for policy analysis, the New Keynesian model, is unsatisfactory in many ways and I'm not sure it can be fixed. Maybe it can, but that's not at all clear to me. In any case, in my opinion the people who have strong, knee-jerk reactions whenever someone challenges the standard model in use today are the ones standing in the way of progress. It's fine to respond academically, a contest between the old and the new is exactly what we need to have, but the debate needs to be over ideas rather than an attack on the people issuing the challenges.

Tuesday, April 07, 2015

In Search of Better Macroeconomic Models

I have a new column:

In Search of Better Macroeconomic Models: Modern macroeconomic models did not perform well during the Great Recession. What needs to be done to fix them? Can the existing models be patched up, or are brand new models needed? ...

It's mostly about the recent debate on whether we need microfoundations in macroeconomics.

Saturday, April 04, 2015

'Do not Underestimate the Power of Microfoundations'

Simon Wren-Lewis takes a shot at answering Brad DeLong's question about microfoundations:

Do not underestimate the power of microfoundations: Brad DeLong asks why the New Keynesian (NK) model, which was originally put forth as simply a means of demonstrating how sticky prices within an RBC framework could produce Keynesian effects, has managed to become the workhorse of modern macro, despite its many empirical deficiencies. ... Brad says his question is closely related to the “question of why models that are microfounded in ways we know to be wrong are preferable in the discourse to models that try to get the aggregate emergent properties right.”...
Why are microfounded models so dominant? From my perspective this is a methodological question, about the relative importance of ‘internal’ (theoretical) versus ‘external’ (empirical) consistency. ...
 I would argue that the New Classical (counter) revolution was essentially a methodological revolution. However..., it will be a struggle to get macroeconomists below a certain age to admit this is a methodological issue. Instead they view microfoundations as just putting right inadequacies with what went before.
So, for example, you will be told that internal consistency is clearly an essential feature of any model, even if it is achieved by abandoning external consistency. ... In essence, many macroeconomists today are blind to the fact that adopting microfoundations is a methodological choice, rather than simply a means of correcting the errors of the past.
I think this has two implications for those who want to question the microfoundations hegemony. The first is that the discussion needs to be about methodology, rather than individual models. Deficiencies with particular microfounded models, like the NK model, are generally well understood, and from a microfoundations point of view simply provide an agenda for more research. Second, lack of familiarity with methodology means that this discussion cannot presume knowledge that is not there. ... That makes discussion difficult, but I’m not sure it makes it impossible.

Wednesday, March 25, 2015

'Anti-Keynesian Delusions'

Paul Krugman continues the discussion on the use of the Keynesian model:

Anti-Keynesian Delusions: I forgot to congratulate Mark Thoma on his tenth blogoversary, so let me do that now. ...
Today Mark includes a link to one of his own columns, a characteristically polite and cool-headed response to the latest salvo from David K. Levine. Brad DeLong has also weighed in, less politely.
I’d like to weigh in with a more general piece of impoliteness, and note a strong empirical regularity in this whole area. Namely, whenever someone steps up to declare that Keynesian economics is logically and empirically flawed, has been proved wrong and refuted, you know what comes next: a series of logical and empirical howlers — crude errors of reasoning, assertions of fact that can be checked and rejected in a minute or two.
Levine doesn’t disappoint. ...

He goes on to explain in detail.

Update: Brad DeLong also comments.

Tuesday, March 24, 2015

'Macro Wars: The Attack of the Anti-Keynesians'

I have a new column:

Macro Wars: The Attack of the Anti-Keynesians, by Mark Thoma: The ongoing war between the Keynesians and the anti-Keynesians appears to be heating up again. The catalyst for this round of fighting is The Keynesian Illusion by David K. Levine, which elicited responses such as this and this from Brad DeLong and Nick Rowe.
The debate is about the source of economic fluctuations and the government’s ability to counteract them with monetary and fiscal policy. One of the issues is the use of “old fashioned” Keynesian models – models that have supposedly been rejected by macroeconomists in favor of modern macroeconomic models – to explain and understand the Great Recession and to make monetary and fiscal policy recommendations. As Levine says, “Robert Lucas, Edward Prescott, and Thomas Sargent … rejected Keynesianism because it doesn't work… As it happens we have developed much better theories…”
I believe the use of “old-fashioned” Keynesian models to analyze the Great Recession can be defended. ...

Wednesday, March 18, 2015

'Is the Walrasian Auctioneer Microfounded?'

Simon Wren-Lewis (he says this is "For macroeconomists"):

Is the Walrasian Auctioneer microfounded?: I found this broadside against Keynesian economics by David K. Levine interesting. It is clear at the end that he is a child of the New Classical revolution. Before this revolution he was far from ignorant of Keynesian ideas. He adds: “Knowledge of Keynesianism and Keynesian models is even deeper for the great Nobel Prize winners who pioneered modern macroeconomics - a macroeconomics with people who buy and sell things, who save and invest - Robert Lucas, Edward Prescott, and Thomas Sargent among others. They also grew up with Keynesian theory as orthodoxy - more so than I. And we rejected Keynesianism because it doesn't work not because of some aesthetic sense that the theory is insufficiently elegant.”
The idea is familiar: New Classical economists do things properly, by founding their analysis in the microeconomics of individual production, savings and investment decisions. [2] It is no surprise therefore that many of today’s exponents of this tradition view their endeavour as a natural extension of the Walrasian General Equilibrium approach associated with Arrow, Debreu and McKenzie. But there is one agent in that tradition that is as far from microfoundations as you can get: the Walrasian auctioneer. It is this auctioneer, and not people, who typically sets prices. ...
Now your basic New Keynesian model contains a huge number of things that remain unrealistic or are just absent. However I have always found it extraordinary that some New Classical economists declare such models as lacking firm microfoundations, when these models at least try to make up for one area where RBC models lack any microfoundations at all, which is price setting. A clear case of the pot calling the kettle black! I have never understood why New Keynesians can be so defensive about their modelling of price setting. Their response every time should be ‘well at least it’s better than assuming an intertemporal auctioneer’.[1] ...
As to the last sentence in the quote from Levine above, I have talked before about the assertion that Keynesian economics did not work, and the implication that RBC models work better. He does not talk about central banks, or monetary policy. If he had, he would have to explain why most of the people working for them seem to believe that New Keynesian type models are helpful in their job of managing the economy. Perhaps these things are not mentioned because it is so much easier to stay living in the 1980s, in those glorious days (for some) when it appeared as if Keynesian economics had been defeated for good.

Saturday, March 14, 2015

'John and Maynard’s Excellent Adventure'

Paul Krugman defends IS-LM analysis (I'd make one qualification. Models are built to answer specific questions; we do not have one grand unifying model to use for all questions. IS-LM models were built to answer exactly the kinds of questions we encountered during the Great Recession, and the IS-LM model provided good answers (especially if one remembers where the model encounters difficulties). DSGE models were built to address other issues, and it's not surprising they didn't do very well when they were pushed to address questions they weren't designed to answer. The best model to use depends upon the question one is asking):

John and Maynard’s Excellent Adventure: When I tell people that macroeconomic analysis has been triumphantly successful in recent years, I tend to get strange looks. After all, wasn’t everyone predicting lots of inflation? Didn’t policymakers get it all wrong? Haven’t the academic economists been squabbling nonstop?
Well, as a card-carrying economist I disavow any responsibility for Rick Santelli and Larry Kudlow; I similarly declare that Paul Ryan and Olli Rehn aren’t my fault. As for the economists’ disputes, well, let me get to that in a bit.
I stand by my claim, however. The basic macroeconomic framework that we all should have turned to, the framework that is still there in most textbooks, performed spectacularly well: it made strong predictions that people who didn’t know that framework found completely implausible, and those predictions were vindicated. And the framework in question – basically John Hicks’s interpretation of John Maynard Keynes – was very much the natural way to think about the issues facing advanced countries after 2008. ...
I call this a huge success story – one of the best examples in the history of economics of getting things right in an unprecedented environment.
The sad thing, of course, is that this incredibly successful analysis didn’t have much favorable impact on actual policy. Mainly that’s because the Very Serious People are too serious to play around with little models; they prefer to rely on their sense of what markets demand, which they continue to consider infallible despite having been wrong about everything. But it also didn’t help that so many economists also rejected what should have been obvious.
Why? Many never learned simple macro models – if it doesn’t involve microfoundations and rational expectations, preferably with difficult math, it must be nonsense. (Curiously, economists in that camp have also proved extremely prone to basic errors of logic, probably because they have never learned to work through simple stories.) Others, for what looks like political reasons, seemed determined to come up with some reason, any reason, to be against expansionary monetary and fiscal policy.
But that’s their problem. From where I sit, the past six years have been hugely reassuring from an intellectual point of view. The basic model works; we really do know what we’re talking about.

[The original is quite a bit longer.]
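For readers who have not seen it since their intermediate macro course, the framework Krugman is pointing to can be written in two equations (a generic textbook sketch, not Krugman's own notation):

```latex
% Generic textbook IS-LM (illustrative sketch; notation is not Krugman's).
% IS: goods-market equilibrium, output Y given the interest rate r, taxes T, and spending G:
Y = C(Y - T) + I(r) + G
% LM: money-market equilibrium, real balances equal to money demand:
\frac{M}{P} = L(r, Y)
% At the zero lower bound the model implies that expanding M gains little traction,
% while fiscal expansion shifts IS and raises Y without pushing up r. Those are the
% sort of predictions about interest rates and inflation that the post argues were
% vindicated after 2008.
```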

Thursday, March 05, 2015

'Economists' Biggest Failure'

Noah Smith:

Economists' Biggest Failure: One of the biggest things that economists get grief about is their failure to predict big events like recessions. ... 
Pointing this out usually leads to the eternal (and eternally fun) debate over whether economics is a real science. The profession's detractors say that if you don’t make successful predictions, you aren’t a science. Economists will respond that seismologists can’t forecast earthquakes, and meteorologists can’t forecast hurricanes, and who cares what’s really a “science” anyway. 
The debate, however, misses the point. Forecasts aren’t the only kind of predictions a science can make. In fact, they’re not even the most important kind. 
Take physics for example. Sometimes physicists do make forecasts -- for example, eclipses. But those are the exception. Usually, when you make a new physics theory, you use it to predict some new phenomenon... For example, quantum mechanics has gained a lot of support from predicting strange new things like quantum tunneling or quantum teleportation.
Other times, a theory will predict things we have seen before, but will describe them in terms of other things that we thought were totally separate, unrelated phenomena. This is called unification, and it’s a key part of what philosophers think science does. For example, the theory of electromagnetism says that light, electric current, magnetism, and radio waves are all really the same phenomenon. Pretty neat! ...
So that’s physics. What about economics? Actually, econ has a number of these successes too. When Dan McFadden used his Random Utility Model to predict how many people would ride San Francisco's Bay Area Rapid Transit system,... he got it right. And he got many other things right with the same theory -- it wasn’t developed to explain only train ridership. 
Unfortunately, though, this kind of success isn't very highly regarded in the economics world... Maybe now, with the ascendance of empirical economics and a decline in theory, we’ll see a focus on producing fewer but better theories, more unification, and more attempts to make novel predictions. Someday, maybe macroeconomists will even be able to make forecasts! But let’s not get our hopes up.
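The random utility model Smith mentions is easy to sketch. Here is a toy multinomial logit in that spirit (illustrative numbers of my own; this is not McFadden's BART study or its data): each traveler chooses the alternative with the highest utility, utility has a random component, and with extreme-value errors the predicted mode shares take the familiar logit form.

```python
# Toy multinomial logit in the spirit of McFadden's random utility model
# (illustrative numbers only -- not the actual BART study or its data).
import numpy as np

# Hypothetical systematic utilities for three commute options,
# built from made-up time (minutes) and cost (dollars) coefficients.
beta_time, beta_cost = -0.05, -0.2
options = {
    "car":   {"time": 40, "cost": 6.0},
    "bus":   {"time": 55, "cost": 2.5},
    "train": {"time": 45, "cost": 3.0},  # the new alternative being predicted
}
v = np.array([beta_time * o["time"] + beta_cost * o["cost"] for o in options.values()])

# With i.i.d. extreme-value errors, choice probabilities are the softmax of the utilities:
shares = np.exp(v) / np.exp(v).sum()
for name, s in zip(options, shares):
    print(f"{name}: {s:.2f}")
```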

I've addressed this question many times, e.g. in 2009, and to me the distinction is between forecasting the future, and understanding why certain phenomena occur (re-reading, it's a bit repetitive):

Are Macroeconomic Models Useful?: There has been no shortage of effort devoted to predicting earthquakes, yet we still can't see them coming far enough in advance to move people to safety. When a big earthquake hits, it is a surprise. We may be able to look at the data after the fact and see that certain stresses were building, so it looks like we should have known an earthquake was going to occur at any moment, but these sorts of retrospective analyses have not allowed us to predict the next one. The exact timing and location is always a surprise.
Does that mean that science has failed? Should we criticize the models as useless?
No. There are two uses of models. One is to understand how the world works, another is to make predictions about the future. We may never be able to predict earthquakes far enough in advance and with enough specificity to allow us time to move to safety before they occur, but that doesn't prevent us from understanding the science underlying earthquakes. Perhaps as our understanding increases prediction will be possible, and for that reason scientists shouldn't give up trying to improve their models, but for now we simply cannot predict the arrival of earthquakes.
However, even though earthquakes cannot be predicted, at least not yet, it would be wrong to conclude that science has nothing to offer. First, understanding how earthquakes occur can help us design buildings and make other changes to limit the damage even if we don't know exactly when an earthquake will occur. Second, if an earthquake happens and, despite our best efforts to insulate against it, there are still substantial consequences, science can help us to offset and limit the damage. To name just one example, the science surrounding disease transmission helps us avoid contaminated water supplies after a disaster, something that often compounds tragedy when this science is not available. But there are lots of other things we can do as well, including using the models to determine where help is most needed.
So even if we cannot predict earthquakes, and we can't, the models are still useful for understanding how earthquakes happen. This understanding is valuable because it helps us to prepare for disasters in advance, and to determine policies that will minimize their impact after they happen.
All of this can be applied to macroeconomics. Whether or not we should have predicted the financial earthquake is a question that has been debated extensively, so I am going to set that aside. One side says financial market price changes, like earthquakes, are inherently unpredictable -- we will never predict them no matter how good our models get (the efficient markets types). The other side says the stresses that were building were obvious. Like the stresses that build when tectonic plates moving in opposite directions rub against each other, it was only a question of when, not if. (But even when increasing stress between two plates is observable, scientists cannot tell you for sure if a series of small earthquakes will relieve the stress and do little harm, or if there will be one big adjustment that relieves the stress all at once. With respect to the financial crisis, economists expected lots of small, harm-causing adjustments; instead we got the "big one," and the "buildings and other structures" we thought could withstand the shock all came crumbling down. On prediction in economics, perhaps someday improved models will allow us to do better than we have so far at predicting the exact timing of crises, and I think that earthquakes provide some guidance here. You have to ask first if stress is building in a particular sector, and then ask if action needs to be taken because the stress has reached dangerous levels, levels that might result in a big crash rather than a series of small, stress-relieving adjustments. I don't think our models are very good at detecting accumulating stress...)
Whether the financial crisis should have been predicted or not, the fact that it wasn't predicted does not mean that macroeconomic models are useless any more than the failure to predict earthquakes implies that earthquake science is useless. As with earthquakes, even when prediction is not possible (or missed), the models can still help us to understand how these shocks occur. That understanding is useful for getting ready for the next shock, or even preventing it, and for minimizing the consequences of shocks that do occur. 
But we have done much better at dealing with the consequences of unexpected shocks ex-post than we have at getting ready for these a priori. Our equivalent of getting buildings ready for an earthquake before it happens is to use changes in institutions and regulations to insulate the financial sector and the larger economy from the negative consequences of financial and other shocks. Here I think economists made mistakes - our "buildings" were not strong enough to withstand the earthquake that hit. We could argue that the shock was so big that no amount of reasonable advance preparation would have stopped the "building" from collapsing, but I think it's more the case that enough time has passed since the last big financial earthquake that we forgot what we needed to do. We allowed new buildings to be constructed without the proper safeguards.
However, that doesn't mean the models themselves were useless. The models were there and could have provided guidance, but the implied "building codes" were ignored. Greenspan and others assumed no private builder would ever construct a building that couldn't withstand an earthquake; the market would force them to take this into consideration. But they were wrong about that, and even Greenspan now admits that government building codes are necessary. It wasn't the models, it was how they were used (or rather not used) that prevented us from putting safeguards into place.
We haven't failed at this entirely though. For example, we have had some success at putting safeguards into place before shocks occur: automatic stabilizers have done a lot to insulate against the negative consequences of the recession (though they could have been larger to stop the building from swaying as much as it has). So it's not proper to say that our models have not helped us to prepare in advance at all; the insulation social insurance programs provide is extremely important to recognize. But it is the case that we could have and should have done better at preparing before the shock hit.
I'd argue that our most successful use of models has been in cleaning up after shocks rather than predicting, preventing, or insulating against them through pre-crisis preparation. When despite our best effort to prevent it or to minimize its impact a priori, we get a recession anyway, we can use our models as a guide to monetary, fiscal, and other policies that help to reduce the consequences of the shock (this is the equivalent of, after a disaster hits, making sure that the water is safe to drink, people have food to eat, there is a plan for rebuilding quickly and efficiently, etc.). As noted above, we haven't done a very good job at predicting big crises, and we could have done a much better job at implementing regulatory and institutional changes that prevent or limit the impact of shocks. But we do a pretty good job of stepping in with policy actions that minimize the impact of shocks after they occur. This recession was bad, but it wasn't another Great Depression like it might have been without policy intervention.
Whether or not we will ever be able to predict recessions reliably, it's important to recognize that our models still provide considerable guidance for actions we can take before and after large shocks that minimize their impact and maybe even prevent them altogether (though we will have to do a better job of listening to what the models have to say). Prediction is important, but it's not the only use of models.

Wednesday, December 17, 2014

'Minimal Model Explanations'

Some of you might find this interesting:

“Minimal Model Explanations,” R.W. Batterman & C.C. Rice (2014), A Fine Theorem: I unfortunately was overseas and wasn’t able to attend the recent Stanford conference on Causality in the Social Sciences; a friend organized the event and was able to put together a really incredible set of speakers: Nancy Cartwright, Chuck Manski, Joshua Angrist, Garth Saloner and many others. Coincidentally, a recent issue of the journal Philosophy of Science had an interesting article quite relevant to economists interested in methodology: how is it that we learn anything about the world when we use a model that is based on false assumptions? ...

Sunday, December 14, 2014

Real Business Cycle Theory

Roger Farmer:

Real business cycle theory and the high school Olympics: I have lost count of the number of times I have heard students and faculty repeat the idea in seminars, that “all models are wrong”. This aphorism, attributed to George Box, is the battle cry of the Minnesota calibrator, a breed of macroeconomist, inspired by Ed Prescott, one of the most important and influential economists of the last century.
Of course all models are wrong. That is trivially true: it is the definition of a model. But the cry has been used for three decades to poke fun at attempts to use serious econometric methods to analyze time series data. Time series methods were inconvenient to the nascent Real Business Cycle Program that Ed pioneered because the models that he favored were, and still are, overwhelmingly rejected by the facts. That is inconvenient. Ed’s response was pure genius. If the model and the data are in conflict, the data must be wrong. ...

After explaining, he concludes:

We don't have to play by Ed's rules. We can use the methods developed by Rob Engle and Clive Granger as I have done here. Once we allow aggregate demand to influence permanently the unemployment rate, the data do not look kindly on either real business cycle models or on the new-Keynesian approach. It's time to get serious about macroeconomic science...
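For readers who want to see what the Engle-Granger methods Farmer mentions look like in practice, here is a minimal sketch of a cointegration test on simulated data (my own illustration; Farmer's post uses his own series and estimates). The test asks whether two trending series share a common stochastic trend, the kind of long-run relationship he argues the data should be allowed to speak to.

```python
# Minimal Engle-Granger cointegration test on simulated data
# (illustrative only; not Farmer's data or code).
import numpy as np
from statsmodels.tsa.stattools import coint

rng = np.random.default_rng(0)
common_trend = np.cumsum(rng.normal(size=400))            # shared stochastic trend
x = common_trend + rng.normal(scale=0.5, size=400)        # e.g., a demand-side series
y = 0.8 * common_trend + rng.normal(scale=0.5, size=400)  # e.g., an unemployment-type series

t_stat, p_value, _ = coint(y, x)   # Engle-Granger two-step test: null is "no cointegration"
print(f"t-statistic = {t_stat:.2f}, p-value = {p_value:.3f}")  # a small p-value rejects the null
```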

Thursday, November 27, 2014

MarkSpeaks

Simon Wren-Lewis:

As Mark Thoma often says, the problem is with macroeconomists rather than macroeconomics.

Much, much more here.

Tuesday, November 18, 2014

How Piketty Has Changed Economics

I have a new column:

How Piketty Has Changed Economics: Thomas Piketty’s Capital in the Twenty-First Century is beginning to receive book of the year awards, but has it changed anything within economics? There are two ways in which it has...

I'm not sure everyone will agree that the changes will persist. [This is a long-run view that begins with Adam Smith and looks for similarities between the past and today.]

Update: Let me add that although many people believe that the most important questions in the future will be about production (as they were in Smith's time), secular stagnation, robots, etc., I believe we will have enough "stuff"; the big questions will be about distribution (as they were when Ricardo, Marx, etc. were writing).

Thursday, October 23, 2014

'How Mainstream Economic Thinking Imperils America'

Tuesday, October 21, 2014

'Why our Happiness and Satisfaction Should Replace GDP in Policy Making'

Richard Easterlin:

Why our happiness and satisfaction should replace GDP in policy making, The Conversation: Since 1990, GDP per person in China has doubled and then redoubled. With average incomes multiplying fourfold in little more than two decades, one might expect many of the Chinese people to be dancing in the streets. Yet, when asked about their satisfaction with life, they are, if anything, less satisfied than in 1990.
The disparity indicated by these two measures of human progress, Gross Domestic Product and Subjective Well Being (SWB), makes pretty plain the issue at hand. GDP, the well-being indicator commonly used in policy circles, signals an outstanding advance in China. SWB, as indicated by self-reports of overall satisfaction with life, suggests, if anything, a worsening of people’s lives. Which measure is a more meaningful index of well-being? Which is a better guide for public policy?
A few decades ago, economists – the most influential social scientists shaping public policy – would have said that the SWB result for China demonstrates the meaninglessness of self-reports of well-being. Economic historian Deirdre McCloskey, writing in 1983, aptly put the typical attitude of economists this way:
Unlike other social scientists, economists are extremely hostile towards questionnaires and other self-descriptions… One can literally get an audience of economists to laugh out loud by proposing ironically to send out a questionnaire on some disputed economic point. Economists… are unthinkingly committed to the notion that only the externally observable behaviour of actors is admissible evidence in arguments concerning economics.
Culture clash
But times have changed. A commission established by the then French president, Nicolas Sarkozy, in 2008 and charged with recommending alternatives to GDP as a measure of progress, stated bluntly (my emphasis):
Research has shown that it is possible to collect meaningful and reliable data on subjective as well as objective well-being … The types of questions that have proved their value within small-scale and unofficial surveys should be included in larger-scale surveys undertaken by official statistical offices.
This 25-member commission was composed almost entirely of economists, five of whom had won the Nobel Prize in economics. Two of the five co-chaired the commission.
These days the tendency with new measures of our well-being – such as life satisfaction and happiness – is to propose that they be used as a complement to GDP. But what is one to do when confronted with such a stark difference between SWB and GDP, as in China? What should one say? People in China are better off than ever before, people are no better off than before, or “it depends”?
Commonalities
To decide this issue, we need to delve deeper into what has happened in China. When we do that, the superiority of SWB becomes apparent: it can capture the multiple dimensions of people’s lives. GDP, in contrast, focuses exclusively on the output of material goods.
People everywhere in the world spend most of their time trying to earn a living and raise a healthy family. The easier it is for them to do this, the happier they are. This is the lesson of a 1965 classic, The Pattern of Human Concerns, by public opinion survey specialist Hadley Cantril. In the 12 countries – rich and poor, communist and non-communist – that Cantril surveyed, the same highly personal concerns dominated the determinants of happiness: standard of living, family, health and work. Broad social issues such as inequality, discrimination and international relations were rarely mentioned.
Urban China in 1990 was essentially a mini-welfare state. Workers had what has been called an “iron rice bowl” – they were assured of jobs, housing, medical services, pensions, childcare and jobs for their grown children.
With the coming of capitalism, and “restructuring” of state enterprises, the iron rice bowl was smashed and these assurances went out the window. Unemployment soared and the social safety net disappeared. The security that workers had enjoyed was gone and the result was that life satisfaction plummeted, especially among the less-educated, lower-income segments of the population.
Although working conditions have improved somewhat in the past decade, the shortfall from the security enjoyed in 1990 remains substantial. The positive effect on well-being of rising incomes has been negated by rapidly rising material aspirations and the emergence of urgent concerns about income and job security, family, and health.
The case to replace
Examples of the disparity between SWB and GDP as measures of well-being could easily be multiplied. Since the early 1970s real GDP per capita in the US has doubled, but SWB has, if anything, declined. In international comparisons, Costa Rica’s per capita GDP is a quarter of that in the US, but Costa Ricans are as happy or happier than Americans when we look at SWB data. Clearly there is more to people’s well-being than the output of goods.
There are some simple, yet powerful arguments to say that we should use SWB in preference to GDP, not just as a complement. For a start, those SWB measures like happiness or life satisfaction are more comprehensive than GDP. They take into account the effect on well-being not just of material living conditions, but of the wide range of concerns in our lives.
It is also key that with SWB, the judgement of well-being is made by the individuals affected. GDP’s reliance on outside statistical “experts” to make inferences based on a measure they themselves construct looks deeply flawed when viewed in comparison. These judgements by outsiders also lie behind the growing number of multiple-item measures being put forth these days. An example is the United Nations’ Human Development Index (HDI) which attempts to combine data on GDP with indexes of education and life expectancy.
But people do not identify with measures like HDI (or GDP, of course) to anywhere near the extent that they do with straightforward questions of happiness and satisfaction with life. And crucially, these SWB measures offer each adult a vote and only one vote, whether they are rich or poor, sick or well, native or foreign-born. This is not to say that, as measures of well-being go, SWB is the last word, but clearly it comes closer to capturing what is actually happening to people’s lives than GDP ever will. The question is whether policy makers actually want to know.

Thursday, October 16, 2014

'Regret and Economic Decision-Making'

Here are the conclusions to Regret and economic decision-making:

Conclusions We are clearly a long way from fully understanding how people behave in dynamic contexts. But our experimental data and that of earlier studies (Lohrenz 2007) suggest that regret is a part of the decision process and should not be overlooked. From a theoretical perspective, our work shows that regret aversion and counterfactual thinking make subtle predictions about behaviour in settings where past events serve as benchmarks. They are most vividly illustrated in the investment context.
Our theoretical findings show that if regret is anticipated, investors may keep their hands off risky investments, such as stocks, and not enter the market in the first place. Thus, anticipated regret aversion acts like a surrogate for higher risk aversion.
In contrast, once people have invested, they become very attached to their investment. Moreover, the better past performance was, the higher their commitment, because losses loom larger. This leads the investor to ‘gamble for resurrection’. In our experimental data, we very often observe exactly this pattern.
This dichotomy between ex ante and ex post risk appetites can be harmful for investors. It leads investors and businesses to escalate their commitment because of the sunk costs in their investments. For example, many investors missed out on the 2009 stock market rally while buckling down in the crash in 2007/2008, reluctant to sell early. Similarly, people who quit their jobs and invested their savings into their own business often cannot, with a cold, clear eye, cut their losses and admit their business has failed.
Therefore, a better understanding of what motivates people to save and invest could enable us to help them avoid such mistakes, e.g. through educating people to set up clear budgets a priori or to impose a drop-dead level for their losses. Such simple measures may help mitigate the effects of harmful emotional attachment and support individuals in making better decisions.

[This ("once people have invested, they become very attached to their investment" and cannot admit failure) includes investment in economic models and research (see previous post).]

Tuesday, October 14, 2014

Economics is Both Positive and Normative

Jean Tirole in the latest TSE Magazine:

Economics is a positive discipline as it aims to document and analyse individual and collective behaviours. It is also, and more importantly, a normative discipline as its main goal is to better the world through economic policies and recommendations.

Friday, September 26, 2014

'The New Classical Clique'

Paul Krugman continues the conversation on New Classical economics:

The New Classical Clique: Simon Wren-Lewis thinks some more about macroeconomics gone astray; Robert J. Waldmann weighs in. For those new to this conversation, the question is why starting in the 1970s much of academic macroeconomics was taken over by a school of thought that began by denying any useful role for policies to raise demand in a slump, and eventually coalesced around denial that the demand side of the economy has any role in causing slumps.
I was a grad student and then an assistant professor as this was happening, albeit doing international economics – and international macro went in a different direction, for reasons I’ll get to in a bit. So I have some sense of what was really going on. And while both Wren-Lewis and Waldmann hit on most of the main points, neither I think gets at the important role of personal self-interest. New classical macro was and still is many things – an ideological bludgeon against liberals, a showcase for fancy math, a haven for people who want some kind of intellectual purity in a messy world. But it’s also a self-promoting clique. ...

Wednesday, September 24, 2014

Where and When Macroeconomics Went Wrong

Simon Wren-Lewis:

Where macroeconomics went wrong: In my view, the answer is in the 1970/80s with the New Classical revolution (NCR). However I also think the new ideas that came with that revolution were progressive. I have defended rational expectations, I think intertemporal theory is the right place to start in thinking about consumption, and exploring the implications of time inconsistency is very important to macro policy, as well as many other areas of economics. I also think, along with nearly all macroeconomists, that the microfoundations approach to macro (DSGE models) is a progressive research strategy.
That is why discussion about these issues can become so confused. New Classical economics made academic macroeconomics take a number of big steps forward, but a couple of big steps backward at the same time. The clue to the backward steps comes from the name NCR. The research program was anti-Keynesian (hence New Classical), and it did not want microfounded macro to be an alternative to the then dominant existing methodology, it wanted to replace it (hence revolution). Because the revolution succeeded (although the victory over Keynesian ideas was temporary), generations of students were taught that Keynesian economics was out of date. They were not taught about the pros and cons of the old and new methodologies, but were taught that the old methodology was simply wrong. And that teaching was/is a problem because it itself is wrong. ...

Thursday, September 11, 2014

'Trapped in the ''Dark Corners'''?

A small part of Brad DeLong's response to Olivier Blanchard. I posted a shortened version of Blanchard's argument a week or two ago:

Where Danger Lurks: Until the 2008 global financial crisis, mainstream U.S. macroeconomics had taken an increasingly benign view of economic fluctuations in output and employment. The crisis has made it clear that this view was wrong and that there is a need for a deep reassessment. ...
That small shocks could sometimes have large effects and, as a result, that things could turn really bad, was not completely ignored by economists. But such an outcome was thought to be a thing of the past that would not happen again, or at least not in advanced economies thanks to their sound economic policies. ... We all knew that there were “dark corners”—situations in which the economy could badly malfunction. But we thought we were far away from those corners, and could for the most part ignore them. ...
The main lesson of the crisis is that we were much closer to those dark corners than we thought—and the corners were even darker than we had thought too. ...
How should we modify our benchmark models—the so-called dynamic stochastic general equilibrium (DSGE) models...? The easy and uncontroversial part of the answer is that the DSGE models should be expanded to better recognize the role of the financial system—and this is happening. But should these models be able to describe how the economy behaves in the dark corners?
Let me offer a pragmatic answer. If macroeconomic policy and financial regulation are set in such a way as to maintain a healthy distance from dark corners, then our models that portray normal times may still be largely appropriate. Another class of economic models, aimed at measuring systemic risk, can be used to give warning signals that we are getting too close to dark corners, and that steps must be taken to reduce risk and increase distance. Trying to create a model that integrates normal times and systemic risks may be beyond the profession’s conceptual and technical reach at this stage.
The crisis has been immensely painful. But one of its silver linings has been to jolt macroeconomics and macroeconomic policy. The main policy lesson is a simple one: Stay away from dark corners.

And I responded:

That may be the best we can do for now (have separate models for normal times and "dark corners"), but an integrated model would be preferable. An integrated model would, for example, be better for conducting "policy and financial regulation ... to maintain a healthy distance from dark corners," and our aspirations ought to include models that can explain both normal and abnormal times. That may mean moving beyond the DSGE class of models, or perhaps the technical reach of DSGE models can be extended to incorporate the kinds of problems that can lead to Great Recessions, but we shouldn't be satisfied with models of normal times that cannot explain and anticipate major economic problems.

Here's part of Brad's response:

But… but… but… Macroeconomic policy and financial regulation are not set in such a way as to maintain a healthy distance from dark corners. We are still in a dark corner now. There is no sign of the 4% per year inflation target, the commitments to do what it takes via quantitative easing and rate guidance to attain it, or a fiscal policy that recognizes how the rules of the game are different for reserve currency printing sovereigns when r < n+g. Thus not only are we still in a dark corner, but there is every reason to believe that, should we get out, the sub-2% per year effective inflation targets of North Atlantic central banks and the inappropriate rhetoric and groupthink surrounding fiscal policy make it highly likely that we will soon get back into yet another dark corner. Blanchard’s pragmatic answer is thus the most unpragmatic thing imaginable: the “if” test fails, and so the “then” part of the argument seems to me to be simply inoperative. Perhaps on another planet in which North Atlantic central banks and governments aggressively pursued 6% per year nominal GDP growth targets, Blanchard’s answer would be “pragmatic”. But we are not on that planet, are we?

Moreover, even were we on Planet Pragmatic, it still seems to be wrong. Using current or any visible future DSGE models for forecasting and mainstream scenario planning makes no sense: the DSGE framework imposes restrictions on the allowable emergent properties of the aggregate time series that are routinely rejected at whatever level of frequentist statistical confidence that one cares to specify. The right road is that of Christopher Sims: that of forecasting and scenario planning using relatively unstructured time-series methods that use rather than ignore the correlations in the recent historical data. And for policy evaluation? One should take the historical correlations and argue why reverse-causation and errors-in-variables lead them to underestimate or overestimate policy effects, and possibly get it right. One should not impose a structural DSGE model that identifies the effects of policies but certainly gets it wrong. Sims won that argument. Why do so few people recognize his victory?

Blanchard continues:

Another class of economic models, aimed at measuring systemic risk, can be used to give warning signals that we are getting too close to dark corners, and that steps must be taken to reduce risk and increase distance. Trying to create a model that integrates normal times and systemic risks may be beyond the profession’s conceptual and technical reach at this stage…

For the second task, the question is: whose models of tail risk based on what traditions get to count in the tail risks discussion?

And missing is the third task: understanding what Paul Krugman calls the “Dark Age of macroeconomics”, that jahiliyyah that descended on so much of the economic research, economic policy analysis, and economic policymaking communities starting in the fall of 2007, and in which the center of gravity of our economic policymakers still dwells.
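[The "relatively unstructured time-series methods" DeLong has in mind are reduced-form VARs in the Sims tradition. As a sketch of what that means in practice (made-up data and my own toy code, nothing from DeLong or Sims), here is a two-variable VAR(1) fit by least squares and iterated forward for a forecast, with no structural restrictions imposed on the estimated correlations.]

```python
# Toy reduced-form VAR(1) in the Sims spirit (illustrative sketch, simulated data):
# estimate the system's correlations by least squares and iterate them forward
# for a forecast, imposing no structural restrictions.
import numpy as np

rng = np.random.default_rng(1)
T = 200

# Simulate two correlated series, standing in for, say, output growth and inflation.
A_true = np.array([[0.5, 0.2],
                   [0.1, 0.6]])
y = np.zeros((T, 2))
for t in range(1, T):
    y[t] = A_true @ y[t - 1] + rng.normal(scale=0.5, size=2)

# Estimate y_t = c + A y_{t-1} + e_t equation by equation via OLS.
X = np.column_stack([np.ones(T - 1), y[:-1]])        # constant plus lagged values
B, *_ = np.linalg.lstsq(X, y[1:], rcond=None)        # B stacks [c; A transposed]
c_hat, A_hat = B[0], B[1:].T

# Unrestricted forecast: just iterate the estimated system forward.
path = y[-1]
for _ in range(8):
    path = c_hat + A_hat @ path
print("estimated A:\n", np.round(A_hat, 2))
print("8-step-ahead forecast:", np.round(path, 2))
```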

Sunday, August 31, 2014

'Where Danger Lurks'

Olivier Blanchard (a much shortened version of his arguments, the entire piece is worth reading):

Where Danger Lurks: Until the 2008 global financial crisis, mainstream U.S. macroeconomics had taken an increasingly benign view of economic fluctuations in output and employment. The crisis has made it clear that this view was wrong and that there is a need for a deep reassessment. ...
That small shocks could sometimes have large effects and, as a result, that things could turn really bad, was not completely ignored by economists. But such an outcome was thought to be a thing of the past that would not happen again, or at least not in advanced economies thanks to their sound economic policies. ... We all knew that there were “dark corners”—situations in which the economy could badly malfunction. But we thought we were far away from those corners, and could for the most part ignore them. ...
The main lesson of the crisis is that we were much closer to those dark corners than we thought—and the corners were even darker than we had thought too. ...
How should we modify our benchmark models—the so-called dynamic stochastic general equilibrium (DSGE) models...? The easy and uncontroversial part of the answer is that the DSGE models should be expanded to better recognize the role of the financial system—and this is happening. But should these models be able to describe how the economy behaves in the dark corners?
Let me offer a pragmatic answer. If macroeconomic policy and financial regulation are set in such a way as to maintain a healthy distance from dark corners, then our models that portray normal times may still be largely appropriate. Another class of economic models, aimed at measuring systemic risk, can be used to give warning signals that we are getting too close to dark corners, and that steps must be taken to reduce risk and increase distance. Trying to create a model that integrates normal times and systemic risks may be beyond the profession’s conceptual and technical reach at this stage.
The crisis has been immensely painful. But one of its silver linings has been to jolt macroeconomics and macroeconomic policy. The main policy lesson is a simple one: Stay away from dark corners.

That may be the best we can do for now (have separate models for normal times and "dark corners"), but an integrated model would be preferable. An integrated model would, for example, be better for conducting "policy and financial regulation ... to maintain a healthy distance from dark corners," and our aspirations ought to include models that can explain both normal and abnormal times. That may mean moving beyond the DSGE class of models, or perhaps the technical reach of DSGE models can be extended to incorporate the kinds of problems that can lead to Great Recessions, but we shouldn't be satisfied with models of normal times that cannot explain and anticipate major economic problems.

Tuesday, August 26, 2014

A Reason to Question the Official Unemployment Rate

[Still on the road ... three quick ones before another long day of driving.]

David Leonhardt:

A New Reason to Question the Official Unemployment Rate: ...A new academic paper suggests that the unemployment rate appears to have become less accurate over the last two decades, in part because of this rise in nonresponse. In particular, there seems to have been an increase in the number of people who once would have qualified as officially unemployed and today are considered out of the labor force, neither working nor looking for work.
The trend obviously matters for its own sake: It suggests that the official unemployment rate – 6.2 percent in July – understates the extent of economic pain in the country today. ... The new paper is a reminder that the unemployment rate deserves less attention than it often receives.
Yet the research also relates to a larger phenomenon. The declining response rate to surveys of almost all kinds is among the biggest problems in the social sciences. ...
Why are people less willing to respond? The rise of caller ID and the decline of landlines play a role. But they’re not the only reasons. Americans’ trust in institutions – including government, the media, churches, banks, labor unions and schools – has fallen in recent decades. People seem more dubious of a survey’s purpose and more worried about intrusions into their privacy than in the past.
“People are skeptical – Is this a real survey? What they are asking me?” Francis Horvath, of the Labor Department, says. ...

Tuesday, August 19, 2014

The Agent-Based Method

Rajiv Sethi:

The Agent-Based Method: It's nice to see some attention being paid to agent-based computational models on economics blogs, but Chris House has managed to misrepresent the methodology so completely that his post is likely to do more harm than good. 

In comparing the agent-based method to the more standard dynamic stochastic general equilibrium (DSGE) approach, House begins as follows:

Probably the most important distinguishing feature is that, in an ABM, the interactions are governed by rules of behavior that the modeler simply encodes directly into the system individuals who populate the environment.

So far so good, although I would not have used the qualifier "simply", since encoded rules can be highly complex. For instance, an ABM that seeks to describe the trading process in an asset market may have multiple participant types (liquidity, information, and high-frequency traders for instance) and some of these may be using extremely sophisticated strategies.

How does this approach compare with DSGE models? House argues that the key difference lies in assumptions about rationality and self-interest:

People who write down DSGE models don’t do that. Instead, they make assumptions on what people want. They also place assumptions on the constraints people face. Based on the combination of goals and constraints, the behavior is derived. The reason that economists set up their theories this way – by making assumptions about goals and then drawing conclusions about behavior – is that they are following in the central tradition of all of economics, namely that allocations and decisions and choices are guided by self-interest. This goes all the way back to Adam Smith and it’s the organizing philosophy of all economics. Decisions and actions in such an environment are all made with an eye towards achieving some goal or some objective. For consumers this is typically utility maximization – a purely subjective assessment of well-being.  For firms, the objective is typically profit maximization. This is exactly where rationality enters into economics. Rationality means that the “agents” that inhabit an economic system make choices based on their own preferences.

This, to say the least, is grossly misleading. The rules encoded in an ABM could easily specify what individuals want and then proceed from there. For instance, we could start from the premise that our high-frequency traders want to maximize profits. They can only do this by submitting orders of various types, the consequences of which will depend on the orders placed by others. Each agent can have a highly sophisticated strategy that maps historical data, including the current order book, into new orders. The strategy can be sensitive to beliefs about the stream of income that will be derived from ownership of the asset over a given horizon, and may also be sensitive to beliefs about the strategies in use by others. Agents can be as sophisticated and forward-looking in their pursuit of self-interest in an ABM as you care to make them; they can even be set up to make choices based on solutions to dynamic programming problems, provided that these are based on private beliefs about the future that change endogenously over time. 

What you cannot have in an ABM is the assumption that, from the outset, individual plans are mutually consistent. That is, you cannot simply assume that the economy is tracing out an equilibrium path. The agent-based approach is at heart a model of disequilibrium dynamics, in which the mutual consistency of plans, if it arises at all, has to do so endogenously through a clearly specified adjustment process. This is the key difference between the ABM and DSGE approaches, and it's right there in the acronym of the latter.

A typical (though not universal) feature of agent-based models is an evolutionary process that allows successful strategies to proliferate over time at the expense of less successful ones. Since success itself is frequency dependent---the payoffs to a strategy depend on the prevailing distribution of strategies in the population---we have strong feedback between behavior and environment. Returning to the example of trading, an arbitrage-based strategy may be highly profitable when rare but much less so when prevalent. This rich feedback between environment and behavior, with the distribution of strategies determining the environment faced by each, and the payoffs to each strategy determining changes in their composition, is a fundamental feature of agent-based models. In failing to understand this, House makes claims that are close to being the opposite of the truth:

Ironically, eliminating rational behavior also eliminates an important source of feedback – namely the feedback from the environment to behavior.  This type of two-way feedback is prevalent in economics and it’s why equilibria of economic models are often the solutions to fixed-point mappings. Agents make choices based on the features of the economy.  The features of the economy in turn depend on the choices of the agents. This gives us a circularity which needs to be resolved in standard models. This circularity is cut in the ABMs however since the choice functions do not depend on the environment. This is somewhat ironic since many of the critics of economics stress such feedback loops as important mechanisms.

It is absolutely true that dynamics in agent-based models do not require the computation of fixed points, but this is a strength rather than a weakness, and has nothing to do with the absence of feedback effects. These effects arise dynamically in calendar time, not through some mystical process by which coordination is instantaneously achieved and continuously maintained. 
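[To make the frequency-dependent feedback concrete, here is a toy simulation (mine, not Sethi's): an "arbitrage" strategy earns more than a passive one when rare and less when common, and a discrete replicator step moves the population share toward the interior mix where the two payoffs are equal, entirely in calendar time and without any fixed-point computation.]

```python
# Toy replicator dynamics (illustrative only): the payoff to an "arbitrage"
# strategy falls as more of the population adopts it, so its share converges
# to an interior mix rather than taking over or dying out.

def payoffs(share_arb):
    """Illustrative payoffs: arbitrage profits are competed away as it spreads."""
    pi_arb = 2.0 - 3.0 * share_arb      # high when rare, low when prevalent
    pi_passive = 1.0                    # flat benchmark return
    return pi_arb, pi_passive

share = 0.05                            # arbitrageurs start out rare
for step in range(51):
    pi_arb, pi_passive = payoffs(share)
    mean_payoff = share * pi_arb + (1 - share) * pi_passive
    if step % 10 == 0:
        print(f"step {step:2d}: arbitrage share = {share:.3f}")
    # Discrete replicator step: a strategy's share grows with its relative payoff.
    share *= pi_arb / mean_payoff
# The share settles near 1/3, where the two strategies earn the same payoff.
```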

It's worth thinking about how the learning literature in macroeconomics, dating back to Marcet and Sargent and substantially advanced by Evans and Honkapohja, fits into this schema. Such learning models drop the assumption that beliefs continuously satisfy mutual consistency, and therefore take a small step towards the ABM approach. But it really is a small step, since a great deal of coordination continues to be assumed. For instance, in the canonical learning model, there is a parameter about which learning occurs, and the system is self-referential in that beliefs about the parameter determine its realized value. This allows for the possibility that individuals may hold incorrect beliefs, but limits quite severely---and more importantly, exogenously---the structure of such errors. This is done for understandable reasons of tractability, and allows for analytical solutions and convergence results to be obtained. But there is way too much coordination in beliefs across individuals assumed for this to be considered part of the ABM family.
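[A minimal sketch of the kind of self-referential learning model being described (in the spirit of that literature, not any particular paper): the realized outcome depends on agents' shared belief about its mean, and the belief is updated by recursive least squares.]

```python
# Minimal self-referential learning sketch (illustrative): the outcome depends on
# agents' shared belief about its mean, and the belief is updated by recursive
# least squares (here, a recursively computed sample average).
import numpy as np

rng = np.random.default_rng(2)
alpha, beta = 2.0, 0.5        # actual law of motion: y_t = alpha + beta * belief + e_t
belief = 0.0                  # agents' initial estimate of the mean of y
T = 2000

for t in range(1, T + 1):
    y = alpha + beta * belief + rng.normal(scale=0.5)   # outcome depends on the belief
    belief += (y - belief) / t                          # decreasing-gain RLS update

print(f"learned belief: {belief:.2f}")
print(f"rational expectations value alpha/(1 - beta): {alpha / (1 - beta):.2f}")
# With beta < 1 the expectational-stability condition holds and learning converges
# to the rational expectations equilibrium; note how much coordination (one shared
# belief, a known functional form) the exercise already assumes.
```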

The title of House's post asks (in response to an earlier piece by Mark Buchanan) whether agent-based models really are the future of the discipline. I have argued previously that they are enormously promising, but face one major methodological obstacle that needs to be overcome. This is the problem of quality control: unlike papers in empirical fields (where causal identification is paramount) or in theory (where robustness is key), there is no set of criteria, widely agreed upon, that can allow a referee to determine whether a given set of simulation results provides a deep and generalizable insight into the workings of the economy. One of the most celebrated agent-based models in economics---the Schelling segregation model---is also among the very earliest. Effective and acclaimed recent exemplars are in short supply, though there is certainly research effort at the highest levels pointed in this direction. The claim that such models can displace the equilibrium approach entirely is much too grandiose, but they should be able to find ample space alongside more orthodox approaches in time.
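[For readers who have not seen it, here is a stripped-down, one-dimensional version of the Schelling segregation model mentioned above (my own toy implementation, not Schelling's original checkerboard): agents with only a mild preference for like-type neighbors still end up in clearly sorted clusters.]

```python
# Toy 1-D Schelling segregation model (illustrative implementation only).
# Agents of two types sit on a ring; an agent is unhappy if fewer than 30% of
# its occupied neighbors (within two cells on each side) share its type, and
# unhappy agents move to a randomly chosen empty cell.
import random

random.seed(0)
N, N_EMPTY, THRESHOLD, RADIUS = 200, 20, 0.3, 2

cells = ["A"] * 90 + ["B"] * 90 + [None] * N_EMPTY    # 90 + 90 + 20 = 200 cells
random.shuffle(cells)

def neighbors(i):
    return [cells[(i + d) % N] for d in range(-RADIUS, RADIUS + 1) if d != 0]

def unhappy(i):
    occupied = [n for n in neighbors(i) if n is not None]
    if not occupied:
        return False
    same = sum(1 for n in occupied if n == cells[i])
    return same / len(occupied) < THRESHOLD

def like_neighbor_share():
    """Average share of like-type agents among each agent's occupied neighbors."""
    shares = []
    for i, kind in enumerate(cells):
        occupied = [n for n in neighbors(i) if n is not None]
        if kind is not None and occupied:
            shares.append(sum(1 for n in occupied if n == kind) / len(occupied))
    return sum(shares) / len(shares)

print(f"initial like-neighbor share: {like_neighbor_share():.2f}")
for _ in range(20000):                                 # asynchronous random updating
    i = random.randrange(N)
    if cells[i] is not None and unhappy(i):
        j = random.choice([k for k in range(N) if cells[k] is None])
        cells[j], cells[i] = cells[i], None
print(f"final like-neighbor share:   {like_neighbor_share():.2f}")
# Schelling's point: the mild individual preference encoded above tends to
# produce far more segregation in the aggregate than any individual wants.
```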

---

The example of interacting trading strategies in this post wasn't pulled out of thin air; market ecology has been a recurrent theme on this blog. In ongoing work with Yeon-Koo Che and Jinwoo Kim, I am exploring the interaction of trading strategies in asset markets, with the goal of addressing some questions about the impact on volatility and welfare of high-frequency trading. We have found the agent-based approach very useful in thinking about these questions, and I'll present some preliminary results at a session on the methodology at the Rethinking Economics conference in New York next month. The event is free and open to the public but seating is limited and registration required.