Category Archive for: Methodology

Tuesday, May 19, 2015

'The Most Misleading Definition in Economics'

John Quiggin:

The most misleading definition in economics (draft excerpt from Economics in Two Lessons), by  John Quiggin: After a couple of preliminary posts, here goes with my first draft excerpt from my planned book on Economics in Two Lessons. They won’t be in any particular order, just tossed up for comment when I think I have something that might interest readers here. To remind you, the core idea of the book is that of discussing all of economic policy in terms of “opportunity cost”. My first snippet is about
Pareto optimality
The situation where there is no way to make some people better off without making anyone worse off is often referred to as “Pareto optimal” after the Italian economist and political theorist Vilfredo Pareto, who developed the underlying concept. “Pareto optimal” is arguably the most misleading term in economics (and there are plenty of contenders). ...

Describing a situation as “optimal” implies that it is the unique best outcome. As we shall see this is not the case. Pareto, and followers like Hazlitt, seek to claim unique social desirability for market outcomes by definition rather than demonstration. ...

If that were true, then only the market outcome associated with the existing distribution of property rights would be Pareto optimal. Hazlitt, like many subsequent free market advocates, implicitly assumes that this is the case. In reality, though, there are infinitely many possible allocations of property rights, and infinitely many allocations of goods and services that meet the definition of “Pareto optimality”. A highly egalitarian allocation can be Pareto optimal. So can any allocation where one person has all the wealth and everyone else is reduced to a bare subsistence. ...
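
[A toy example makes Quiggin's point concrete. The sketch below is mine, not his, and assumes the simplest possible setting: one divisible good, a fixed endowment of 100 units, and two people who each prefer more to less. Every allocation that uses up the whole endowment passes the Pareto test, the even split and the winner-take-all split alike, which is exactly why "optimal" oversells what the concept delivers.]

```python
# A deliberately tiny illustration (mine, not Quiggin's): one divisible good,
# a fixed endowment of 100 units, and two people who each prefer more to less.
# Any allocation that exhausts the endowment is Pareto optimal, however unequal,
# because giving one person more must mean giving the other less.

def pareto_dominates(a, b):
    """True if allocation a leaves no one worse off and someone better off than b."""
    return all(x >= y for x, y in zip(a, b)) and any(x > y for x, y in zip(a, b))

def is_pareto_optimal(allocation, total=100):
    """Check a two-person allocation against every feasible integer allocation."""
    feasible = [(i, total - i) for i in range(total + 1)]
    return not any(pareto_dominates(alt, allocation) for alt in feasible)

print(is_pareto_optimal((50, 50)))   # True: the egalitarian split
print(is_pareto_optimal((100, 0)))   # True: one person holds everything
print(is_pareto_optimal((40, 50)))   # False: 10 units unallocated; e.g. (50, 50) dominates
```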

Sunday, May 17, 2015

'Blaming Keynes'

Simon Wren-Lewis:

Blaming Keynes: A few people have asked me to respond to this FT piece from Niall Ferguson. I was reluctant to, because it is really just a bit of triumphalist Tory tosh. That such things get published in the Financial Times is unfortunate but I’m afraid not surprising in this case. However I want to write later about something else that made reference to it, so saying a few things here first might be useful.
The most important point concerns style. This is not the kind of thing an academic should want to write. It makes no attempt to be true to evidence, and just cherry picks numbers to support its argument. I know a small number of academics think they can drop their normal standards when it comes to writing political propaganda, but I think they are wrong to do so. ...

'Ed Prescott is No Robert Solow, No Gary Becker'

Paul Romer continues his assault on "mathiness":

Ed Prescott is No Robert Solow, No Gary Becker: In his comment on my Mathiness paper, Noah Smith asks for more evidence that the theory in the McGrattan-Prescott paper that I cite is any worse than the theory I compare it to by Robert Solow and Gary Becker. I agree with Brad DeLong’s defense of the Solow model. I’ll elaborate, by using the familiar analogy that theory is to the world as a map is to terrain.

There is no such thing as the perfect map. This does not mean that the incoherent scribblings of McGrattan and Prescott are on a par with the coherent, low-resolution Solow map that is so simple that all economists have memorized it. Nor with the Becker map that has become part of the everyday mental model of people inside and outside of economics.

Noah also notes that I go into more detail about the problems in the Lucas and Moll (2014) paper. Just to be clear, this is not because it is worse than the papers by McGrattan and Prescott or Boldrin and Levine. Honestly, I’d be hard pressed to say which is the worst. They all display the sloppy mixture of words and symbols that I’m calling mathiness. Each is awful in its own special way.

What should worry economists is the pattern, not any one of these papers. And our response. Why do we seem resigned to tolerating papers like this? What cumulative harm are they doing?

The resignation is why I conjectured that we are stuck in a lemons equilibrium in the market for mathematical theory. Noah’s jaded question–Is the theory of McGrattan-Prescott really any worse than the theory of Solow and Becker?–may be indicative of what many economists feel after years of being bullied by bad theory. And as I note in the paper, this resignation may be why empirically minded economists like Piketty and Zucman stay as far away from theory as possible. ...

[He goes on to give more details using examples from the papers.]

Friday, May 15, 2015

'Mathiness in the Theory of Economic Growth'

Paul Romer:

My Paper “Mathiness in the Theory of Economic Growth”: I have a new paper in the Papers and Proceedings Volume of the AER that is out in print and on the AER website. A short version of the supporting appendix is available here. It should eventually be available on the AER website but has not been posted yet. A longer version with more details behind the calculations is available here.

The point of the paper is that if we want economics to be a science, we have to recognize that it is not ok for macroeconomists to hole up in separate camps, one that supports its version of the geocentric model of the solar system and another that supports the heliocentric model. As scientists, we have to hold ourselves to a standard that requires us to reach a consensus about which model is right, and then to move on to other questions.

The alternative to science is academic politics, where persistent disagreement is encouraged as a way to create distinctive sub-group identities.

The usual way to protect a scientific discussion from the factionalism of academic politics is to exclude people who opt out of the norms of science. The challenge lies in knowing how to identify them.

From my paper:

The style that I am calling mathiness lets academic politics masquerade as science. Like mathematical theory, mathiness uses a mixture of words and symbols, but instead of making tight links, it leaves ample room for slippage between statements in natural versus formal language and between statements with theoretical as opposed to empirical content.

Persistent disagreement is a sign that some of the participants in a discussion are not committed to the norms of science. Mathiness is a symptom of this deeper problem, but one that is particularly damaging because it can generate a broad backlash against the genuine mathematical theory that it mimics. If the participants in a discussion are committed to science, mathematical theory can encourage a unique clarity and precision in both reasoning and communication. It would be a serious setback for our discipline if economists lose their commitment to careful mathematical reasoning.

I focus on mathiness in growth models because growth is the field I know best, one that gave me a chance to observe closely the behavior I describe. ...

The goal in starting this discussion is to ensure that economics is a science that makes progress toward truth. ... Science is the most important human accomplishment. An investment in science can offer a higher social rate of return than any other a person can make. It would be tragic if economists did not stay current on the periodic maintenance needed to protect our shared norms of science from infection by the norms of politics.

[I cut quite a bit -- see the full post for more.]

Wednesday, May 06, 2015

'Richard Thaler Misbehaves–or, Rather, Behaves'

Brad DeLong:

Richard Thaler Misbehaves–or, Rather, Behaves: A good review by Jonathan Knee of the extremely-sharp Richard Thaler’s truly excellent new book, Misbehaving. The intellectual evolution of the Chicago School is very interesting indeed. Back in 1950 Milton Friedman would argue that economists should reason as if people were rational optimizers as long as such reasoning produced predictions about economic variables–prices and quantities–that fit the data. He left to one side the consideration that even if the prices and quantities were right, the assessments of societal well-being would be wrong.

By the time I entered the profession 30 years later, however, the Chicago School–but not Milton Friedman–had evolved so that it no longer cared whether its models actually fit the data or not. The canonical Chicago empirical paper seized the high ground of the null hypothesis for the efficient market thesis and then carefully restricted the range and type of evidence allowed into the room to achieve the goal of failing to reject the null at the 0.05 significance level. The canonical Chicago theoretical paper became an explanation of why a population of rational optimizing agents could route around and neutralize the impact of any specified market failure.

Note that Friedman and to a lesser degree Stigler had little patience with these lines of reasoning. Friedman increasingly based his policy recommendations on the moral value of free choice to live one’s life as one thought best–objecting to people being told, or even nudged, what to do–and on the inability of voters to have even a chance of curbing government failures arising out of bureaucracy, machine corruption, plutocratic corruption, and simply the poorly-informed do-gooder “there oughta be a law!” impulse. Stigler tended to focus on the incoherence and complexity of government policy in, for example, antitrust: arising out of a combination of scholastic autonomous legal doctrine development and of legislatures that at different times had sought to curb monopoly, empower small-scale entrepreneurs, protect large-scale intellectual and other property interests, and promote economies of scale. At the intellectual level, making the point that the result was incoherent and substantially self-neutralizing policy was easy–but it was not Stigler so much as later generations who were eager to jump to the unwarranted conclusion that we would be better off with the entire edifice razed to the ground.

As I say often, doing real economics is very very hard. You have to start with how people actually behave, with what the institutions are that curb or amplify their behavioral and calculation successes or failures at choosing rational actions, and with what emergent regularities we see in the aggregates. And I have been often struck by Chicago-School baron Robert Lucas’s declaration that we cannot hope to do real economics–that all we can do is grind out papers on how the economy would behave if institutions were transparent and all humans were rational optimizers, for both actual institutions and actual human psychology remain beyond our grasp:

Economics tries to… make predictions about the way… 280 million people are going to respond if you change something…. Kahneman and Tversky… can’t even tell us anything interesting about how a couple that’s been married for ten years splits or makes decisions about what city to live in–let alone 250 million…. We’re not going to build up useful economics… starting from individuals…. Behavioral economics should be on the reading list…. But… as an alternative to what macroeconomics or public finance people are doing… it’s not going to come from behavioral economics… at least in my lifetime…

Yet it is not impossible to do real economics, and thus to be a good economist.

But it does mean that, as John Maynard Keynes wrote in his 1924 obituary for his teacher Alfred Marshall, while:

the study of economics does not seem to require any specialised gifts of an unusually high order…. Is it not… a very easy subject compared with the higher branches of philosophy and pure science? Yet good, or even competent, economists are the rarest of birds.

And Keynes continues:

An easy subject, at which very few excel! The paradox finds its explanation, perhaps, in that the master-economist must possess a rare combination of gifts… mathematician, historian, statesman, philosopher… understand symbols and speak in words… contemplate the particular in terms of the general… touch abstract and concrete in the same flight of thought… study the present in the light of the past for the purposes of the future. No part of man’s nature or his institutions must lie entirely outside his regard…. Much, but not all, of this ideal many-sidedness Marshall possessed…

John Maynard Keynes would see Richard Thaler as a very good economist indeed. ...

Sunday, April 26, 2015

On Microfoundations

Via Diane Coyle, a quote from Alfred Marshall’s Elements of the Economics of Industry:

...He wrote that earlier economists:
“Paid almost exclusive attention to the motives of individual action. But it must not be forgotten that economists, like all other students of social science, are concerned with individuals chiefly as members of the social organism. As a cathedral is something more than the stones of which it is built, as a person is more than a series of thoughts and feelings, so the life of society is something more than the sum of the lives of its individual members. It is true that the action of the whole is made up of that of its constituent parts; and that in most economic problems the best starting point is to be found in the motives that affect the individual… but it is also true that economics has a great and increasing concern in motives connected with the collective ownership of property and the collective pursuit of important aims.”

Tuesday, April 21, 2015

'Rethinking Macroeconomic Policy'

Olivier Blanchard at Vox EU:

Rethinking macroeconomic policy: Introduction, by Olivier Blanchard: On 15 and 16 April 2015, the IMF hosted the third conference on “Rethinking Macroeconomic Policy”. I had initially chosen as the title and subtitle “Rethinking Macroeconomic Policy III. Down in the trenches”. I thought of the first conference in 2011 as having identified the main failings of previous policies, the second conference in 2013 as having identified general directions, and this conference as a progress report.
My subtitle was rejected by one of the co-organisers, namely Larry Summers. He argued that I was far too optimistic, that we were nowhere close to knowing where we were going. Arguing with Larry is tough, so I chose an agnostic title, and shifted to “Rethinking Macro Policy III. Progress or confusion?”
Where do I think we are today? I think both Larry and I are right. I do not say this for diplomatic reasons. We are indeed proceeding in the trenches. But where the trenches are eventually going remains unclear. This is the theme I shall develop in my remarks, focusing on macroprudential tools, monetary policy, and fiscal policy.


Monday, April 13, 2015

In Defense of Modern Macroeconomic Theory

A small part of a much longer post from David Andolfatto (followed by some comments of my own):

In defense of modern macro theory: The 2008 financial crisis was a traumatic event. Like all social trauma, it invoked a variety of emotional responses, including the natural (if unbecoming) human desire to find someone or something to blame. Some of the blame has been directed at segments of the economic profession. It is the nature of some of these criticisms that I'd like to talk about today. ...
The dynamic general equilibrium (DGE) approach is the dominant methodology in macro today. I think this is so because of its power to organize thinking in a logically consistent manner, its ability to generate reasonable conditional forecasts, as well as its great flexibility--a property that permits economists of all political persuasions to make use of the apparatus. ...

The point I want to make here is not that the DGE approach is the only way to go. I am not saying this at all. In fact, I personally believe in the coexistence of many different methodologies. The science of economics is not settled, after all. The point I am trying to make is that the DGE approach is not insensible (despite the claims of many critics who, I think, are sometimes driven by non-scientific concerns). ...

Once again (lest I be misunderstood, which I'm afraid seems unavoidable these days) I am not claiming that DGE is the be-all and end-all of macroeconomic theory. There is still a lot we do not know and I think it would be a good thing to draw on the insights offered by alternative approaches. I do not, however, buy into the accusation that there is "too much math" in modern theory. Math is just a language. Most people do not understand this language and so they have a natural distrust of arguments written in it. ... Before criticizing, either learn the language or appeal to reliable translations...

As for the teaching of macroeconomics, if the crisis has led more professors to pay more attention to financial market frictions, then this is a welcome development. I also fall in the camp that stresses the desirability of teaching more economic history and placing greater emphasis on matching theory with data. ... Thus, one could reasonably expect a curriculum to be modified to include more history, history of thought, heterodox approaches, etc. But this is a far cry from calling for the abandonment of DGE theory. Do not blame the tools for how they were (or were not) used.

I've said a lot of what David says about modern macroeconomic models at one time or another in the past; for example, it's not the tools of macroeconomics, it's how they are used. But I do think he leaves out one important factor: the need to ask the right question (and why we didn't prior to the crisis). This is from August, 2009:

In The Economist, Robert Lucas responds to recent criticism of macroeconomics ("In Defense of the Dismal Science"). Here's my entry at Free Exchange in response to his essay:

Lucas roundtable: Ask the right questions, by Mark Thoma: In his essay, Robert Lucas defends macroeconomics against the charge that it is "valueless, even harmful", and that the tools economists use are "spectacularly useless".

I agree that the analytical tools economists use are not the problem. We cannot fully understand how the economy works without employing models of some sort, and we cannot build coherent models without using analytic tools such as mathematics. Some of these tools are very complex, but there is nothing wrong with sophistication so long as sophistication itself does not become the main goal, and sophistication is not used as a barrier to entry into the theorist's club rather than an analytical device to understand the world.

But all the tools in the world are useless if we lack the imagination needed to build the right models. We ... have to ask the right questions before we can build the right models.

The problem wasn't the tools that macroeconomists use, it was the questions that we asked. The major debates in macroeconomics had nothing to do with the possibility of bubbles causing a financial system meltdown. That's not to say that there weren't models here and there that touched upon these questions, but the main focus of macroeconomic research was elsewhere. ...

The interesting question to me, then, is why we failed to ask the right questions. ...

Why did we, for the most part, fail to ask the right questions? Was it lack of imagination, was it the sociology within the profession, the concentration of power over what research gets highlighted, the inadequacy of the tools we brought to the problem, the fact that nobody will ever be able to predict these types of events, or something else?

It wasn't the tools, and it wasn't lack of imagination. As Brad DeLong points out, the voices were there—he points to Michael Mussa for one—but those voices were not heard. Nobody listened even though some people did see it coming. So I am more inclined to cite the sociology within the profession or the concentration of power as the main factors that caused us to dismiss these voices. ...

I don't know for sure the extent to which the ability of a small number of people in the field to control the academic discourse led to a concentration of power that stood in the way of alternative lines of investigation, or the extent to which the ideology that market prices always tend to move toward their long-run equilibrium values caused us to ignore voices that foresaw the developing bubble and coming crisis. But something caused most of us to ask the wrong questions, and to dismiss the people who got it right, and I think one of our first orders of business is to understand how and why that happened.

Here's an interesting quote from Thomas Sargent along the same lines:

The criticism of real business cycle models and their close cousins, the so-called New Keynesian models, is misdirected and reflects a misunderstanding of the purpose for which those models were devised. These models were designed to describe aggregate economic fluctuations during normal times when markets can bring borrowers and lenders together in orderly ways, not during financial crises and market breakdowns.

Which to me is another way of saying we didn't foresee the need to ask questions (and build models) that would be useful in a financial crisis -- we were focused on models that would explain "normal times" (which is connected to the fact that we thought the Great Moderation would continue; arrogance on the part of economists led to the belief that modern policy tools, particularly from the Fed, would prevent major meltdowns, financial or otherwise). That is happening now, so we'll be much more prepared if history repeats itself, but I have to wonder what other questions we should be asking, but aren't.

Let me add one more thing (a few excerpts from a post in 2010) about the sociology within economics:

I want to follow up on the post highlighting attempts to attack the messengers -- attempts to discredit Brad DeLong and Paul Krugman on macroeconomic policy in particular -- rather than engage academically with the message they are delivering (Krugman's response). ...
One of the objections often raised is that Krugman and DeLong are not, strictly speaking, macroeconomists. But if Krugman, DeLong, and others are expressing the theoretical and empirical results concerning macroeconomic policy accurately, does it really matter if we can strictly classify them as macroeconomists? Why is that important except as an attempt to discredit the message they are delivering? ... Attacking people rather than discussing ideas avoids even engaging on the issues. And when it comes to the ideas -- here I am talking most about fiscal policy -- as I've already noted in the previous post, the types of policies Krugman, DeLong, and others have advocated (and I should include myself as well) can be fully supported using modern macroeconomic models. ...
So, in answer to those who objected to my defending modern macro, you are partly right. I do think the tools and techniques macroeconomists use have value, and that the standard macro model in use today represents progress. But I also think the standard macro model used for policy analysis, the New Keynesian model, is unsatisfactory in many ways and I'm not sure it can be fixed. Maybe it can, but that's not at all clear to me. In any case, in my opinion the people who have strong, knee-jerk reactions whenever someone challenges the standard model in use today are the ones standing in the way of progress. It's fine to respond academically, a contest between the old and the new is exactly what we need to have, but the debate needs to be over ideas rather than an attack on the people issuing the challenges.

Tuesday, April 07, 2015

In Search of Better Macroeconomic Models

I have a new column:

In Search of Better Macroeconomic Models: Modern macroeconomic models did not perform well during the Great Recession. What needs to be done to fix them? Can the existing models be patched up, or are brand new models needed? ...

It's mostly about the recent debate on whether we need microfoundations in macroeconomics.

Saturday, April 04, 2015

'Do not Underestimate the Power of Microfoundations'

Simon Wren-Lewis takes a shot at answering Brad DeLong's question about microfoundations:

Do not underestimate the power of microfoundations: Brad DeLong asks why the New Keynesian (NK) model, which was originally put forth as simply a means of demonstrating how sticky prices within an RBC framework could produce Keynesian effects, has managed to become the workhorse of modern macro, despite its many empirical deficiencies. ... Brad says his question is closely related to the “question of why models that are microfounded in ways we know to be wrong are preferable in the discourse to models that try to get the aggregate emergent properties right.”...
Why are microfounded models so dominant? From my perspective this is a methodological question, about the relative importance of ‘internal’ (theoretical) versus ‘external’ (empirical) consistency. ...
 I would argue that the New Classical (counter) revolution was essentially a methodological revolution. However..., it will be a struggle to get macroeconomists below a certain age to admit this is a methodological issue. Instead they view microfoundations as just putting right inadequacies with what went before.
So, for example, you will be told that internal consistency is clearly an essential feature of any model, even if it is achieved by abandoning external consistency. ... In essence, many macroeconomists today are blind to the fact that adopting microfoundations is a methodological choice, rather than simply a means of correcting the errors of the past.
I think this has two implications for those who want to question the microfoundations hegemony. The first is that the discussion needs to be about methodology, rather than individual models. Deficiencies with particular microfounded models, like the NK model, are generally well understood, and from a microfoundations point of view simply provide an agenda for more research. Second, lack of familiarity with methodology means that this discussion cannot presume knowledge that is not there. ... That makes discussion difficult, but I’m not sure it makes it impossible.

Wednesday, March 25, 2015

'Anti-Keynesian Delusions'

Paul Krugman continues the discussion on the use of the Keynesian model:

Anti-Keynesian Delusions: I forgot to congratulate Mark Thoma on his tenth blogoversary, so let me do that now. ...
Today Mark includes a link to one of his own columns, a characteristically polite and cool-headed response to the latest salvo from David K. Levine. Brad DeLong has also weighed in, less politely.
I’d like to weigh in with a more general piece of impoliteness, and note a strong empirical regularity in this whole area. Namely, whenever someone steps up to declare that Keynesian economics is logically and empirically flawed, has been proved wrong and refuted, you know what comes next: a series of logical and empirical howlers — crude errors of reasoning, assertions of fact that can be checked and rejected in a minute or two.
Levine doesn’t disappoint. ...

He goes on to explain in detail.

Update: Brad DeLong also comments.

Tuesday, March 24, 2015

'Macro Wars: The Attack of the Anti-Keynesians'

I have a new column:

Macro Wars: The Attack of the Anti-Keynesians, by Mark Thoma: The ongoing war between the Keynesians and the anti-Keynesians appears to be heating up again. The catalyst for this round of fighting is The Keynesian Illusion by David K. Levine, which elicited responses such as this and this from Brad DeLong and Nick Rowe.
The debate is about the source of economic fluctuations and the government’s ability to counteract them with monetary and fiscal policy. One of the issues is the use of “old fashioned” Keynesian models – models that have supposedly been rejected by macroeconomists in favor of modern macroeconomic models – to explain and understand the Great Recession and to make monetary and fiscal policy recommendations. As Levine says, “Robert Lucas, Edward Prescott, and Thomas Sargent … rejected Keynesianism because it doesn't work… As it happens we have developed much better theories…”
I believe the use of “old-fashioned” Keynesian models to analyze the Great Recession can be defended. ...

Wednesday, March 18, 2015

'Is the Walrasian Auctioneer Microfounded?'

Simon Wren-Lewis (he says this is "For macroeconomists"):

Is the Walrasian Auctioneer microfounded?: I found this broadside against Keynesian economics by David K. Levine interesting. It is clear at the end that he is a child of the New Classical revolution. Before this revolution he was far from ignorant of Keynesian ideas. He adds: “Knowledge of Keynesianism and Keynesian models is even deeper for the great Nobel Prize winners who pioneered modern macroeconomics - a macroeconomics with people who buy and sell things, who save and invest - Robert Lucas, Edward Prescott, and Thomas Sargent among others. They also grew up with Keynesian theory as orthodoxy - more so than I. And we rejected Keynesianism because it doesn't work, not because of some aesthetic sense that the theory is insufficiently elegant.”
The idea is familiar: New Classical economists do things properly, by founding their analysis in the microeconomics of individual production, savings and investment decisions. [2] It is no surprise therefore that many of today’s exponents of this tradition view their endeavour as a natural extension of the Walrasian General Equilibrium approach associated with Arrow, Debreu and McKenzie. But there is one agent in that tradition that is as far from microfoundations as you can get: the Walrasian auctioneer. It is this auctioneer, and not people, who typically sets prices. ...
Now your basic New Keynesian model contains a huge number of things that remain unrealistic or are just absent. However I have always found it extraordinary that some New Classical economists declare such models as lacking firm microfoundations, when these models at least try to make up for one area where RBC models lack any microfoundations at all, which is price setting. A clear case of the pot calling the kettle black! I have never understood why New Keynesians can be so defensive about their modelling of price setting. Their response every time should be ‘well at least it’s better than assuming an intertemporal auctioneer’.[1] ...
As to the last sentence in the quote from Levine above, I have talked before about the assertion that Keynesian economics did not work, and the implication that RBC models work better. He does not talk about central banks, or monetary policy. If he had, he would have to explain why most of the people working for them seem to believe that New Keynesian type models are helpful in their job of managing the economy. Perhaps these things are not mentioned because it is so much easier to stay living in the 1980s, in those glorious days (for some) when it appeared as if Keynesian economics had been defeated for good.

Saturday, March 14, 2015

'John and Maynard’s Excellent Adventure'

Paul Krugman defends IS-LM analysis (I'd make one qualification. Models are built to answer specific questions; we do not have one grand unifying model to use for all questions. IS-LM models were built to answer exactly the kinds of questions we encountered during the Great Recession, and the IS-LM model provided good answers (especially if one remembers where the model encounters difficulties). DSGE models were built to address other issues, and it's not surprising they didn't do very well when they were pushed to address questions they weren't designed to answer. The best model to use depends upon the question one is asking):

John and Maynard’s Excellent Adventure: When I tell people that macroeconomic analysis has been triumphantly successful in recent years, I tend to get strange looks. After all, wasn’t everyone predicting lots of inflation? Didn’t policymakers get it all wrong? Haven’t the academic economists been squabbling nonstop?
Well, as a card-carrying economist I disavow any responsibility for Rick Santelli and Larry Kudlow; I similarly declare that Paul Ryan and Olli Rehn aren’t my fault. As for the economists’ disputes, well, let me get to that in a bit.
I stand by my claim, however. The basic macroeconomic framework that we all should have turned to, the framework that is still there in most textbooks, performed spectacularly well: it made strong predictions that people who didn’t know that framework found completely implausible, and those predictions were vindicated. And the framework in question – basically John Hicks’s interpretation of John Maynard Keynes – was very much the natural way to think about the issues facing advanced countries after 2008. ...
I call this a huge success story – one of the best examples in the history of economics of getting things right in an unprecedented environment.
The sad thing, of course, is that this incredibly successful analysis didn’t have much favorable impact on actual policy. Mainly that’s because the Very Serious People are too serious to play around with little models; they prefer to rely on their sense of what markets demand, which they continue to consider infallible despite having been wrong about everything. But it also didn’t help that so many economists also rejected what should have been obvious.
Why? Many never learned simple macro models – if it doesn’t involve microfoundations and rational expectations, preferably with difficult math, it must be nonsense. (Curiously, economists in that camp have also proved extremely prone to basic errors of logic, probably because they have never learned to work through simple stories.) Others, for what looks like political reasons, seemed determined to come up with some reason, any reason, to be against expansionary monetary and fiscal policy.
But that’s their problem. From where I sit, the past six years have been hugely reassuring from an intellectual point of view. The basic model works; we really do know what we’re talking about.

[The original is quite a bit longer.]
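
[As a minimal illustration of the framework Krugman is describing, here is a toy IS-LM model with a zero lower bound. The parameter values are my own and purely for illustration, not Krugman's; the point is the qualitative predictions that many found implausible after 2008: once the interest rate is stuck at zero, even huge increases in the money supply leave output and rates unchanged, while fiscal expansion raises output without driving rates up.]

```python
# Toy IS-LM with a zero lower bound (illustrative parameterization, not Krugman's).
# IS curve:  Y = autonomous + c*Y - b*r + G
# LM curve:  M = k*Y - h*r, with the constraint r >= 0

def solve_is_lm(G, M, c=0.6, b=20.0, k=0.5, h=40.0, autonomous=100.0):
    """Return (output Y, interest rate r) for given government spending G and money stock M."""
    # Interior solution: solve the IS and LM equations jointly for r.
    r = (k * (autonomous + G) - (1 - c) * M) / (k * b + (1 - c) * h)
    if r > 0:
        Y = (M + h * r) / k              # read output off the LM curve
        return round(Y, 1), round(r, 2)
    # Liquidity trap: r cannot fall below zero, money demand no longer binds,
    # and output is pinned down by the IS curve alone.
    Y = (autonomous + G) / (1 - c)
    return round(Y, 1), 0.0

print(solve_is_lm(G=50, M=40))     # normal times: r > 0, money still "works"
print(solve_is_lm(G=50, M=400))    # at the zero lower bound ...
print(solve_is_lm(G=50, M=4000))   # ... a tenfold rise in M changes nothing
print(solve_is_lm(G=80, M=400))    # but fiscal expansion raises Y with r still at 0
```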

Thursday, March 05, 2015

'Economists' Biggest Failure'

Noah Smith:

Economists' Biggest Failure: One of the biggest things that economists get grief about is their failure to predict big events like recessions. ... 
Pointing this out usually leads to the eternal (and eternally fun) debate over whether economics is a real science. The profession's detractors say that if you don’t make successful predictions, you aren’t a science. Economists will respond that seismologists can’t forecast earthquakes, and meteorologists can’t forecast hurricanes, and who cares what’s really a “science” anyway. 
The debate, however, misses the point. Forecasts aren’t the only kind of predictions a science can make. In fact, they’re not even the most important kind. 
Take physics for example. Sometimes physicists do make forecasts -- for example, eclipses. But those are the exception. Usually, when you make a new physics theory, you use it to predict some new phenomenon... For example, quantum mechanics has gained a lot of support from predicting strange new things like quantum tunneling or quantum teleportation.
Other times, a theory will predict things we have seen before, but will describe them in terms of other things that we thought were totally separate, unrelated phenomena. This is called unification, and it’s a key part of what philosophers think science does. For example, the theory of electromagnetism says that light, electric current, magnetism, and radio waves are all really the same phenomenon. Pretty neat! ...
So that’s physics. What about economics? Actually, econ has a number of these successes too. When Dan McFadden used his Random Utility Model to predict how many people would ride San Francisco's Bay Area Rapid Transit system,... he got it right. And he got many other things right with the same theory -- it wasn’t developed to explain only train ridership. 
Unfortunately, though, this kind of success isn't very highly regarded in the economics world... Maybe now, with the ascendance of empirical economics and a decline in theory, we’ll see a focus on producing fewer but better theories, more unification, and more attempts to make novel predictions. Someday, maybe macroeconomists will even be able to make forecasts! But let’s not get our hopes up.
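
[To make the McFadden example a bit more concrete, here is a toy version of the random-utility / conditional-logit idea Smith refers to. The numbers are hypothetical and of my own making, not estimates from McFadden's BART study: each commuter picks the travel mode with the highest utility, an observed part plus a random taste shock, and the usual extreme-value assumption on the shocks delivers the familiar logit formula for predicted mode shares.]

```python
# A toy conditional-logit calculation (hypothetical numbers, not McFadden's estimates):
# the predicted share of mode j is P(j) = exp(V_j) / sum_k exp(V_k).
import math

def logit_shares(utilities):
    """Map systematic utilities {mode: V} to predicted choice shares."""
    exps = {mode: math.exp(v) for mode, v in utilities.items()}
    total = sum(exps.values())
    return {mode: e / total for mode, e in exps.items()}

# Systematic utilities, e.g. built from time and cost coefficients estimated on
# pre-BART commuting data, with the new rail option added as a third alternative.
V = {"car": 1.2, "bus": 0.4, "BART": 0.9}
print(logit_shares(V))   # roughly {'car': 0.46, 'bus': 0.21, 'BART': 0.34}
```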

I've addressed this question many times, e.g. in 2009, and to me the distinction is between forecasting the future, and understanding why certain phenomena occur (re-reading, it's a bit repetitive):

Are Macroeconomic Models Useful?: There has been no shortage of effort devoted to predicting earthquakes, yet we still can't see them coming far enough in advance to move people to safety. When a big earthquake hits, it is a surprise. We may be able to look at the data after the fact and see that certain stresses were building, so it looks like we should have known an earthquake was going to occur at any moment, but these sorts of retrospective analyses have not allowed us to predict the next one. The exact timing and location is always a surprise.
Does that mean that science has failed? Should we criticize the models as useless?
No. There are two uses of models. One is to understand how the world works, another is to make predictions about the future. We may never be able to predict earthquakes far enough in advance and with enough specificity to allow us time to move to safety before they occur, but that doesn't prevent us from understanding the science underlying earthquakes. Perhaps as our understanding increases prediction will be possible, and for that reason scientists shouldn't give up trying to improve their models, but for now we simply cannot predict the arrival of earthquakes.
However, even though earthquakes cannot be predicted, at least not yet, it would be wrong to conclude that science has nothing to offer. First, understanding how earthquakes occur can help us design buildings and make other changes to limit the damage even if we don't know exactly when an earthquake will occur. Second, if an earthquake happens and, despite our best efforts to insulate against it, there are still substantial consequences, science can help us to offset and limit the damage. To name just one example, the science surrounding disease transmission helps us to avoid contaminated water supplies after a disaster, something that often compounds tragedy when this science is not available. But there are lots of other things we can do as well, including using the models to determine where help is most needed.
So even if we cannot predict earthquakes, and we can't, the models are still useful for understanding how earthquakes happen. This understanding is valuable because it helps us to prepare for disasters in advance, and to determine policies that will minimize their impact after they happen.
All of this can be applied to macroeconomics. Whether or not we should have predicted the financial earthquake is a question that has been debated extensively, so I am going to set that aside. One side says financial market price changes, like earthquakes, are inherently unpredictable -- we will never predict them no matter how good our models get (the efficient markets types). The other side says the stresses that were building were obvious. Like the stresses that build when tectonic plates moving in opposite directions rub against each other, it was only a question of when, not if. (But even when increasing stress between two plates is observable, scientists cannot tell you for sure if a series of small earthquakes will relieve the stress and do little harm, or if there will be one big adjustment that relieves the stress all at once. With respect to the financial crisis, economists expected lots of small, relatively harmless adjustments; instead we got the "big one," and the "buildings and other structures" we thought could withstand the shock all came crumbling down. On prediction in economics, perhaps someday improved models will allow us to do better than we have so far at predicting the exact timing of crises, and I think that earthquakes provide some guidance here. You have to ask first if stress is building in a particular sector, and then ask if action needs to be taken because the stress has reached dangerous levels, levels that might result in a big crash rather than a series of small stress-relieving adjustments. I don't think our models are very good at detecting accumulating stress...)
Whether the financial crisis should have been predicted or not, the fact that it wasn't predicted does not mean that macroeconomic models are useless any more than the failure to predict earthquakes implies that earthquake science is useless. As with earthquakes, even when prediction is not possible (or missed), the models can still help us to understand how these shocks occur. That understanding is useful for getting ready for the next shock, or even preventing it, and for minimizing the consequences of shocks that do occur. 
But we have done much better at dealing with the consequences of unexpected shocks ex-post than we have at getting ready for these a priori. Our equivalent of getting buildings ready for an earthquake before it happens is to use changes in institutions and regulations to insulate the financial sector and the larger economy from the negative consequences of financial and other shocks. Here I think economists made mistakes - our "buildings" were not strong enough to withstand the earthquake that hit. We could argue that the shock was so big that no amount of reasonable advance preparation would have stopped the "building" from collapsing, but I think it's more the case that enough time has passed since the last big financial earthquake that we forgot what we needed to do. We allowed new buildings to be constructed without the proper safeguards.
However, that doesn't mean the models themselves were useless. The models were there and could have provided guidance, but the implied "building codes" were ignored. Greenspan and others assumed no private builder would ever construct a building that couldn't withstand an earthquake; the market would force them to take this into consideration. But they were wrong about that, and even Greenspan now admits that government building codes are necessary. It wasn't the models, it was how they were used (or rather not used) that prevented us from putting safeguards into place.
We haven't failed at this entirely though. For example, we have had some success at putting safeguards into place before shocks occur; automatic stabilizers have done a lot to insulate against the negative consequences of the recession (though they could have been larger to stop the building from swaying as much as it has). So it's not proper to say that our models have not helped us to prepare in advance at all; the insulation social insurance programs provide is extremely important to recognize. But it is the case that we could have and should have done better at preparing before the shock hit.
I'd argue that our most successful use of models has been in cleaning up after shocks rather than predicting, preventing, or insulating against them through pre-crisis preparation. When, despite our best efforts to prevent it or to minimize its impact a priori, we get a recession anyway, we can use our models as a guide to monetary, fiscal, and other policies that help to reduce the consequences of the shock (this is the equivalent of, after a disaster hits, making sure that the water is safe to drink, people have food to eat, there is a plan for rebuilding quickly and efficiently, etc.). As noted above, we haven't done a very good job at predicting big crises, and we could have done a much better job at implementing regulatory and institutional changes that prevent or limit the impact of shocks. But we do a pretty good job of stepping in with policy actions that minimize the impact of shocks after they occur. This recession was bad, but it wasn't another Great Depression like it might have been without policy intervention.
Whether or not we will ever be able to predict recessions reliably, it's important to recognize that our models still provide considerable guidance for actions we can take before and after large shocks that minimize their impact and maybe even prevent them altogether (though we will have to do a better job of listening to what the models have to say). Prediction is important, but it's not the only use of models.

Wednesday, December 17, 2014

'Minimal Model Explanations'

Some of you might find this interesting:

“Minimal Model Explanations,” R.W. Batterman & C.C. Rice (2014), A Fine Theorem: I unfortunately was overseas and wasn’t able to attend the recent Stanford conference on Causality in the Social Sciences; a friend organized the event and was able to put together a really incredible set of speakers: Nancy Cartwright, Chuck Manski, Joshua Angrist, Garth Saloner and many others. Coincidentally, a recent issue of the journal Philosophy of Science had an interesting article quite relevant to economists interested in methodology: how is it that we learn anything about the world when we use a model that is based on false assumptions? ...

Sunday, December 14, 2014

Real Business Cycle Theory

Roger Farmer:

Real business cycle theory and the high school Olympics: I have lost count of the number of times I have heard students and faculty repeat the idea in seminars that “all models are wrong”. This aphorism, attributed to George Box, is the battle cry of the Minnesota calibrator, a breed of macroeconomist, inspired by Ed Prescott, one of the most important and influential economists of the last century.
Of course all models are wrong. That is trivially true: it is the definition of a model. But the cry has been used for three decades to poke fun at attempts to use serious econometric methods to analyze time series data. Time series methods were inconvenient to the nascent Real Business Cycle Program that Ed pioneered because the models that he favored were, and still are, overwhelmingly rejected by the facts. That is inconvenient. Ed’s response was pure genius. If the model and the data are in conflict, the data must be wrong. ...

After explaining, he concludes:

We don't have to play by Ed's rules. We can use the methods developed by Rob Engle and Clive Granger as I have done here. Once we allow aggregate demand to influence permanently the unemployment rate, the data do not look kindly on either real business cycle models or on the new-Keynesian approach. It's time to get serious about macroeconomic science...
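
[For readers who want to see what Farmer means by "the methods developed by Rob Engle and Clive Granger", here is a bare-bones sketch on simulated data, not Farmer's own series: two nonstationary variables that share a common stochastic trend are cointegrated, and the Engle-Granger two-step test asks whether the residual from regressing one on the other is stationary.]

```python
# Bare-bones Engle-Granger cointegration check on simulated data (not Farmer's series).
import numpy as np
from statsmodels.tsa.stattools import coint

rng = np.random.default_rng(0)
n = 500
trend = np.cumsum(rng.normal(size=n))              # shared random-walk component
x = trend + rng.normal(scale=0.5, size=n)          # stand-in for a demand-side series
y = 2.0 * trend + rng.normal(scale=0.5, size=n)    # stand-in for an unemployment-type series

t_stat, p_value, _ = coint(y, x)                   # Engle-Granger two-step test
print(round(t_stat, 2), round(p_value, 3))         # a small p-value: reject "no cointegration"
```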

Thursday, November 27, 2014

MarkSpeaks

Simon Wren-Lewis:

As Mark Thoma often says, the problem is with macroeconomists rather than macroeconomics.

Much, much more here.

Tuesday, November 18, 2014

How Piketty Has Changed Economics

I have a new column:

How Piketty Has Changed Economics: Thomas Piketty’s Capital in the Twenty-First Century is beginning to receive book of the year awards, but has it changed anything within economics? There are two ways in which it has...

I'm not sure everyone will agree that the changes will persist. [This is a long-run view that begins with Adam Smith and looks for similarities between the past and today.]

Update: Let me add that although many people believe that the most important questions in the future will be about production (as they were in Smith's time), secular stagnation, robots, etc., I believe we will have enough "stuff"; the big questions will be about distribution (as they were when Ricardo, Marx, etc. were writing).

Thursday, October 23, 2014

'How Mainstream Economic Thinking Imperils America'

Tuesday, October 21, 2014

'Why our Happiness and Satisfaction Should Replace GDP in Policy Making'

Richard Easterlin:

Why our happiness and satisfaction should replace GDP in policy making, The Conversation: Since 1990, GDP per person in China has doubled and then redoubled. With average incomes multiplying fourfold in little more than two decades, one might expect many of the Chinese people to be dancing in the streets. Yet, when asked about their satisfaction with life, they are, if anything, less satisfied than in 1990.
The disparity indicated by these two measures of human progress, Gross Domestic Product and Subjective Well Being (SWB), makes pretty plain the issue at hand. GDP, the well-being indicator commonly used in policy circles, signals an outstanding advance in China. SWB, as indicated by self-reports of overall satisfaction with life, suggests, if anything, a worsening of people’s lives. Which measure is a more meaningful index of well-being? Which is a better guide for public policy?
A few decades ago, economists – the most influential social scientists shaping public policy – would have said that the SWB result for China demonstrates the meaninglessness of self-reports of well-being. Economic historian Deirdre McCloskey, writing in 1983, aptly put the typical attitude of economists this way:
Unlike other social scientists, economists are extremely hostile towards questionnaires and other self-descriptions… One can literally get an audience of economists to laugh out loud by proposing ironically to send out a questionnaire on some disputed economic point. Economists… are unthinkingly committed to the notion that only the externally observable behaviour of actors is admissible evidence in arguments concerning economics.
Culture clash
But times have changed. A commission established by the then French president, Nicolas Sarkozy, in 2008 and charged with recommending alternatives to GDP as a measure of progress, stated bluntly (my emphasis):
Research has shown that it is possible to collect meaningful and reliable data on subjective as well as objective well-being … The types of questions that have proved their value within small-scale and unofficial surveys should be included in larger-scale surveys undertaken by official statistical offices.
This 25-member commission was composed almost entirely of economists, five of whom had won the Nobel Prize in economics. Two of the five co-chaired the commission.
These days the tendency with new measures of our well-being – such as life satisfaction and happiness – is to propose that they be used as a complement to GDP. But what is one to do when confronted with such a stark difference between SWB and GDP, as in China? What should one say? People in China are better off than ever before, people are no better off than before, or “it depends”?
Commonalities
To decide this issue, we need to delve deeper into what has happened in China. When we do that, the superiority of SWB becomes apparent: it can capture the multiple dimensions of people’s lives. GDP, in contrast, focuses exclusively on the output of material goods.
People everywhere in the world spend most of their time trying to earn a living and raise a healthy family. The easier it is for them to do this, the happier they are. This is the lesson of a 1965 classic, The Pattern of Human Concerns, by public opinion survey specialist Hadley Cantril. In the 12 countries – rich and poor, communist and non-communist – that Cantril surveyed, the same highly personal concerns dominated the determinants of happiness: standard of living, family, health and work. Broad social issues, such as inequality, discrimination and international relations, were rarely mentioned.
Urban China in 1990 was essentially a mini-welfare state. Workers had what has been called an “iron rice bowl” – they were assured of jobs, housing, medical services, pensions, childcare and jobs for their grown children.
With the coming of capitalism, and “restructuring” of state enterprises, the iron rice bowl was smashed and these assurances went out the window. Unemployment soared and the social safety net disappeared. The security that workers had enjoyed was gone and the result was that life satisfaction plummeted, especially among the less-educated, lower-income segments of the population.
Although working conditions have improved somewhat in the past decade, the shortfall from the security enjoyed in 1990 remains substantial. The positive effect on well-being of rising incomes has been negated by rapidly rising material aspirations and the emergence of urgent concerns about income and job security, family, and health.
The case to replace
Examples of the disparity between SWB and GDP as measures of well-being could easily be multiplied. Since the early 1970s real GDP per capita in the US has doubled, but SWB has, if anything, declined. In international comparisons, Costa Rica’s per capita GDP is a quarter of that in the US, but Costa Ricans are as happy or happier than Americans when we look at SWB data. Clearly there is more to people’s well-being than the output of goods.
There are some simple, yet powerful arguments to say that we should use SWB in preference to GDP, not just as a complement. For a start, those SWB measures like happiness or life satisfaction are more comprehensive than GDP. They take into account the effect on well-being not just of material living conditions, but of the wide range of concerns in our lives.
It is also key that with SWB, the judgement of well-being is made by the individuals affected. GDP’s reliance on outside statistical “experts” to make inferences based on a measure they themselves construct looks deeply flawed when viewed in comparison. These judgements by outsiders also lie behind the growing number of multiple-item measures being put forth these days. An example is the United Nations’ Human Development Index (HDI) which attempts to combine data on GDP with indexes of education and life expectancy.
But people do not identify with measures like HDI (or GDP, of course) to anywhere near the extent that they do with straightforward questions of happiness and satisfaction with life. And crucially, these SWB measures offer each adult a vote and only one vote, whether they are rich or poor, sick or well, native or foreign-born. This is not to say that, as measures of well-being go, SWB is the last word, but clearly it comes closer to capturing what is actually happening to people’s lives than GDP ever will. The question is whether policy makers actually want to know.

Thursday, October 16, 2014

'Regret and Economic Decision-Making'

Here are the conclusions to Regret and economic decision-making:

Conclusions
We are clearly a long way from fully understanding how people behave in dynamic contexts. But our experimental data and that of earlier studies (Lohrenz 2007) suggest that regret is a part of the decision process and should not be overlooked. From a theoretical perspective, our work shows that regret aversion and counterfactual thinking make subtle predictions about behaviour in settings where past events serve as benchmarks. They are most vividly illustrated in the investment context.
Our theoretical findings show that if regret is anticipated, investors may keep their hands off risky investments, such as stocks, and not enter the market in the first place. Thus, anticipated regret aversion acts like a surrogate for higher risk aversion.
In contrast, once people have invested, they become very attached to their investment. Moreover, the better past performance was, the higher their commitment, because losses loom larger. This leads the investor to ‘gamble for resurrection’. In our experimental data, we very often observe exactly this pattern.
This dichotomy between ex ante and ex post risk appetites can be harmful for investors. It leads investors and businesses to escalate their commitment because of the sunk costs in their investments. For example, many investors missed out on the 2009 stock market rally while buckling down in the crash in 2007/2008, reluctant to sell early. Similarly, people who quit their jobs and invested their savings in their own business often cannot with a cold, clear eye cut their losses and admit their business has failed.
Therefore, a better understanding of what motivates people to save and invest could enable us to help them avoid such mistakes, e.g. through educating people to set up clear budgets a priori or to impose a drop dead level for their losses. Such simple measures may help mitigate the effects of harmful emotional attachment and support individuals in making better decisions.
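
[One way to see how anticipated regret can "act like a surrogate for higher risk aversion" is a back-of-the-envelope calculation in the spirit of regret theory (Loomes and Sugden 1982). This is my own toy formalization, not the authors' model, and it assumes regret is felt only relative to the sure alternative the investor gave up.]

```python
# Toy regret-adjusted evaluation of a risky investment (my own formalization, not the
# authors' model). Assumption: regret is felt only when the chosen risky payoff falls
# short of the safe return the investor could have locked in instead.

def regret_value(outcomes, benchmark, k):
    """Expected payoff plus a regret penalty k on shortfalls below the benchmark."""
    return sum(p * (x + k * min(x - benchmark, 0.0)) for p, x in outcomes)

safe_return = 0.02                        # return from staying out of the market
risky = [(0.5, 0.12), (0.5, -0.06)]       # stock: up 12% or down 6%, equal odds

print(regret_value(risky, safe_return, k=0.0))   #  0.03 > 0.02: a regret-free agent invests
print(regret_value(risky, safe_return, k=2.0))   # -0.05 < 0.02: anticipated regret keeps her out
```

With the regret penalty switched on, the risky asset's regret-adjusted value falls below the safe return even though its expected return is higher, which is the ex ante half of the dichotomy described above.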

[This ("once people have invested, they become very attached to their investment" and cannot admit failure) includes investment in economic models and research (see previous post).]

Tuesday, October 14, 2014

Economics is Both Positive and Normative

Jean Tirole in the latest TSE Magazine:

Economics is a positive discipline as it aims to document and analyse individual and collective behaviours. It is also, and more importantly, a normative discipline as its main goal is to better the world through economic policies and recommendations.

Friday, September 26, 2014

'The New Classical Clique'

Paul Krugman continues the conversation on New Classical economics:

The New Classical Clique: Simon Wren-Lewis thinks some more about macroeconomics gone astray; Robert J. Waldmann weighs in. For those new to this conversation, the question is why starting in the 1970s much of academic macroeconomics was taken over by a school of thought that began by denying any useful role for policies to raise demand in a slump, and eventually coalesced around denial that the demand side of the economy has any role in causing slumps.
I was a grad student and then an assistant professor as this was happening, albeit doing international economics – and international macro went in a different direction, for reasons I’ll get to in a bit. So I have some sense of what was really going on. And while both Wren-Lewis and Waldmann hit on most of the main points, neither I think gets at the important role of personal self-interest. New classical macro was and still is many things – an ideological bludgeon against liberals, a showcase for fancy math, a haven for people who want some kind of intellectual purity in a messy world. But it’s also a self-promoting clique. ...

Wednesday, September 24, 2014

Where and When Macroeconomics Went Wrong

Simon Wren-Lewis:

Where macroeconomics went wrong: In my view, the answer is in the 1970/80s with the New Classical revolution (NCR). However I also think the new ideas that came with that revolution were progressive. I have defended rational expectations, I think intertemporal theory is the right place to start in thinking about consumption, and exploring the implications of time inconsistency is very important to macro policy, as well as many other areas of economics. I also think, along with nearly all macroeconomists, that the microfoundations approach to macro (DSGE models) is a progressive research strategy.
That is why discussion about these issues can become so confused. New Classical economics made academic macroeconomics take a number of big steps forward, but a couple of big steps backward at the same time. The clue to the backward steps comes from the name NCR. The research program was anti-Keynesian (hence New Classical), and it did not want microfounded macro to be an alternative to the then dominant existing methodology, it wanted to replace it (hence revolution). Because the revolution succeeded (although the victory over Keynesian ideas was temporary), generations of students were taught that Keynesian economics was out of date. They were not taught about the pros and cons of the old and new methodologies, but were taught that the old methodology was simply wrong. And that teaching was/is a problem because it itself is wrong. ...

Thursday, September 11, 2014

'Trapped in the ''Dark Corners'''?

A small part of Brad DeLong's response to Olivier Blanchard. I posted a shortened version of Blanchard's argument a week or two ago:

Where Danger Lurks: Until the 2008 global financial crisis, mainstream U.S. macroeconomics had taken an increasingly benign view of economic fluctuations in output and employment. The crisis has made it clear that this view was wrong and that there is a need for a deep reassessment. ...
That small shocks could sometimes have large effects and, as a result, that things could turn really bad, was not completely ignored by economists. But such an outcome was thought to be a thing of the past that would not happen again, or at least not in advanced economies thanks to their sound economic policies. ... We all knew that there were “dark corners”—situations in which the economy could badly malfunction. But we thought we were far away from those corners, and could for the most part ignore them. ...
The main lesson of the crisis is that we were much closer to those dark corners than we thought—and the corners were even darker than we had thought too. ...
How should we modify our benchmark models—the so-called dynamic stochastic general equilibrium (DSGE) models...? The easy and uncontroversial part of the answer is that the DSGE models should be expanded to better recognize the role of the financial system—and this is happening. But should these models be able to describe how the economy behaves in the dark corners?
Let me offer a pragmatic answer. If macroeconomic policy and financial regulation are set in such a way as to maintain a healthy distance from dark corners, then our models that portray normal times may still be largely appropriate. Another class of economic models, aimed at measuring systemic risk, can be used to give warning signals that we are getting too close to dark corners, and that steps must be taken to reduce risk and increase distance. Trying to create a model that integrates normal times and systemic risks may be beyond the profession’s conceptual and technical reach at this stage.
The crisis has been immensely painful. But one of its silver linings has been to jolt macroeconomics and macroeconomic policy. The main policy lesson is a simple one: Stay away from dark corners.

And I responded:

That may be the best we can do for now (have separate models for normal times and "dark corners"), but an integrated model would be preferable. An integrated model would, for example, be better for conducting "policy and financial regulation ... to maintain a healthy distance from dark corners," and our aspirations ought to include models that can explain both normal and abnormal times. That may mean moving beyond the DSGE class of models, or perhaps the technical reach of DSGE models can be extended to incorporate the kinds of problems that can lead to Great Recessions, but we shouldn't be satisfied with models of normal times that cannot explain and anticipate major economic problems.

Here's part of Brad's response:

But… but… but… Macroeconomic policy and financial regulation are not set in such a way as to maintain a healthy distance from dark corners. We are still in a dark corner now. There is no sign of the 4% per year inflation target, the commitments to do what it takes via quantitative easing and rate guidance to attain it, or a fiscal policy that recognizes how the rules of the game are different for reserve currency printing sovereigns when r < n+g. Thus not only are we still in a dark corner, but there is every reason to believe that, should we get out, the sub-2% per year effective inflation targets of North Atlantic central banks and the inappropriate rhetoric and groupthink surrounding fiscal policy make it highly likely that we will soon get back into yet another dark corner. Blanchard’s pragmatic answer is thus the most unpragmatic thing imaginable: the “if” test fails, and so the “then” part of the argument seems to me to be simply inoperative. Perhaps on another planet in which North Atlantic central banks and governments aggressively pursued 6% per year nominal GDP growth targets, Blanchard’s answer would be “pragmatic”. But we are not on that planet, are we?

Moreover, even were we on Planet Pragmatic, it still seems to be wrong. Using current or any visible future DSGE models for forecasting and mainstream scenario planning makes no sense: the DSGE framework imposes restrictions on the allowable emergent properties of the aggregate time series that are routinely rejected at whatever level of frequentist statistical confidence that one cares to specify. The right road is that of Christopher Sims: that of forecasting and scenario planning using relatively unstructured time-series methods that use rather than ignore the correlations in the recent historical data. And for policy evaluation? One should take the historical correlations and argue why reverse-causation and errors-in-variables lead them to underestimate or overestimate policy effects, and possibly get it right. One should not impose a structural DSGE model that identifies the effects of policies but certainly gets it wrong. Sims won that argument. Why do so few people recognize his victory?

Blanchard continues:

Another class of economic models, aimed at measuring systemic risk, can be used to give warning signals that we are getting too close to dark corners, and that steps must be taken to reduce risk and increase distance. Trying to create a model that integrates normal times and systemic risks may be beyond the profession’s conceptual and technical reach at this stage…

For the second task, the question is: whose models of tail risk based on what traditions get to count in the tail risks discussion?

And missing is the third task: understanding what Paul Krugman calls the “Dark Age of macroeconomics”, that jahiliyyah that descended on so much of the economic research, economic policy analysis, and economic policymaking communities starting in the fall of 2007, and in which the center of gravity of our economic policymakers still dwells.

Sunday, August 31, 2014

'Where Danger Lurks'

Olivier Blanchard (a much shortened version of his argument; the entire piece is worth reading):

Where Danger Lurks: Until the 2008 global financial crisis, mainstream U.S. macroeconomics had taken an increasingly benign view of economic fluctuations in output and employment. The crisis has made it clear that this view was wrong and that there is a need for a deep reassessment. ...
That small shocks could sometimes have large effects and, as a result, that things could turn really bad, was not completely ignored by economists. But such an outcome was thought to be a thing of the past that would not happen again, or at least not in advanced economies thanks to their sound economic policies. ... We all knew that there were “dark corners”—situations in which the economy could badly malfunction. But we thought we were far away from those corners, and could for the most part ignore them. ...
The main lesson of the crisis is that we were much closer to those dark corners than we thought—and the corners were even darker than we had thought too. ...
How should we modify our benchmark models—the so-called dynamic stochastic general equilibrium (DSGE) models...? The easy and uncontroversial part of the answer is that the DSGE models should be expanded to better recognize the role of the financial system—and this is happening. But should these models be able to describe how the economy behaves in the dark corners?
Let me offer a pragmatic answer. If macroeconomic policy and financial regulation are set in such a way as to maintain a healthy distance from dark corners, then our models that portray normal times may still be largely appropriate. Another class of economic models, aimed at measuring systemic risk, can be used to give warning signals that we are getting too close to dark corners, and that steps must be taken to reduce risk and increase distance. Trying to create a model that integrates normal times and systemic risks may be beyond the profession’s conceptual and technical reach at this stage.
The crisis has been immensely painful. But one of its silver linings has been to jolt macroeconomics and macroeconomic policy. The main policy lesson is a simple one: Stay away from dark corners.

That may be the best we can do for now (have separate models for normal times and "dark corners"), but an integrated model would be preferable. An integrated model would, for example, be better for conducting "policy and financial regulation ... to maintain a healthy distance from dark corners," and our aspirations ought to include models that can explain both normal and abnormal times. That may mean moving beyond the DSGE class of models, or perhaps the technical reach of DSGE models can be extended to incorporate the kinds of problems that can lead to Great Recessions, but we shouldn't be satisfied with models of normal times that cannot explain and anticipate major economic problems.

Tuesday, August 26, 2014

A Reason to Question the Official Unemployment Rate

[Still on the road ... three quick ones before another long day of driving.]

David Leonhardt:

A New Reason to Question the Official Unemployment Rate: ...A new academic paper suggests that the unemployment rate appears to have become less accurate over the last two decades, in part because of this rise in nonresponse. In particular, there seems to have been an increase in the number of people who once would have qualified as officially unemployed and today are considered out of the labor force, neither working nor looking for work.
The trend obviously matters for its own sake: It suggests that the official unemployment rate – 6.2 percent in July – understates the extent of economic pain in the country today. ... The new paper is a reminder that the unemployment rate deserves less attention than it often receives.
Yet the research also relates to a larger phenomenon. The declining response rate to surveys of almost all kinds is among the biggest problems in the social sciences. ...
Why are people less willing to respond? The rise of caller ID and the decline of landlines play a role. But they’re not the only reasons. Americans’ trust in institutions – including government, the media, churches, banks, labor unions and schools – has fallen in recent decades. People seem more dubious of a survey’s purpose and more worried about intrusions into their privacy than in the past.
“People are skeptical – Is this a real survey? What are they asking me?” Francis Horvath, of the Labor Department, says. ...

Tuesday, August 19, 2014

The Agent-Based Method

Rajiv Sethi:

The Agent-Based Method: It's nice to see some attention being paid to agent-based computational models on economics blogs, but Chris House has managed to misrepresent the methodology so completely that his post is likely to do more harm than good. 

In comparing the agent-based method to the more standard dynamic stochastic general equilibrium (DSGE) approach, House begins as follows:

Probably the most important distinguishing feature is that, in an ABM, the interactions are governed by rules of behavior that the modeler simply encodes directly into the system individuals who populate the environment.

So far so good, although I would not have used the qualifier "simply", since encoded rules can be highly complex. For instance, an ABM that seeks to describe the trading process in an asset market may have multiple participant types (liquidity, information, and high-frequency traders for instance) and some of these may be using extremely sophisticated strategies.

How does this approach compare with DSGE models? House argues that the key difference lies in assumptions about rationality and self-interest:

People who write down DSGE models don’t do that. Instead, they make assumptions on what people want. They also place assumptions on the constraints people face. Based on the combination of goals and constraints, the behavior is derived. The reason that economists set up their theories this way – by making assumptions about goals and then drawing conclusions about behavior – is that they are following in the central tradition of all of economics, namely that allocations and decisions and choices are guided by self-interest. This goes all the way back to Adam Smith and it’s the organizing philosophy of all economics. Decisions and actions in such an environment are all made with an eye towards achieving some goal or some objective. For consumers this is typically utility maximization – a purely subjective assessment of well-being.  For firms, the objective is typically profit maximization. This is exactly where rationality enters into economics. Rationality means that the “agents” that inhabit an economic system make choices based on their own preferences.

This, to say the least, is grossly misleading. The rules encoded in an ABM could easily specify what individuals want and then proceed from there. For instance, we could start from the premise that our high-frequency traders want to maximize profits. They can only do this by submitting orders of various types, the consequences of which will depend on the orders placed by others. Each agent can have a highly sophisticated strategy that maps historical data, including the current order book, into new orders. The strategy can be sensitive to beliefs about the stream of income that will be derived from ownership of the asset over a given horizon, and may also be sensitive to beliefs about the strategies in use by others. Agents can be as sophisticated and forward-looking in their pursuit of self-interest in an ABM as you care to make them; they can even be set up to make choices based on solutions to dynamic programming problems, provided that these are based on private beliefs about the future that change endogenously over time. 

What you cannot have in an ABM is the assumption that, from the outset, individual plans are mutually consistent. That is, you cannot simply assume that the economy is tracing out an equilibrium path. The agent-based approach is at heart a model of disequilibrium dynamics, in which the mutual consistency of plans, if it arises at all, has to do so endogenously through a clearly specified adjustment process. This is the key difference between the ABM and DSGE approaches, and it's right there in the acronym of the latter.

A typical (though not universal) feature of agent-based models is an evolutionary process that allows successful strategies to proliferate over time at the expense of less successful ones. Since success itself is frequency dependent---the payoffs to a strategy depend on the prevailing distribution of strategies in the population---we have strong feedback between behavior and environment. Returning to the example of trading, an arbitrage-based strategy may be highly profitable when rare but much less so when prevalent. This rich feedback between environment and behavior, with the distribution of strategies determining the environment faced by each, and the payoffs to each strategy determining changes in their composition, is a fundamental feature of agent-based models. In failing to understand this, House makes claims that are close to being the opposite of the truth:

Ironically, eliminating rational behavior also eliminates an important source of feedback – namely the feedback from the environment to behavior.  This type of two-way feedback is prevalent in economics and it’s why equilibria of economic models are often the solutions to fixed-point mappings. Agents make choices based on the features of the economy.  The features of the economy in turn depend on the choices of the agents. This gives us a circularity which needs to be resolved in standard models. This circularity is cut in the ABMs however since the choice functions do not depend on the environment. This is somewhat ironic since many of the critics of economics stress such feedback loops as important mechanisms.

It is absolutely true that dynamics in agent-based models do not require the computation of fixed points, but this is a strength rather than a weakness, and has nothing to do with the absence of feedback effects. These effects arise dynamically in calendar time, not through some mystical process by which coordination is instantaneously achieved and continuously maintained. 
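[As a toy illustration of the frequency-dependent feedback described above – a minimal sketch with assumed payoff functions, not Sethi's or anyone's actual model – the following few lines simulate two trading strategy types whose payoffs depend on the prevailing mix, with an imitation step so that successful strategies proliferate in calendar time. No fixed point is ever computed, yet the population share drifts toward the level at which the two payoffs are equalized:]

```python
import numpy as np

rng = np.random.default_rng(0)
N, T = 1000, 200                     # number of agents, number of periods
strategy = rng.integers(0, 2, N)     # 0 = "fundamentalist", 1 = "arbitrage" type

for t in range(T):
    s = strategy.mean()                          # current share using strategy 1
    # frequency-dependent payoffs (assumed forms): strategy 1 pays well when
    # rare and is crowded out when prevalent; strategy 0 pays a flat 0.2
    payoff = np.where(strategy == 1, 1.0 - 2.0 * s, 0.2)
    payoff = payoff + rng.normal(0.0, 0.1, N)    # idiosyncratic noise
    # imitation in calendar time: each agent observes one randomly chosen
    # other agent and copies that agent's strategy if it paid more this period
    partner = rng.integers(0, N, N)
    copy = payoff[partner] > payoff
    strategy = np.where(copy, strategy[partner], strategy)

print("share using strategy 1:", strategy.mean())  # hovers near 0.4, where the payoffs equalize
```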

It's worth thinking about how the learning literature in macroeconomics, dating back to Marcet and Sargent and substantially advanced by Evans and Honkapohja, fits into this schema. Such learning models drop the assumption that beliefs continuously satisfy mutual consistency, and therefore take a small step towards the ABM approach. But it really is a small step, since a great deal of coordination continues to be assumed. For instance, in the canonical learning model, there is a parameter about which learning occurs, and the system is self-referential in that beliefs about the parameter determine its realized value. This allows for the possibility that individuals may hold incorrect beliefs, but limits quite severely---and more importantly, exogenously---the structure of such errors. This is done for understandable reasons of tractability, and allows for analytical solutions and convergence results to be obtained. But there is way too much coordination in beliefs across individuals assumed for this to be considered part of the ABM family.
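[A minimal version of the canonical self-referential learning setup mentioned above – a sketch, not any particular paper's model: agents forecast with a parameter updated recursively, \beta_t = \beta_{t-1} + t^{-1}(x_{t-1} - \beta_{t-1}), while the realized outcome depends on the forecast, x_t = T(\beta_t) + \varepsilon_t. Learning converges to the rational expectations equilibrium \beta^* = T(\beta^*) under the usual E-stability condition, roughly T'(\beta^*) < 1. The coordination being pointed to is built in: every agent is assumed to use the same forecasting rule and to mis-estimate the same single parameter in the same way.]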

The title of House's post asks (in response to an earlier piece by Mark Buchanan) whether agent-based models really are the future of the discipline. I have argued previously that they are enormously promising, but face one major methodological obstacle that needs to be overcome. This is the problem of quality control: unlike papers in empirical fields (where causal identification is paramount) or in theory (where robustness is key) there is no set of criteria, widely agreed upon, that can allow a referee to determine whether a given set of simulation results provides a deep and generalizable insight into the workings of the economy. One of the most celebrated agent-based models in economics---the Schelling segregation model---is also among the very earliest. Effective and acclaimed recent exemplars are in short supply, though there is certainly research effort at the highest levels pointed in this direction. The claim that such models can displace the equilibrium approach entirely is much too grandiose, but they should be able to find ample space alongside more orthodox approaches in time. 

---

The example of interacting trading strategies in this post wasn't pulled out of thin air; market ecology has been a recurrent theme on this blog. In ongoing work with Yeon-Koo Che and Jinwoo Kim, I am exploring the interaction of trading strategies in asset markets, with the goal of addressing some questions about the impact on volatility and welfare of high-frequency trading. We have found the agent-based approach very useful in thinking about these questions, and I'll present some preliminary results at a session on the methodology at the Rethinking Economics conference in New York next month. The event is free and open to the public but seating is limited and registration required. 

Wednesday, August 13, 2014

'Unemployment Fluctuations are Mainly Driven by Aggregate Demand Shocks'

Do the facts have a Keynesian bias?:

Using product- and labour-market tightness to understand unemployment, by Pascal Michaillat and Emmanuel Saez, Vox EU: For the five years from December 2008 to November 2013, the US unemployment rate remained above 7%, peaking at 10% in October 2009. This period of high unemployment is not well understood. Macroeconomists have proposed a number of explanations for the extent and persistence of unemployment during the period, including:

  • High mismatch caused by major shocks to the financial and housing sectors,
  • Low search effort from unemployed workers triggered by long extensions of unemployment insurance benefits, and
  • Low aggregate demand caused by a sudden need to repay debts or pessimism.

But no consensus has been reached.

In our opinion this lack of consensus is due to a gap in macroeconomic theory: we do not have a model that is rich enough to account for the many factors driving unemployment – including aggregate demand – and simple enough to lend itself to pencil-and-paper analysis. ...

In Michaillat and Saez (2014), we develop a new model of unemployment fluctuations to inspect the mechanisms behind unemployment fluctuations. The model can be seen as an equilibrium version of the Barro-Grossman model. It retains the architecture of the Barro-Grossman model but replaces the disequilibrium framework on the product and labour markets with an equilibrium matching framework. ...

Through the lens of our simple model, the empirical evidence suggests that price and real wage are somewhat rigid, and that unemployment fluctuations are mainly driven by aggregate demand shocks.

Tuesday, August 12, 2014

Why Do Macroeconomists Disagree?

I have a new column:

Why Do Macroeconomists Disagree?, by Mark Thoma, The Fiscal Times: On August 9, 2007, the French bank BNP Paribas halted redemptions from three investment funds active in US mortgage markets due to severe liquidity problems, an event that many mark as the beginning of the financial crisis. Now, just over seven years later, economists still can’t agree on what caused the crisis, why it was so severe, and why the recovery has been so slow. We can’t even agree on the extent to which modern macroeconomic models failed, or if they failed at all.
The lack of a consensus within the profession on the economics of the Great Recession, one of the most significant economic events in recent memory, provides a window into the state of macroeconomics as a science. ...

Monday, August 11, 2014

'Inflation in the Great Recession and New Keynesian Models'

From the NY Fed's Liberty Street Economics:

Inflation in the Great Recession and New Keynesian Models, by Marco Del Negro, Marc Giannoni, Raiden Hasegawa, and Frank Schorfheide: Since the financial crisis of 2007-08 and the Great Recession, many commentators have been baffled by the “missing deflation” in the face of a large and persistent amount of slack in the economy. Some prominent academics have argued that existing models cannot properly account for the evolution of inflation during and following the crisis. For example, in his American Economic Association presidential address, Robert E. Hall called for a fundamental reconsideration of Phillips curve models and their modern incarnation—so-called dynamic stochastic general equilibrium (DSGE) models—in which inflation depends on a measure of slack in economic activity. The argument is that such theories should have predicted more and more disinflation as long as the unemployment rate remained above a natural rate of, say, 6 percent. Since inflation declined somewhat in 2009, and then remained positive, Hall concludes that such theories based on a concept of slack must be wrong.        
In an NBER working paper and a New York Fed staff report (forthcoming in the American Economic Journal: Macroeconomics), we use a standard New Keynesian DSGE model with financial frictions to explain the behavior of output and inflation since the crisis. This model was estimated using data up to 2008. We find that following the increase in financial stress in 2008, the model successfully predicts not only the sharp contraction in economic activity, but also only a modest decline in inflation. ...

Wednesday, July 23, 2014

'Wall Street Skips Economics Class'

The discussion continues:

Wall Street Skips Economics Class, by Noah Smith: If you care at all about what academic macroeconomists are cooking up (or if you do any macro investing), you might want to check out the latest economics blog discussion about the big change that happened in the late '70s and early '80s. Here’s a post by the University of Chicago economist John Cochrane, and here’s one by Oxford’s Simon Wren-Lewis that includes links to most of the other contributions.
In case you don’t know the background, here’s the short version...

Saturday, July 19, 2014

'Is Choosing to Believe in Economic Models a Rational Expected-Utility Decision Theory Thing?'

Brad DeLong:

Is Choosing to Believe in Economic Models a Rational Expected-Utility Decision Theory Thing?: I have always understood expected-utility decision theory to be normative, not positive: it is how people ought to behave if they want to achieve their goals in risky environments, not how people do behave. One of the chief purposes of teaching expected-utility decision theory is in fact to make people aware that they really should be risk neutral over small gambles where they do know the probabilities--that they will be happier and achieve more of their goals in the long run if they in fact do so. ...[continue]...

Here's the bottom line:

(6) Given that people aren't rational Bayesian expected utility-theory decision makers, what do economists think that they are doing modeling markets as if they are populated by agents who are? Here there are, I think, three answers:

  • Most economists are clueless, and have not thought about these issues at all.

  • Some economists think that we have developed cognitive institutions and routines in organizations that make organizations expected-utility-theory decision makers even though the individuals within them are not. (Yeah, right: I find this very amusing too.)

  • Some economists admit that the failure of individuals to follow expected-utility decision theory and our inability to build institutions that properly compensate for our cognitive biases (cough, actively-managed mutual funds, anyone?) are one of the major sources of market failure in the world today--for one thing, they blow the efficient market hypothesis in finance sky-high.

The fact that so few economists are in the third camp--and that any economists are in the second camp--makes me agree 100% with Andrew Gelman's strictures on economics as akin to Ptolemaic astronomy, in which the fundamentals of the model are "not [first-order] approximations to something real, they’re just fictions..."

Monday, July 14, 2014

Is There a Phillips Curve? If So, Which One?

One place where Paul Krugman and Chris House disagree is the Phillips curve. Krugman (responding to a post by House) says:

New Keynesians do stuff like one-period-ahead price setting or Calvo pricing, in which prices are revised randomly. Practicing Keynesians have tended to rely on “accelerationist” Phillips curves in which unemployment determined the rate of change rather than the level of inflation.
So what has happened since 2008 is that both of these approaches have been found wanting: inflation has dropped, but stayed positive despite high unemployment. What the data actually look like is an old-fashioned non-expectations Phillips curve. And there are a couple of popular stories about why: downward wage rigidity even in the long run, anchored expectations.

House responds:

What the data actually look like is an old-fashioned non-expectations Phillips curve. 
OK, here is where we disagree. Certainly this is not true for the data overall. It seems like Paul is thinking that the system governing the relationship between inflation and output changes between something with essentially a vertical slope (a “Classical Phillips curve”) and a nearly flat slope (a “Keynesian Phillips Curve”). I doubt that this will fit the data particularly well and it would still seem to open the door to a large role for “supply shocks” – shocks that neither Paul nor I think play a big role in business cycles.

Simon Wren-Lewis also has something to say about this in his post from earlier today, Has the Great Recession killed the traditional Phillips Curve?:

Before the New Classical revolution there was the Friedman/Phelps Phillips Curve (FPPC), which said that current inflation depended on some measure of the output/unemployment gap and the expected value of current inflation (with a unit coefficient). Expectations of inflation were modelled as some function of past inflation (e.g. adaptive expectations) - at its simplest just one lag in inflation. Therefore in practice inflation depended on lagged inflation and the output gap.
After the New Classical revolution came the New Keynesian Phillips Curve (NKPC), which had current inflation depending on some measure of the output/unemployment gap and the expected value of inflation in the next period. If this was combined with adaptive expectations, it would amount to much the same thing as the FPPC, but instead it was normally combined with rational expectations, where agents made their best guess at what inflation would be next period using all relevant information. This would include past inflation, but it would include other things as well, like prospects for output and any official inflation target.
Which better describes the data? ...
[W]e can see why some ... studies (like this for the US) can claim that recent inflation experience is consistent with the NKPC. It seems much more difficult to square this experience with the traditional adaptive expectations Phillips curve. As I suggested at the beginning, this is really a test of whether rational expectations is a better description of reality than adaptive expectations. But I know the conclusion I draw from the data will upset some people, so I look forward to a more sophisticated empirical analysis showing why I’m wrong.
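[For reference, the two specifications being compared can be written compactly – a sketch in standard notation, not Wren-Lewis's exact equations. The FPPC with the simplest adaptive expectations is \pi_t = \pi_{t-1} + \alpha (y_t - y_t^*) + \varepsilon_t, so persistent slack implies ever-falling inflation. The NKPC is \pi_t = \beta E_t \pi_{t+1} + \kappa (y_t - y_t^*) + \varepsilon_t, and under rational expectations current inflation embodies expected future gaps and any credible inflation target, so a long spell of slack need not produce continual disinflation.]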

I don't have much to add, except to say that this is an empirical question that will be difficult to resolve, because there are so many different ways to estimate a Phillips curve and different specifications give different answers: which measure of prices to use, which measure of aggregate activity to use, what time period to use and how to handle structural and policy breaks during that period, how natural rates should be extracted from the data, how to handle non-stationarities, whether to exclude the long-term unemployed (as recent research suggests) if aggregate activity is measured by the unemployment rate, how many lags to include, and so on.

Sunday, July 13, 2014

New Classical Economics as Modeling Strategy

Judy Klein emails a response to a recent post of mine based upon Simon Wren-Lewis's post “Rereading Lucas and Sargent 1979”:

Lucas and Sargent’s “After Keynesian Macroeconomics” was presented at the 1978 Boston Federal Reserve Conference on “After the Phillips Curve: Persistence of High Inflation and High Unemployment.” Although the title of the conference dealt with stagflation, the rational expectations theorists saw themselves countering one technical revolution with another.

The Keynesian Revolution was, in the form in which it succeeded in the United States, a revolution in method. This was not Keynes’s intent, nor is it the view of all of his most eminent followers. Yet if one does not view the revolution in this way, it is impossible to account for some of its most important features: the evolution of macroeconomics into a quantitative, scientific discipline, the development of explicit statistical descriptions of economic behavior, the increasing reliance of government officials on technical economic expertise, and the introduction of the use of mathematical control theory to manage an economy. [Lucas and Sargent, 1979, pg. 50]

The Lucas papers at the Economists' Papers Project at Duke University reveal the preliminary planning for the 1978 presentation. Lucas and Sargent decided that it would be a “rhetorical piece… to convince others that the old-fashioned macro game is up…in a way which makes it clear that the difficulties are fatal”; its theme would be the “death of macroeconomics” and the desirability of replacing it with an “Aggregative Economics” whose foundation was “equilibrium theory.” (Lucas letter to Sargent, February 9, 1978). Their 1978 presentation was replete, as their discussant Bob Solow pointed out, with the planned rhetorical barbs against Keynesian economics of “wildly incorrect,” “fundamentally flawed,” “wreckage,” “failure,” “fatal,” “of no value,” “dire implications,” “failure on a grand scale,” “spectacular recent failure,” “no hope.” The empirical backdrop to Lucas and Sargent’s death decree on Keynesian economics was evident in the subtitle of the conference: “Persistence of High Inflation and High Unemployment.”

Although they seized the opportunity to comment on policy failure and the high misery-index economy, Lucas and Sargent shifted the macroeconomic court of judgment from the economy to microeconomics. They fought a technical battle over the types of restrictions used by modelers to identify their structural models. Identification-rendering restrictions were essential to making both the Keynesian and rational expectations models “work” in policy applications, but Lucas and Sargent defined the ultimate terms of success not with regard to a model’s capacity for empirical explanation or achievement of a desirable policy outcome, but rather with regard to the model’s capacity to incorporate optimization and equilibrium – to aggregate consistently rational individuals and cleared markets.

In the macroeconomic history written by the victors, the Keynesian revolution and the rational expectations revolution were both technical revolutions, and one could delineate the sides of the battle line in the second revolution by the nature of the restricting assumptions that enabled the model identification that licensed policy prescription. The rational expectations revolution, however, was also a revolution in the prime referential framework for judging macroeconomic model fitness for going forth and multiplying; the consistency of the assumptions – the equation restrictions – with optimizing microeconomics and mathematical statistical theory, rather than end uses of explaining the economy and empirical statistics, constituted the new paramount selection criterion.

Some of the new classical macroeconomists have been explicit about the narrowness of their revolution. For example, Sargent noted in 2008, “While rational expectations is often thought of as a school of economic thought, it is better regarded as a ubiquitous modeling technique used widely throughout economics.” In an interview with Arjo Klamer in 1983, Robert Townsend asserted that “New classical economics means a modeling strategy.”

It is no coincidence, however, that in this modeling narrative of economic equilibrium crafted in the Cold War era, Adam Smith’s invisible hand morphs into a welfare-maximizing “hypothetical ‘benevolent social planner’” (Lucas, Prescott, Stokey 1989) enforcing a “communism of models” (Sargent 2007) and decreeing to individual agents the mutually consistent rules of action that become the equilibrating driving force. Indeed, a long-term Office of Naval Research grant for “Planning & Control of Industrial Operations” awarded to the Carnegie Institute of Technology’s Graduate School of Industrial Administration had funded Herbert Simon’s articulation of his certainty equivalence theorem and John Muth’s study of rational expectations. It is ironic that a decade-long government planning contract employing Carnegie professors and graduate students underwrote the two key modeling strategies for the Nobel Prize-winning demonstration that the rationality of consumers renders government intervention to increase employment unnecessary and harmful.

Friday, July 11, 2014

'Rereading Lucas and Sargent 1979'

Simon Wren-Lewis with a nice follow-up to an earlier discussion:

Rereading Lucas and Sargent 1979: Mainly for macroeconomists and those interested in macroeconomic thought
Following this little interchange (me, Mark Thoma, Paul Krugman, Noah Smith, Robert Waldmann, Arnold Kling), I reread what could be regarded as the New Classical manifesto: Lucas and Sargent’s ‘After Keynesian Economics’ (hereafter LS). It deserves to be cited as a classic, both for the quality of ideas and the persuasiveness of the writing. It does not seem like something written 35 years ago, which is perhaps an indication of how influential its ideas still are.
What I want to explore is whether this manifesto for the New Classical counter revolution was mainly about stagflation, or whether it was mainly about methodology. LS kick off their article with references to stagflation and the failure of Keynesian theory. A fundamental rethink is required. What follows next is I think crucial. If the counter revolution is all about stagflation, we might expect an account of why conventional theory failed to predict stagflation - the equivalent, perhaps, to the discussion of classical theory in the General Theory. Instead we get something much more general - a discussion of why identification restrictions typically imposed in the structural econometric models (SEMs) of the time are incredible from a theoretical point of view, and an outline of the Lucas critique.
In other words, the essential criticism in LS is methodological: the way empirical macroeconomics has been done since Keynes is flawed. SEMs cannot be trusted as a guide for policy. In only one paragraph do LS try to link this general critique to stagflation...[continue]...

Friday, July 04, 2014

Responses to John Cochrane's Attack on New Keynesian Models

The opening quote from chapter 2 of Mankiw's intermediate macro textbook:

It is a capital mistake to theorize before one has data. Insensibly one begins to twist facts to suit theories, instead of theories to suit facts. — Sherlock Holmes

Or, instead of "before one has data," change it to "It is a capital mistake to theorize without knowledge of the data" and it's a pretty good summary of Paul Krugman's response to John Cochrane:

Macro Debates and the Relevance of Intellectual History: One of the interesting things about the ongoing economic crisis is the way it has demonstrated the importance of historical knowledge. ... But it’s not just economic history that turns out to be extremely relevant; intellectual history — the history of economic thought — turns out to be relevant too.
Consider, in particular, the recent to-and-fro about stagflation and the rise of new classical macroeconomics. You might think that this is just economist navel-gazing; but you’d be wrong.
To see why, consider John Cochrane’s latest. ... Cochrane’s current argument ... effectively depends on the notion that there must have been very good reasons for the rejection of Keynesianism, and that harkening back to old ideas must involve some kind of intellectual regression. And that’s where it’s important — as David Glasner notes — to understand what really happened in the 70s.
The point is that the new classical revolution in macroeconomics was not a classic scientific revolution, in which an old theory failed crucial empirical tests and was supplanted by a new theory that did better. The rejection of Keynes was driven by a quest for purity, not an inability to explain the data — and when the new models clearly failed the test of experience, the new classicals just dug in deeper. They didn’t back down even when people like Chris Sims (pdf), using the very kinds of time-series methods they introduced, found that they strongly pointed to a demand-side model of economic fluctuations.
And critiques like Cochrane’s continue to show a curious lack of interest in evidence. ... In short, you have a much better sense of what’s really going on here, and which ideas remain relevant, if you know about the unhappy history of macroeconomic thought.

Nick Rowe:

Insufficient Demand vs?? Uncertainty: ...John Cochrane says: "John Taylor, Stanford's Nick Bloom and Chicago Booth's Steve Davis see the uncertainty induced by seat-of-the-pants policy at fault. Who wants to hire, lend or invest when the next stroke of the presidential pen or Justice Department witch hunt can undo all the hard work? Ed Prescott emphasizes large distorting taxes and intrusive regulations. The University of Chicago's Casey Mulligan deconstructs the unintended disincentives of social programs. And so forth. These problems did not cause the recession. But they are worse now, and they can impede recovery and retard growth." ...
Increased political uncertainty would reduce aggregate demand. Plus, positive feedback processes could amplify that initial reduction in aggregate demand. Even those who were not directly affected by that increased political uncertainty would reduce their own willingness to hire, lend or invest because of that initial reduction in aggregate demand, plus their own uncertainty about aggregate demand. So the average person or firm might respond to a survey by saying that insufficient demand was the problem in their particular case, and not the political uncertainty which caused it.
But the demand-side problem could still be prevented by an appropriate monetary policy response. Sure, there would be supply-side effects too. And it would be very hard empirically to estimate the relative magnitudes of those demand-side vs supply-side effects. ...
So it's not just an either/or thing. Nor is it even a bit-of-one-plus-bit-of-the-other thing. Increased political uncertainty can cause a recession via its effect on demand. Unless monetary policy responds appropriately. (And that, of course, would mean targeting NGDP, because inflation targeting doesn't work when supply-side shocks cause adverse shifts in the Short Run Phillips Curve.)
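[A stylized way to see Rowe's last point, under the usual textbook setup: an NGDP target stabilizes p + y (in logs), so an adverse supply shock is split between higher prices and lower output, whereas a strict inflation target forces the entire adjustment onto output, since the central bank must tighten until p returns to target even though supply has fallen.]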

On whether supply or demand shocks are the source of aggregate fluctuations, Blanchard and Quah (1989), Shapiro and Watson (1988), and others had it right (though the identifying restriction that aggregate demand shocks do not have permanent effects seems to be undermined by the Great Recession). It's not an either/or question; it's a matter of figuring out how much of the variation in GDP/employment is due to supply shocks, and how much is due to demand shocks. And as Nick Rowe points out with his example, sorting between these two causes can be very difficult -- identifying which type of shock is driving changes in aggregate variables is not at all easy and depends upon particular assumptions. Nevertheless, my reading of the empirical evidence is much like Krugman's. Overall, across all these papers, it is demand shocks that play the most prominent role. Supply shocks do matter, but not nearly so much as demand shocks when it comes to explaining aggregate fluctuations.
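For readers who want to see the mechanics, here is a compact sketch of how the Blanchard-Quah long-run restriction can be imposed. It is illustrative only: it runs on synthetic stand-in data so the snippet is self-contained, and the two-variable ordering, lag length, and variable choice are assumptions made for illustration, not a replication of their paper.

```python
import numpy as np
from statsmodels.tsa.api import VAR

rng = np.random.default_rng(0)
y = rng.normal(size=(400, 2))              # stand-in data: [output growth, unemployment]
res = VAR(y).fit(4)                        # reduced-form VAR(4)

A_sum = res.coefs.sum(axis=0)              # sum of estimated lag coefficient matrices
Psi1 = np.linalg.inv(np.eye(2) - A_sum)    # long-run reduced-form multiplier
Sigma = np.asarray(res.sigma_u)            # residual covariance matrix
S = Psi1 @ Sigma @ Psi1.T                  # long-run covariance of the series
C1 = np.linalg.cholesky(S)                 # lower triangular: the zero in position (1,2)
                                           # imposes "the second ('demand') shock has no
                                           # permanent effect on the level of output"
B0 = np.linalg.inv(Psi1) @ C1              # impact matrix mapping structural shocks to residuals
eps = np.asarray(res.resid) @ np.linalg.inv(B0).T   # recovered structural shocks

print("long-run impact matrix:\n", C1)
```

The zero in the upper-right corner of the long-run matrix is the entire identifying assumption; everything else is delivered by the reduced-form estimates.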

Saturday, June 28, 2014

The Rise and Fall of the New Classical Model

Simon Wren-Lewis (my comments are at the end):

Understanding the New Classical revolution: In the account of the history of macroeconomic thought I gave here, the New Classical counter revolution was both methodological and ideological in nature. It was successful, I suggested, because too many economists were unhappy with the gulf between the methodology used in much of microeconomics, and the methodology of macroeconomics at the time.
There is a much simpler reading. Just as the original Keynesian revolution was caused by massive empirical failure (the Great Depression), the New Classical revolution was caused by the Keynesian failure of the 1970s: stagflation. An example of this reading is in this piece by the philosopher Alex Rosenberg (HT Diane Coyle). He writes: “Back then it was the New Classical macrotheory that gave the right answers and explained what the matter with the Keynesian models was.”
I just do not think that is right. Stagflation is very easily explained: you just need an ‘accelerationist’ Phillips curve (i.e. where the coefficient on expected inflation is one), plus a period in which monetary policymakers systematically underestimate the natural rate of unemployment. You do not need rational expectations, or any of the other innovations introduced by New Classical economists.
No doubt the inflation of the 1970s made the macroeconomic status quo unattractive. But I do not think the basic appeal of New Classical ideas lay in their better predictive ability. The attraction of rational expectations was not that it explained actual expectations data better than some form of adaptive scheme. Instead it just seemed more consistent with the general idea of rationality that economists used all the time. Ricardian Equivalence was not successful because the data revealed that tax cuts had no impact on consumption - in fact study after study have shown that tax cuts do have a significant impact on consumption.
Stagflation did not kill IS-LM. In fact, because empirical validity was so central to the methodology of macroeconomics at the time, it adapted to stagflation very quickly. This gave a boost to the policy of monetarism, but this used the same IS-LM framework. If you want to find the decisive event that led to New Classical economists winning their counterrevolution, it was the theoretical realisation that if expectations were rational, but inflation was described by an accelerationist Phillips curve with expectations about current inflation on the right hand side, then deviations from the natural rate had to be random. The fatal flaw in the Keynesian/Monetarist theory of the 1970s was theoretical rather than empirical.
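[To spell out the theoretical realisation Wren-Lewis refers to – a sketch in standard notation, not his algebra: write the accelerationist curve as \pi_t = E_{t-1}\pi_t + \alpha (u^* - u_t) + \varepsilon_t. Taking expectations conditional on period t-1 information under rational expectations, the E_{t-1}\pi_t terms cancel, leaving E_{t-1}(u_t - u^*) = 0: deviations of unemployment from the natural rate are unforecastable, so systematic, anticipated demand policy cannot move them.]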

I agree with this, so let me add to it by talking about what led to the end of the New Classical revolution (see here for a discussion of the properties of New Classical, New Keynesian, and Real Business Cycle models). The biggest factor was empirical validity. Although some versions of the New Classical model allowed monetary non-neutrality (e.g. King 1982, JPE), when three factors are present – continuous market clearing, rational expectations, and the natural rate hypothesis – monetary neutrality generally holds in these models. Initially, work from people like Barro found strong support for the prediction of these models that only unanticipated changes in monetary policy can affect real variables like output, but subsequent work and eventually the weight of the evidence pointed in the other direction. Both expected and unexpected changes in the money supply appeared to matter, in contrast to a key prediction of the New Classical framework.

A second factor that worked against New Classical models is that they had difficulty explaining both the duration and magnitude of actual business cycles. If the reaction to an unexpected policy shock was focused in a single period, the magnitude could be matched, but not the duration. If the shock was spread over 3-5 years to match the duration, the magnitude of cycles could not be matched. Movements in macroeconomic variables arising from informational errors (unexpected policy shocks) did not have enough "power" to capture both aspects of actual business cycles.

The other factor that worked against these models was that, within them, information problems were the key source of swings in GDP and employment, and these swings were costly in the aggregate. Yet no markets for information emerged to resolve the problem. For those who believe in the power of markets – and many proponents of New Classical models were also market fundamentalists – the absence of markets for information was a problem.

The New Classical model had displaced the Keynesian model for the reasons highlighted above, but the failure of the New Classical model left the door open for the New Keynesian model to emerge (it appeared to be more consistent with the empirical evidence on the effects of changes in the money supply, and in other areas as well, e.g. the correlation between productivity and economic activity).

But while the New Classical revolution was relatively short-lived as macro models go, it left two important legacies, rational expectations and microfoundations (as well as better knowledge about how non-neutralities might arise – in essence, the New Keynesian model drops continuous market clearing by assuming short-run price rigidities – and about how to model information sets). Rightly or wrongly, all subsequent models had to have these two elements present within them (RE and microfoundations), or they would be dismissed.

Thursday, June 26, 2014

Why DSGEs Crash During Crises

David Hendry and Grayham Mizon with an important point about DSGE models:

Why DSGEs crash during crises, by David F. Hendry and Grayham E. Mizon: Many central banks rely on dynamic stochastic general equilibrium models – known as DSGEs to cognoscenti. This column – which is more technical than most Vox columns – argues that the models’ mathematical basis fails when crises shift the underlying distributions of shocks. Specifically, the linchpin ‘law of iterated expectations’ fails, so economic analyses involving conditional expectations and inter-temporal derivations also fail. Like a fire station that automatically burns down whenever a big fire starts, DSGEs become unreliable when they are most needed.

Here's the introduction:

In most aspects of their lives humans must plan forwards. They take decisions today that affect their future in complex interactions with the decisions of others. When taking such decisions, the available information is only ever a subset of the universe of past and present information, as no individual or group of individuals can be aware of all the relevant information. Hence, views or expectations about the future, relevant for their decisions, use a partial information set, formally expressed as a conditional expectation given the available information.
Moreover, all such views are predicated on there being no unanticipated future changes in the environment pertinent to the decision. This is formally captured in the concept of ‘stationarity’. Without stationarity, good outcomes based on conditional expectations could not be achieved consistently. Fortunately, there are periods of stability when insights into the way that past events unfolded can assist in planning for the future.
The world, however, is far from completely stationary. Unanticipated events occur, and they cannot be dealt with using standard data-transformation techniques such as differencing, or by taking linear combinations, or ratios. In particular, ‘extrinsic unpredictability’ – unpredicted shifts of the distributions of economic variables at unanticipated times – is common. As we shall illustrate, extrinsic unpredictability has dramatic consequences for the standard macroeconomic forecasting models used by governments around the world – models known as ‘dynamic stochastic general equilibrium’ models – or DSGE models. ...[continue]...
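[The mathematical point in miniature – an illustration of the general idea, not the authors' formal statement: the law of iterated expectations, E_t[E_{t+1}(x_{t+2})] = E_t(x_{t+2}), holds when the expectations at t and t+1 are taken under a single, unchanging joint distribution. If that distribution shifts at t+1 in a way that was unpredictable at t, the inner expectation is formed under a law the outer one knows nothing about, and the equality can fail. Since the Euler equations and other intertemporal conditions at the core of DSGE models are derived by iterating expectations forward, those derivations inherit the failure.]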

Update: [nerdy] Reply to Hendry and Mizon: we have DSGE models with time-varying parameters and variances.

Tuesday, June 24, 2014

'Was the Neoclassical Synthesis Unstable?'

The last paragraph from a much longer argument by Simon Wren-Lewis:

Was the neoclassical synthesis unstable?: ... Of course we have moved on from the 1980s. Yet in some respects we have not moved very far. With the counter revolution we swung from one methodological extreme to the other, and we have not moved much since. The admissibility of models still depends on their theoretical consistency rather than consistency with evidence. It is still seen as more important when building models of the business cycle to allow for the endogeneity of labour supply than to allow for involuntary unemployment. What this means is that many macroeconomists who think they are just ‘taking theory seriously’ are in fact applying a particular theoretical view which happens to suit the ideology of the counter revolutionaries. The key to changing that is to first accept it.

Saturday, May 10, 2014

Replication in Economics

Thomas Kneib sent me the details of this project in early April after a discussion about it with one of his Ph.D. students (Jan Höffler) at the INET conference, and I've been meaning to post something on it but have been negligent. So I'm glad that Dave Giles picked up the slack:

Replication in Economics: I was pleased to receive an email today, alerting me to the "Replication in Economics" wiki at the University of Göttingen:

"My name is Jan H. Höffler, I have been working on a replication project funded by the Institute for New Economic Thinking during the last two years and found your blog that I find very interesting. I like very much that you link to data and code related to what you write about. I thought you might be interested in the following:

We developed a wiki website that serves as a database of empirical studies, the availability of replication material for them and of replication studies: http://replication.uni-goettingen.de

It can help with research as well as with teaching replication to students. We taught seminars at several faculties internationally - also in Canada, at UofT - for which the information in this database was used. In the starting phase the focus was on some leading journals in economics, and we now cover more than 1800 empirical studies and 142 replications. Replication results can be published as replication working papers of the University of Göttingen's Center for Statistics.

Teaching and providing access to information will raise awareness of the need for replication, provide a basis for research into why replications so often fail and how this can be changed, and educate future generations of economists about how to make research replicable.

I would be very grateful if you could take a look at our website, give us feedback, register and vote which studies should be replicated – votes are anonymous. If you could also help us to spread the message about this project, this would be most appreciated."

I'm more than happy to spread the word, Jan. I've requested an account, and I'll definitely be getting involved with your project. This looks like a great venture!

Friday, May 09, 2014

Economists and Methodology

Simon Wren-Lewis:

Economists and methodology: ...very few economists write much about methodology. This would be understandable if economics was just like some other discipline where methodological discussion was routine. This is not the case. Economics is not like the physical sciences for well known reasons. Yet economics is not like most other social sciences either: it is highly deductive, highly abstractive (in the non-philosophical sense) and rarely holistic. ...
This is a long-winded way of saying that the methodology used by economics is interesting because it is unusual. Yet, as I say, you will generally not find economists writing about methodology. One reason for this is ... a feeling that the methodology being used is unproblematic, and therefore requires little discussion.
I cannot help giving the example of macroeconomics to show that this view is quite wrong. The methodology of macroeconomics in the 1960s was heavily evidence based. Microeconomics was used to suggest aggregate relationships, but not to determine them. Consistency with the data (using some chosen set of econometric criteria) often governed what was or was not allowed in a parameterised (numerical) model, or even a theoretical model. It was a methodology that some interpreted as Popperian. The methodology of macroeconomics now is very different. Consistency with microeconomic theory governs what is in a DSGE model, and evidence plays a much more indirect role. Now I have only a limited knowledge of the philosophy of science..., but I know enough to recognise this as an important methodological change. Yet I find many macroeconomists just assume that their methodology is unproblematic, because it is what everyone mainstream currently does. ...
... The classic example of an economist writing about methodology is Friedman’s Essays in Positive Economics. This puts forward an instrumentalist view: the idea that the realism of assumptions does not matter; it is results that count.
Yet does instrumentalism describe Friedman’s major contributions to macroeconomics? Well, one of those was the expectations-augmented Phillips curve. ... Friedman argued that the coefficient on expected inflation should be one. His main reason for doing so was not that such an adaptation predicted better, but that it was based on better assumptions about what workers were interested in: real rather than nominal wages. In other words, it was based on more realistic assumptions. ...
Economists do not think enough about their own methodology. This means economists are often not familiar with methodological discussion, which implies that using what they write on the subject as evidence about what they do can be misleading. Yet most methodological discussion of economics is (and should be) about what economists do, rather than what they think they do. That is why I find that the more interesting and accurate methodological writing on economics looks at the models and methods economists actually use...
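For reference, the expectations-augmented Phillips curve that Wren-Lewis has in mind can be written as follows (standard textbook notation; this is my addition, not part of his post):

```latex
% Expectations-augmented Phillips curve, with Friedman's unit coefficient on
% expected inflation (standard textbook notation, not from the post):
\[
  \pi_t \;=\; \pi^{e}_{t} \;-\; \beta\,(u_t - u^{n}), \qquad \beta > 0,
\]
% so that once expectations catch up (\pi_t = \pi^{e}_{t}), unemployment returns
% to its natural rate u^{n} whatever the inflation rate; the unit coefficient on
% \pi^{e}_{t} is what Friedman defended on the grounds that workers care about
% real rather than nominal wages.
```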

Monday, May 05, 2014

'Refocusing Economics Education'

Antonio Fatás (each of the four points below are explained in detail in the original post):

Refocusing economics education: Via Mark Thoma I read an interesting article about how the mainstream economics curriculum needs to be revamped (Wren-Lewis also has some nice thoughts on this issue).

I am sympathetic to some of the arguments made in those posts and to the need for some serious rethinking of the way economics is taught, but I would put the emphasis on slightly different arguments. First, I am not sure the recent global crisis should be the main reason to change the economics curriculum. Yes, economists failed to predict many aspects of the crisis, but my view is that it was not because of a lack of tools or understanding. We have enough models in economics that explain most of the phenomena that caused and propagated the global financial crisis. There are plenty of models where individuals are not rational, where financial markets are driven by bubbles, with multiple equilibria,... that one can use to understand the last decade. We do have all these tools, but as economics teachers (and researchers) we need to choose which ones to focus on. And here is where we failed. And we did it during the crisis, but we also did it earlier. Why aren't we focusing on the right models or methodology? Here is my list of mistakes we make in our teaching, which might also reflect on our research:

#1 Too much theory, not enough emphasis on explaining empirical phenomena. ...

#2 Too many counterintuitive results. Economists like to teach things that are surprising. ...

#3 The need for a unified theory. ...

#4 We teach what our audience wants to hear. ...

I also believe the sociology within the profession needs to change.

Thursday, March 27, 2014

'The Misuse of Theoretical Models in Finance and Economics'

 Stanford University's Paul Pfleiderer:

Chameleons: The Misuse of Theoretical Models in Finance and Economics, by Paul Pfleiderer, March 2014: Abstract: In this essay I discuss how theoretical models in finance and economics are used in ways that make them “chameleons” and how chameleons devalue the intellectual currency and muddy policy debates. A model becomes a chameleon when it is built on assumptions with dubious connections to the real world but nevertheless has conclusions that are uncritically (or not critically enough) applied to understanding our economy. I discuss how chameleons are created and nurtured by the mistaken notion that one should not judge a model by its assumptions, by the unfounded argument that models should have equal standing until definitive empirical tests are conducted, and by misplaced appeals to “as-if” arguments, mathematical elegance, subtlety, references to assumptions that are “standard in the literature,” and the need for tractability.

Sunday, March 23, 2014

On Greg Mankiw's 'Do No Harm'

A rebuttal to Greg Mankiw's claim that the government should not interfere in voluntary exchanges. This is from Rakesh Vohra at Theory of the Leisure Class:

Do No Harm & Minimum Wage: In the March 23rd edition of the NY Times Mankiw proposes a 'do no harm' test for policy makers:

…when people have voluntarily agreed upon an economic arrangement to their mutual benefit, that arrangement should be respected.

There is a qualifier for negative externalities, and he goes on to say:

As a result, when a policy is complex, hard to evaluate and disruptive of private transactions, there is good reason to be skeptical of it.

Minimum wage legislation is offered as an example of a policy that fails the do no harm test. ...

There is an immediate 'heart strings' argument against the test, because indentured servitude passes the 'do no harm' test. ... I want to focus instead on two other aspects of the 'do no harm' principle contained in the words 'voluntarily' and 'benefit'. What is voluntary and benefit compared to what? ...

When parties negotiate to their mutual benefit, it is to their benefit relative to the status quo. When the status quo presents one agent an outside option that is untenable, say starvation, is bargaining voluntary, even if the other agent is not directly threatening starvation? The difficulty with the 'do no harm' principle in policy matters is the assumption that the status quo does less harm than a change in it would. This is not clear to me at all. Let me illustrate this...

Assuming a perfectly competitive market, imposing a minimum wage constraint above the equilibrium wage would reduce total welfare. What if the labor market were not perfectly competitive? In particular, suppose it was a monopsony employer constrained to offer the same wage to everyone employed. Then, imposing a minimum wage above the monopsonist’s optimal wage would increase total welfare.

[There is also an example based upon differences in patience that I left out.]
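A minimal numerical version of the monopsony point above, with all functional forms and numbers assumed purely for illustration (this sketch is mine, not Vohra's):

```python
# Minimal numeric sketch (my own illustration, not from Vohra's post) of the
# monopsony case: a binding minimum wage set above the monopsony wage can raise
# both employment and total surplus. All functional forms and numbers are assumed.

def supply_wage(L):       # inverse labor supply: wage needed to attract L workers
    return L

def mrp(L):               # marginal revenue product: value of the L-th worker
    return 100 - L

def total_surplus(L):
    # Integral of (MRP - supply wage) from 0 to L; here 100*L - L**2 in closed form.
    return 100 * L - L ** 2

# Monopsonist: marginal cost of labor is 2L (raising the wage raises it for everyone),
# so it hires where 2L = 100 - L  ->  L ~ 33.3, and pays a wage of ~33.3.
L_monopsony = 100 / 3

# Competitive benchmark: wage = MRP  ->  L = 50.
L_competitive = 50

# Binding minimum wage of 45, between the monopsony and competitive wages: the
# firm's marginal cost of labor is flat at 45, so it employs the 45 workers
# willing to work at that wage (and would happily hire more).
L_minwage = 45

for label, L in [("monopsony", L_monopsony),
                 ("minimum wage = 45", L_minwage),
                 ("competitive", L_competitive)]:
    print(f"{label:>18}: employment = {L:5.1f}, total surplus = {total_surplus(L):7.1f}")
# Employment and total surplus both rise when the minimum wage binds on the monopsonist.
```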

Friday, March 21, 2014

'Labor Markets Don't Clear: Let's Stop Pretending They Do'

Roger Farmer:

Labor Markets Don't Clear: Let's Stop Pretending They Do: Beginning with the work of Robert Lucas and Leonard Rapping in 1969, macroeconomists have modeled the labor market as if the wage always adjusts to equate the demand and supply of labor.

I don't think that's a very good approach. It's time to drop the assumption that the demand equals the supply of labor.
Why would you want to delete the labor market clearing equation from an otherwise standard model? Because setting the demand equal to the supply of labor is a terrible way of understanding business cycles. ...
Why is this a big deal? Because 90% of the macro seminars I attend, at conferences and universities around the world, still assume that the labor market is an auction where anyone can work as many hours as they want at the going wage. Why do we let our students keep doing this?

'The Counter-Factual & the Fed’s QE'

I tried to make this point in a recent column (it was about fiscal rather than monetary policy, but the same point applies), but I think Barry Ritholtz makes the point better and more succinctly:

Understanding Why You Think QE Didn't Work, by Barry Ritholtz: Maybe you have heard a line that goes something like this: The weak recovery is proof that the Federal Reserve’s program of asset purchases, otherwise known as quantitative easing, doesn't work.
If you were the one saying those words, you don't understand the counterfactual. ...
This flawed analytical paradigm has many manifestations, and not just in the investing world. They all rely on the same equation: If you do X, and there is no measurable change, X is therefore ineffective.
The problem with this “non-result result” is that it ignores what would have occurred otherwise. Might “no change” be an improvement over what otherwise would have happened? No change, last time I checked, is better than a free-fall.
If you are testing a new medication to reduce tumors, you want to see what happened to the group that didn't get the test therapy. Maybe this control group experienced rapid tumor growth. Hence, a result where there is no increase in tumor mass in the group receiving the therapy would be considered a very positive outcome.
We run into the same issue with QE. ... Without that control group, we simply don't know. ...
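As a toy illustration of the counterfactual point (mine, not Ritholtz's; all numbers are invented), compare a "no change" outcome with an assumed free-fall path that would have occurred otherwise:

```python
# Toy illustration (not from Ritholtz's piece): judging a policy by "did the level
# change?" instead of "what would have happened without it?" can make an effective
# intervention look like a failure. All numbers are made up for illustration.
import numpy as np

periods = np.arange(8)

# Assumed counterfactual path without the policy: output free-falls 2% per period.
without_policy = 100 * (0.98 ** periods)

# Observed path with the policy: output merely stays flat.
with_policy = np.full_like(without_policy, 100.0)

naive_verdict = with_policy[-1] - with_policy[0]      # "no measurable change"
true_effect = with_policy[-1] - without_policy[-1]    # gap versus the counterfactual

print(f"change in observed output:         {naive_verdict:+.1f}")
print(f"effect relative to counterfactual: {true_effect:+.1f}")
# The naive comparison says the policy "did nothing"; the counterfactual comparison
# shows it prevented a sizable decline.
```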

Friday, February 21, 2014

'What Game Theory Means for Economists'

At MoneyWatch:

Explainer: What "game theory" means for economists, by Mark Thoma: Coming upon the term "game theory" this week, your first thought would likely be about the Winter Olympics in Sochi. But here we're going to discuss how game theory applies in economics, where it's widely used in topics far removed from the ski slopes and ice rinks where elite athletes compete. ...

Saturday, February 15, 2014

'Microfoundations and Mephistopheles'

Paul Krugman continues the discussion on "whether New Keynesians made a Faustian bargain":

Microfoundations and Mephistopheles (Wonkish): Simon Wren-Lewis asks whether New Keynesians made a Faustian bargain by accepting the New Classical diktat that models must be grounded in intertemporal optimization — whether they purchased academic respectability at the expense of losing their ability to grapple usefully with the real world.
Wren-Lewis’s answer is no, because New Keynesians were only doing what they would have wanted to do even if there hadn’t been a de facto blockade of the journals against anything without rational-actor microfoundations. He has a point: long before anyone imagined doing anything like real business cycle theory, there had been a steady trend in macro toward grounding ideas in more or less rational behavior. The life-cycle model of consumption, for example, was clearly a step away from the Keynesian ad hoc consumption function toward modeling consumption choices as the result of rational, forward-looking behavior.
But I think we need to be careful about defining what, exactly, the bargain was. I would agree that being willing to use models with hyperrational, forward-looking agents was a natural step even for Keynesians. The Faustian bargain, however, was the willingness to accept the proposition that only models that were microfounded in that particular sense would be considered acceptable. ...
So it was the acceptance of the unique virtue of one concept of microfoundations that constituted the Faustian bargain. And one thing you should always know, when making deals with the devil, is that the devil cheats. New Keynesians thought that they had won some acceptance from the freshwater guys by adopting their methods; but when push came to shove, it turned out that there wasn’t any real dialogue, and never had been.

My view is that micro-founded models are useful for answering some questions, but other types of models are best for other questions. There is no one model that is best in every situation; the model that should be used depends upon the question being asked. I've made this point many times, most recently in this column, and also in this post from September 2011 that repeats arguments from September 2009:

New Old Keynesians?: Tyler Cowen uses the term "New Old Keynesian" to describe "Paul Krugman, Brad DeLong, Justin Wolfers and others." I don't know if I am part of the "and others" or not, but in any case I resist being assigned a particular label.

Why? Because I believe the model we use depends upon the questions we ask (this is a point emphasized by Peter Diamond at the recent Nobel Meetings in Lindau, Germany, and echoed by other speakers who followed him). If I want to know how monetary authorities should respond to relatively mild shocks in the presence of price rigidities, the standard New Keynesian model is a good choice. But if I want to understand the implications of a breakdown in financial intermediation and the possible policy responses to it, those models aren't very informative. They weren't built to answer this question (some variations do get at this, but not in a fully satisfactory way).

Here's a discussion of this point from a post written two years ago:

There is no grand, unifying theoretical structure in economics. We do not have one model that rules them all. Instead, what we have are models that are good at answering some questions - the ones they were built to answer - and not so good at answering others.

If I want to think about inflation in the very long run, the classical model and the quantity theory are a very good guide. But they are not very good at explaining the short run. For questions about how output and other variables move over the business cycle and for advice on what to do about it, I find the Keynesian model in its modern form (i.e. the New Keynesian model) to be much more informative than other models that are presently available.
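The quantity-theory logic behind that long-run claim can be written out as follows (standard notation; this illustration is my addition, not part of the original post):

```latex
% Quantity theory of money: with money stock M, velocity V, price level P and
% real output Y,
\[
  M V = P Y
  \qquad\Longrightarrow\qquad
  \frac{\Delta P}{P} \;\approx\; \frac{\Delta M}{M} + \frac{\Delta V}{V} - \frac{\Delta Y}{Y},
\]
% so if velocity is roughly stable over long horizons, sustained inflation is
% approximately money growth in excess of real output growth.
```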

But the New Keynesian model has its limits. It was built to capture "ordinary" business cycles driven by price sluggishness of the sort that can be captured by the Calvo model of price rigidity. The standard versions of this model do not explain how financial collapses of the type we just witnessed come about, hence they have little to say about what to do about them (which makes me suspicious of the results touted by people using multipliers derived from DSGE models based upon ordinary price rigidities). For these types of disturbances, we need some other type of model, but it is not clear what model is needed. There is no generally accepted model of financial catastrophe that captures the variety of financial market failures we have seen in the past.
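For readers unfamiliar with the reference, the Calvo assumption mentioned above can be summarized as follows (standard notation; the numerical example is mine, not from the original post):

```latex
% Calvo pricing: each period a firm may reset its price with probability
% 1 - \theta, independently of how long its price has been fixed. The expected
% time between price changes is therefore
\[
  \sum_{k=1}^{\infty} k\,(1-\theta)\,\theta^{\,k-1} \;=\; \frac{1}{1-\theta},
\]
% so, for example, \theta = 0.75 in a quarterly model implies prices are reset
% about once a year. This is the sense in which "ordinary" price sluggishness
% enters the standard New Keynesian model.
```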

But what model do we use? Do we go back to old Keynes, or to the 1978 model that Robert Gordon likes? Do we take some of the variations of the New Keynesian model that include effects such as financial accelerators and try to enhance those - is that the right direction to proceed? Are the Austrians right? Do we focus on Minsky? Or do we need a model that we haven't discovered yet?

We don't know, and until we do, I will continue to use the model I think gives the best answer to the question being asked. The reason that many of us looked backward for a model to help us understand the present crisis is that none of the current models were capable of explaining what we were going through. The models were largely constructed to analyze policy in the context of a Great Moderation, i.e. within a fairly stable environment. They had little to say about financial meltdown. My first reaction was to ask if the New Keynesian model had any derivative forms that would allow us to gain insight into the crisis and what to do about it and, while there were some attempts in that direction, the work was somewhat isolated and had not gone through the type of thorough analysis needed to develop robust policy prescriptions. There was something to learn from these models, but they really weren't up to the task of delivering specific answers. That may come, but we aren't there yet.

So, if nothing in the present is adequate, you begin to look to the past. The Keynesian model was constructed to look at exactly the kinds of questions we needed to answer, and as long as you are aware of the limitations of this framework - the ones that modern theory has discovered - it does provide you with a means of thinking about how economies operate when they are running at less than full employment. This model had already worried about fiscal policy at the zero interest rate bound, it had already thought about Say's law, the paradox of thrift, monetary versus fiscal policy, changing interest and investment elasticities in a crisis, etc., etc., etc. We were in the middle of a crisis and didn't have time to wait for new theory to be developed; we needed answers, answers that the elegant models that had been constructed over the last few decades simply could not provide. The Keynesian model did provide answers. We knew the answers had limitations - we were aware of the theoretical developments in modern macro and what they implied about the old Keynesian model - but it also provided guidance at a time when guidance was needed, and it did so within a theoretical structure that was built to be useful at times like the ones we were facing. I wish we had better answers, but we didn't, so we did the best we could. And the best we could involved at least asking what the Keynesian model would tell us, and then asking if that advice has any relevance today. Sometimes it didn't, but that was no reason to ignore the answers when it did.

[So, depending on the question being asked, I am a New Keynesian, an Old Keynesian, a Classicist, etc.]