In light of comments such as this from Robert Waldmann (who doesn't get shrill with just anyone, so I'm honored to make his list), I think I should elaborate a bit more on my view of Old versus New Keynesian models:
Mark Thoma explains the very basics of New Keynesian economics and I am very rude. I came here clicking a link in a post where you indignantly deny that you are an old Keynesian. I ask what has been added by the very simple inter-temporal optimization?
It seems to me that New Keynesian macro consists (as here) in writing models with optimizing agents which behave the way old Keynesian models behave. I ask why not cut out the middle man?
The model as written has implications other than that there is something like an IS curve. As you note, the true expected value (that is rationally expected) of future GDP affects current GDP. Also the real interest rate affects the rate of growth of consumption.
The problem is that, to the extent new Keynesian models differ from old Keynesian models, the data are not kind to the new models. ...
The newness is all about intertemporal optimization without liquidity constraints. It clearly gives false implications. So the model is modified so that it acts just like an old Keynesian model. How is this a worthwhile activity ?
Notably, the micro foundations are not justified on the assumption that people really intertemporally optimize with rational expectations. ... There is no reason to believe that the definitely false assumptions that new Keynesians like to make are better than any other definitely false assumptions. ...
What, of any value, have macroeconomists added to Keynes? ...
As Robert notes, one big difference between the Old and New Keynesian models is the way in which expectations are treated. The older models do not incorporate expectations, e.g. expected future monetary and fiscal policy, in an acceptable way. When expectations of current and future events are unimportant, the distinction between the Old and New models is not that large. But when expectations do matter -- as I believe they often do -- then the older models can miss important feedback effects.
For example (which some will recognize as a version of the Lucas critique, something Robert refers to indirectly in a part of his post that I omitted), suppose that you take a survey with a hidden camera and discover that there are 10 cars per hour speeding on a given section of road. Thus, you figure, at $200 per ticket the city will make $2,000 per hour (gross) by stationing traffic police along that part of the road.
However, after stationing traffic police at the location, the actual number of tickets written falls far short of projections. Why is that? It's because people will call/tweet/text their friends and family and warn them -- don't speed today, they are giving tickets. People who travel the route many times a day will notice the officers with radar guns and, if they avoid detection the first time -- or even, I suppose, if they don't -- they will be careful on subsequent passes. Even if people are mostly surprised the first day and revenues are near projections, on day 2 people will expect officers at those locations and be more careful. And if not on day 2, by day 3 they will surely have learned.
And that is the key. The policy works only so long as it is a surprise. If people know about the policy -- if they expect it in advance -- then they will be careful to avoid getting a ticket. If every single person expects the policy, if they know the officers will be there, then there shouldn't be any tickets at all if people behave rationally and there is no "stickiness" in the model that prevents them from taking evasive action.
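The logic of the example can be sketched in a few lines of Python. This is a minimal illustration, not anything from the post itself: the gain parameter and the learning rule are hypothetical, chosen only to mirror the numbers above ($200 fine, 10 speeders per hour).

```python
FINE = 200               # dollars per ticket (from the example above)
SPEEDERS_PER_HOUR = 10   # speeders observed before the policy starts

def simulate(days=5, gain=0.8):
    """Each day, drivers' perceived probability that officers are present
    moves a fraction `gain` of the way toward 1 (the truth), and speeding
    falls in proportion to that belief -- so revenue collapses after day 1."""
    belief = 0.0          # perceived probability of enforcement
    revenues = []
    for _ in range(days):
        speeders = SPEEDERS_PER_HOUR * (1 - belief)
        revenues.append(FINE * speeders)
        belief += gain * (1 - belief)   # partial-adjustment learning
    return revenues

print(simulate())
```

Day 1 delivers the projected $2,000 because the policy is a surprise; every later day's take shrinks geometrically as expectations catch up with the policy, which is exactly the Lucas-critique point.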
It's no different with macroeconomic policy. If the government puts a policy in place, e.g. new taxes, then people will do their best to avoid having it harm them. If they can take cost-effective actions to avoid the new taxes, they will. Failing to account for this can lead to big mistakes in forecasts of the impact of new taxes on tax revenue, just as the speed-trap revenue projections go wrong when they ignore the fact that people who know officers are watching change their behavior.
Old Keynesian models do not account for these expectation effects satisfactorily, and that's one of the reasons I think the newer models provide a stronger framework to evaluate the effects of policy.
[This is evident in this post where the NK IS curve (which is really an Euler equation -- i.e., in essence, a first-order condition in a maximization problem) contains the expected future output term E_tY_{t+1}, while the standard IS curve does not.]
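To make the contrast concrete, here are the standard log-linearized textbook forms; the notation and the particular old-Keynesian specification are my shorthand, not equations from the post being discussed:

```latex
% New Keynesian IS curve: the log-linearized consumption Euler equation.
% Current output depends on expected future output and the ex ante real rate.
y_t = \mathbb{E}_t\, y_{t+1} - \frac{1}{\sigma}\bigl(i_t - \mathbb{E}_t\,\pi_{t+1} - r_t^{n}\bigr)

% Old Keynesian IS curve: a static relation with no expectations term.
y_t = -\alpha\, i_t + u_t
```

Here \(y_t\) is the output gap, \(i_t\) the nominal interest rate, \(\pi_{t+1}\) inflation, \(r_t^{n}\) the natural rate, \(\sigma\) the utility-curvature parameter, and \(u_t\) a demand shifter. The \(\mathbb{E}_t\, y_{t+1}\) term is precisely what transmits expectations of future policy back into current output in the NK version.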
But let me be clear about what I mean by a New Keynesian model. For me it is nothing more than a dynamic, stochastic general equilibrium model with some type of friction tacked onto it (or arising endogenously, which is more desirable though harder theoretically), and some sort of expectations/learning mechanism embedded within it.
I am not at all convinced, however, that the type of friction used in garden-variety versions of the NK model -- the Calvo price-stickiness mechanism -- is the correct type of friction to characterize the recession we have just experienced (nor do I have much faith in simulations of, say, monetary or fiscal policy that rely upon this assumption to generate policy effects). The transmission mechanism that operates in this model -- price stickiness causes relative prices to go awry, which in turn causes resources to be misdirected -- does not seem to me to capture the essence of the breakdown in financial intermediation at the heart of the financial collapse (stickiness in housing prices and perhaps wages may help to explain the pace of the recovery, but even then it is nowhere near the full story). I think a friction/information problem/market failure of some sort is involved, but the connections between the real and financial sectors that are needed to explain what we have just experienced are not present in these models. There are versions of these models that take steps in this direction, and work is going on right now to connect the real economy to the financial sector in a meaningful way -- some of which is making inroads -- but I am just not convinced that the models we have right now properly capture the transmission mechanism for shocks of the type we have just experienced. So there is definitely more work to be done. But I don't think we solve the problem by going back to older constructions.
(However, as I've argued before, in the meantime there are times when the old-fashioned IS-LM model delivers better answers than modern models, or at least serves as a better rough guide. This is, in part, because the older models were built to answer the kinds of questions we confront today, questions that arose out of the Great Depression, while the NK model was built to explain milder fluctuations, the type we saw in the Great Moderation. These milder fluctuations may very well be driven by price stickiness of the type characterized by Calvo-type models, but Calvo models don't do so well when confronted with a financial meltdown. But if you are going to take guidance from the older models, it is essential that you understand their limitations -- this should not be done without a thorough knowledge of the pitfalls involved and where they can and cannot be avoided -- the kind of knowledge someone like Paul Krugman surely has at hand.)
I believe, then, that the use of dynamic, stochastic general equilibrium structures that incorporate expectations in a defensible way is the way to go. Thus, there is no need to discard the set of tools that we have. However, though we have the tools, for the most part anyway, we failed to ask the right questions. This is partly a technical issue. We use representative agent models to set aside the difficult problems involved with aggregation across diverse agents, but models with connections between the real and financial sectors often require the representative agent structure to be set aside. Setting it aside means confronting the difficult aggregation problems directly, and this is an area where we are frantically developing tools right now so that we can overcome this limitation. In the past it was easier to simply assume a single, representative agent and not deal with the issue at all. That wasn't thought to be a big problem, since our overconfidence in our own abilities led us to believe that we had solved the problem of large, long-lived depressions driven by financial meltdowns. Why go to all the trouble of developing the technical apparatus to ask what-if questions about financial meltdowns if there was little chance of one happening? It just wasn't worth the trouble -- then anyway -- now we (hopefully) know better.
Let me also address, briefly, Robert's concern about the rationality assumption. As someone in a department that specializes in learning models (and we are in the process of hiring a great senior faculty member to augment that expertise), I am not going to defend the rationality assumption as it is usually made in DSGE models. But that doesn't mean that plugging a better expectation formation mechanism into these models, along with the proper frictions and expectational feedback mechanisms, can't produce reasonable results.
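One common replacement for full rational expectations in this literature is adaptive, constant-gain learning, in which agents recursively nudge their forecast toward realized outcomes rather than computing the model-consistent expectation. A minimal sketch of the updating rule (the gain value here is hypothetical, for illustration only):

```python
def constant_gain_forecast(history, gain=0.1, initial=0.0):
    """Constant-gain (adaptive) learning: each period the forecast moves a
    fixed fraction `gain` of the way toward the latest realized value, so
    recent observations are weighted more heavily than distant ones."""
    forecast = initial
    for realized in history:
        forecast += gain * (realized - forecast)
    return forecast
```

For example, if inflation sits at 2% long enough, the learner's forecast converges to 2% regardless of its starting point; a structural break then gets absorbed gradually rather than instantly, which is what distinguishes this mechanism from rational expectations.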
There is much more to be said about all of this, but this is running fairly long already, so let me just close by noting that when I say I support the newer models over the older, I don't mean to say that the new models have gotten us anywhere near where we need to go. I think they are pointing in the right direction -- though I wouldn't mind if some theorists backed up, and then rebuilt things along a different path so that we'd have more alternatives to test against each other, and more chances to stumble toward a better understanding of how the macroeconomy works -- but there is no doubt that there is considerable work yet to be done.