Category Archive for: Macroeconomics

Thursday, September 11, 2014

'Trapped in the "Dark Corners"'?

A small part of Brad DeLong's response to Olivier Blanchard. I posted a shortened version of Blanchard's argument a week or two ago:

Where Danger Lurks: Until the 2008 global financial crisis, mainstream U.S. macroeconomics had taken an increasingly benign view of economic fluctuations in output and employment. The crisis has made it clear that this view was wrong and that there is a need for a deep reassessment. ...
That small shocks could sometimes have large effects and, as a result, that things could turn really bad, was not completely ignored by economists. But such an outcome was thought to be a thing of the past that would not happen again, or at least not in advanced economies thanks to their sound economic policies. ... We all knew that there were “dark corners”—situations in which the economy could badly malfunction. But we thought we were far away from those corners, and could for the most part ignore them. ...
The main lesson of the crisis is that we were much closer to those dark corners than we thought—and the corners were even darker than we had thought too. ...
How should we modify our benchmark models—the so-called dynamic stochastic general equilibrium (DSGE) models...? The easy and uncontroversial part of the answer is that the DSGE models should be expanded to better recognize the role of the financial system—and this is happening. But should these models be able to describe how the economy behaves in the dark corners?
Let me offer a pragmatic answer. If macroeconomic policy and financial regulation are set in such a way as to maintain a healthy distance from dark corners, then our models that portray normal times may still be largely appropriate. Another class of economic models, aimed at measuring systemic risk, can be used to give warning signals that we are getting too close to dark corners, and that steps must be taken to reduce risk and increase distance. Trying to create a model that integrates normal times and systemic risks may be beyond the profession’s conceptual and technical reach at this stage.
The crisis has been immensely painful. But one of its silver linings has been to jolt macroeconomics and macroeconomic policy. The main policy lesson is a simple one: Stay away from dark corners.

And I responded:

That may be the best we can do for now (have separate models for normal times and "dark corners"), but an integrated model would be preferable. An integrated model would, for example, be better for conducting "policy and financial regulation ... to maintain a healthy distance from dark corners," and our aspirations ought to include models that can explain both normal and abnormal times. That may mean moving beyond the DSGE class of models, or perhaps the technical reach of DSGE models can be extended to incorporate the kinds of problems that can lead to Great Recessions, but we shouldn't be satisfied with models of normal times that cannot explain and anticipate major economic problems.

Here's part of Brad's response:

But… but… but… Macroeconomic policy and financial regulation are not set in such a way as to maintain a healthy distance from dark corners. We are still in a dark corner now. There is no sign of the 4% per year inflation target, the commitments to do what it takes via quantitative easing and rate guidance to attain it, or a fiscal policy that recognizes how the rules of the game are different for reserve currency printing sovereigns when r < n+g. Thus not only are we still in a dark corner, but there is every reason to believe that, should we get out, the sub-2% per year effective inflation targets of North Atlantic central banks and the inappropriate rhetoric and groupthink surrounding fiscal policy make it highly likely that we will soon get back into yet another dark corner. Blanchard’s pragmatic answer is thus the most unpragmatic thing imaginable: the “if” test fails, and so the “then” part of the argument seems to me to be simply inoperative. Perhaps on another planet in which North Atlantic central banks and governments aggressively pursued 6% per year nominal GDP growth targets Blanchard’s answer would be “pragmatic”. But we are not on that planet, are we?

Moreover, even were we on Planet Pragmatic, it still seems to be wrong. Using current or any visible future DSGE models for forecasting and mainstream scenario planning makes no sense: the DSGE framework imposes restrictions on the allowable emergent properties of the aggregate time series that are routinely rejected at whatever level of frequentist statistical confidence one cares to specify. The right road is that of Christopher Sims: that of forecasting and scenario planning using relatively unstructured time-series methods that use rather than ignore the correlations in the recent historical data. And for policy evaluation? One should take the historical correlations and argue why reverse-causation and errors-in-variables lead them to underestimate or overestimate policy effects, and possibly get it right. One should not impose a structural DSGE model that identifies the effects of policies but certainly gets it wrong. Sims won that argument. Why do so few people recognize his victory?

Blanchard continues:

Another class of economic models, aimed at measuring systemic risk, can be used to give warning signals that we are getting too close to dark corners, and that steps must be taken to reduce risk and increase distance. Trying to create a model that integrates normal times and systemic risks may be beyond the profession’s conceptual and technical reach at this stage…

For the second task, the question is: whose models of tail risk based on what traditions get to count in the tail risks discussion?

And missing is the third task: understanding what Paul Krugman calls the “Dark Age of macroeconomics”, that jahiliyyah that descended on so much of the economic research, economic policy analysis, and economic policymaking communities starting in the fall of 2007, and in which the center of gravity of our economic policymakers still dwells.

Sunday, August 31, 2014

'Where Danger Lurks'

Olivier Blanchard (a much shortened version of his arguments, the entire piece is worth reading):

Where Danger Lurks: Until the 2008 global financial crisis, mainstream U.S. macroeconomics had taken an increasingly benign view of economic fluctuations in output and employment. The crisis has made it clear that this view was wrong and that there is a need for a deep reassessment. ...
That small shocks could sometimes have large effects and, as a result, that things could turn really bad, was not completely ignored by economists. But such an outcome was thought to be a thing of the past that would not happen again, or at least not in advanced economies thanks to their sound economic policies. ... We all knew that there were “dark corners”—situations in which the economy could badly malfunction. But we thought we were far away from those corners, and could for the most part ignore them. ...
The main lesson of the crisis is that we were much closer to those dark corners than we thought—and the corners were even darker than we had thought too. ...
How should we modify our benchmark models—the so-called dynamic stochastic general equilibrium (DSGE) models...? The easy and uncontroversial part of the answer is that the DSGE models should be expanded to better recognize the role of the financial system—and this is happening. But should these models be able to describe how the economy behaves in the dark corners?
Let me offer a pragmatic answer. If macroeconomic policy and financial regulation are set in such a way as to maintain a healthy distance from dark corners, then our models that portray normal times may still be largely appropriate. Another class of economic models, aimed at measuring systemic risk, can be used to give warning signals that we are getting too close to dark corners, and that steps must be taken to reduce risk and increase distance. Trying to create a model that integrates normal times and systemic risks may be beyond the profession’s conceptual and technical reach at this stage.
The crisis has been immensely painful. But one of its silver linings has been to jolt macroeconomics and macroeconomic policy. The main policy lesson is a simple one: Stay away from dark corners.

That may be the best we can do for now (have separate models for normal times and "dark corners"), but an integrated model would be preferable. An integrated model would, for example, be better for conducting "policy and financial regulation ... to maintain a healthy distance from dark corners," and our aspirations ought to include models that can explain both normal and abnormal times. That may mean moving beyond the DSGE class of models, or perhaps the technical reach of DSGE models can be extended to incorporate the kinds of problems that can lead to Great Recessions, but we shouldn't be satisfied with models of normal times that cannot explain and anticipate major economic problems.

Tuesday, August 19, 2014

The Agent-Based Method

Rajiv Sethi:

The Agent-Based Method: It's nice to see some attention being paid to agent-based computational models on economics blogs, but Chris House has managed to misrepresent the methodology so completely that his post is likely to do more harm than good. 

In comparing the agent-based method to the more standard dynamic stochastic general equilibrium (DSGE) approach, House begins as follows:

Probably the most important distinguishing feature is that, in an ABM, the interactions are governed by rules of behavior that the modeler simply encodes directly into the individuals who populate the environment.

So far so good, although I would not have used the qualifier "simply", since encoded rules can be highly complex. For instance, an ABM that seeks to describe the trading process in an asset market may have multiple participant types (liquidity, information, and high-frequency traders for instance) and some of these may be using extremely sophisticated strategies.

How does this approach compare with DSGE models? House argues that the key difference lies in assumptions about rationality and self-interest:

People who write down DSGE models don’t do that. Instead, they make assumptions on what people want. They also place assumptions on the constraints people face. Based on the combination of goals and constraints, the behavior is derived. The reason that economists set up their theories this way – by making assumptions about goals and then drawing conclusions about behavior – is that they are following in the central tradition of all of economics, namely that allocations and decisions and choices are guided by self-interest. This goes all the way back to Adam Smith and it’s the organizing philosophy of all economics. Decisions and actions in such an environment are all made with an eye towards achieving some goal or some objective. For consumers this is typically utility maximization – a purely subjective assessment of well-being.  For firms, the objective is typically profit maximization. This is exactly where rationality enters into economics. Rationality means that the “agents” that inhabit an economic system make choices based on their own preferences.

This, to say the least, is grossly misleading. The rules encoded in an ABM could easily specify what individuals want and then proceed from there. For instance, we could start from the premise that our high-frequency traders want to maximize profits. They can only do this by submitting orders of various types, the consequences of which will depend on the orders placed by others. Each agent can have a highly sophisticated strategy that maps historical data, including the current order book, into new orders. The strategy can be sensitive to beliefs about the stream of income that will be derived from ownership of the asset over a given horizon, and may also be sensitive to beliefs about the strategies in use by others. Agents can be as sophisticated and forward-looking in their pursuit of self-interest in an ABM as you care to make them; they can even be set up to make choices based on solutions to dynamic programming problems, provided that these are based on private beliefs about the future that change endogenously over time. 

What you cannot have in an ABM is the assumption that, from the outset, individual plans are mutually consistent. That is, you cannot simply assume that the economy is tracing out an equilibrium path. The agent-based approach is at heart a model of disequilibrium dynamics, in which the mutual consistency of plans, if it arises at all, has to do so endogenously through a clearly specified adjustment process. This is the key difference between the ABM and DSGE approaches, and it's right there in the acronym of the latter.

A typical (though not universal) feature of agent-based models is an evolutionary process that allows successful strategies to proliferate over time at the expense of less successful ones. Since success itself is frequency-dependent---the payoffs to a strategy depend on the prevailing distribution of strategies in the population---we have strong feedback between behavior and environment. Returning to the example of trading, an arbitrage-based strategy may be highly profitable when rare but much less so when prevalent. This rich feedback between environment and behavior, with the distribution of strategies determining the environment faced by each, and the payoffs to each strategy determining changes in their composition, is a fundamental feature of agent-based models. In failing to understand this, House makes claims that are close to being the opposite of the truth: 
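This frequency-dependent dynamic can be made concrete with a deliberately minimal sketch. To be clear, this is not Sethi's model; the two stylized strategies, the payoff functions, and every parameter below are invented for illustration. A population of agents uses either an "arbitrage" strategy, whose payoff shrinks as more agents adopt it, or a "passive" strategy with a constant payoff; each period a few agents imitate a randomly sampled agent whose strategy is currently paying more. No fixed point is computed anywhere, yet the strategy mix drifts toward the point where payoffs equalize:

```python
import random

random.seed(0)

N = 1000              # number of agents
REVISION_PROB = 0.05  # chance an agent reconsiders its strategy each period

def payoff(strategy, arb_share):
    # Frequency-dependent payoffs: arbitrage is profitable when rare,
    # much less so when prevalent; the passive strategy earns a constant.
    return 1.0 - arb_share if strategy == "arb" else 0.5

# Start with arbitrage rare: 10% of agents use it.
agents = ["arb"] * (N // 10) + ["passive"] * (N - N // 10)

for _ in range(300):
    share = agents.count("arb") / N
    revised = []
    for strat in agents:
        if random.random() < REVISION_PROB:
            other = random.choice(agents)
            # Imitate the sampled agent if its strategy currently pays more.
            if payoff(other, share) > payoff(strat, share):
                strat = other
        revised.append(strat)
    agents = revised

arb_share = agents.count("arb") / N
# Payoffs equalize where 1 - share = 0.5, i.e. at a share of one half.
print(round(arb_share, 2))
```

To the extent that mutual consistency of plans arises here at all, it emerges endogenously from the imitation dynamic in calendar time, which is exactly the contrast with the DSGE approach that Sethi draws.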

Ironically, eliminating rational behavior also eliminates an important source of feedback – namely the feedback from the environment to behavior.  This type of two-way feedback is prevalent in economics and it’s why equilibria of economic models are often the solutions to fixed-point mappings. Agents make choices based on the features of the economy.  The features of the economy in turn depend on the choices of the agents. This gives us a circularity which needs to be resolved in standard models. This circularity is cut in the ABMs however since the choice functions do not depend on the environment. This is somewhat ironic since many of the critics of economics stress such feedback loops as important mechanisms.

It is absolutely true that dynamics in agent-based models do not require the computation of fixed points, but this is a strength rather than a weakness, and has nothing to do with the absence of feedback effects. These effects arise dynamically in calendar time, not through some mystical process by which coordination is instantaneously achieved and continuously maintained. 

It's worth thinking about how the learning literature in macroeconomics, dating back to Marcet and Sargent and substantially advanced by Evans and Honkapohja, fits into this schema. Such learning models drop the assumption that beliefs continuously satisfy mutual consistency, and therefore take a small step towards the ABM approach. But it really is a small step, since a great deal of coordination continues to be assumed. For instance, in the canonical learning model, there is a parameter about which learning occurs, and the system is self-referential in that beliefs about the parameter determine its realized value. This allows for the possibility that individuals may hold incorrect beliefs, but limits quite severely---and more importantly, exogenously---the structure of such errors. This is done for understandable reasons of tractability, and allows for analytical solutions and convergence results to be obtained. But far too much coordination in beliefs across individuals is assumed for this to be considered part of the ABM family.
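The self-referential structure described above can be sketched in a few lines. This is a toy version with invented numbers, not any particular model from that literature: agents estimate the mean price by a decreasing-gain (recursive least squares) rule, while the realized price depends on their current estimate, so beliefs and outcomes feed back on each other:

```python
MU, LAM = 2.0, 0.5  # illustrative parameters; rational expectations
                    # equilibrium price is MU / (1 - LAM) = 4

belief = 0.0  # every agent is assumed to share this one estimate --
              # the exogenously imposed coordination in beliefs
for t in range(1, 10001):
    price = MU + LAM * belief       # outcomes depend on beliefs...
    belief += (price - belief) / t  # ...and beliefs update on outcomes
                                    # (decreasing gain 1/t)

print(round(belief, 2))  # approaches 4.0
```

The point of the example is that every agent holds the same estimate and applies the same updating rule, so the structure of forecast errors is pinned down exogenously; that shared rule is the "too much coordination" objected to above.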

The title of House's post asks (in response to an earlier piece by Mark Buchanan) whether agent-based models really are the future of the discipline. I have argued previously that they are enormously promising, but face one major methodological obstacle that needs to be overcome. This is the problem of quality control: unlike papers in empirical fields (where causal identification is paramount) or in theory (where robustness is key), there is no widely agreed upon set of criteria that can allow a referee to determine whether a given set of simulation results provides a deep and generalizable insight into the workings of the economy. One of the most celebrated agent-based models in economics---the Schelling segregation model---is also among the very earliest. Effective and acclaimed recent exemplars are in short supply, though there is certainly research effort at the highest levels pointed in this direction. The claim that such models can displace the equilibrium approach entirely is much too grandiose, but they should be able to find ample space alongside more orthodox approaches in time. 

---

The example of interacting trading strategies in this post wasn't pulled out of thin air; market ecology has been a recurrent theme on this blog. In ongoing work with Yeon-Koo Che and Jinwoo Kim, I am exploring the interaction of trading strategies in asset markets, with the goal of addressing some questions about the impact on volatility and welfare of high-frequency trading. We have found the agent-based approach very useful in thinking about these questions, and I'll present some preliminary results at a session on the methodology at the Rethinking Economics conference in New York next month. The event is free and open to the public but seating is limited and registration required. 

Wednesday, August 13, 2014

'Unemployment Fluctuations are Mainly Driven by Aggregate Demand Shocks'

Do the facts have a Keynesian bias?:

Using product- and labour-market tightness to understand unemployment, by Pascal Michaillat and Emmanuel Saez, Vox EU: For the five years from December 2008 to November 2013, the US unemployment rate remained above 7%, peaking at 10% in October 2009. This period of high unemployment is not well understood. Macroeconomists have proposed a number of explanations for the extent and persistence of unemployment during the period, including:

  • High mismatch caused by major shocks to the financial and housing sectors,
  • Low search effort from unemployed workers triggered by long extensions of unemployment insurance benefits, and
  • Low aggregate demand caused by a sudden need to repay debts or pessimism.

But no consensus has been reached.

In our opinion this lack of consensus is due to a gap in macroeconomic theory: we do not have a model that is rich enough to account for the many factors driving unemployment – including aggregate demand – and simple enough to lend itself to pencil-and-paper analysis. ...

In Michaillat and Saez (2014), we develop a new model of unemployment fluctuations to inspect the mechanisms behind unemployment fluctuations. The model can be seen as an equilibrium version of the Barro-Grossman model. It retains the architecture of the Barro-Grossman model but replaces the disequilibrium framework on the product and labour markets with an equilibrium matching framework. ...

Through the lens of our simple model, the empirical evidence suggests that prices and real wages are somewhat rigid, and that unemployment fluctuations are mainly driven by aggregate demand shocks.

Tuesday, August 12, 2014

Why Do Macroeconomists Disagree?

I have a new column:

Why Do Macroeconomists Disagree?, by Mark Thoma, The Fiscal Times: On August 9, 2007, the French bank BNP Paribas halted redemptions from three investment funds active in US mortgage markets due to severe liquidity problems, an event that many mark as the beginning of the financial crisis. Now, just over seven years later, economists still can’t agree on what caused the crisis, why it was so severe, and why the recovery has been so slow. We can’t even agree on the extent to which modern macroeconomic models failed, or if they failed at all.
The lack of a consensus within the profession on the economics of the Great Recession, one of the most significant economic events in recent memory, provides a window into the state of macroeconomics as a science. ...

Monday, August 11, 2014

'On Macroeconomic Forecasting'

Simon Wren-Lewis:

...The rather boring truth is that it is entirely predictable that forecasters will miss major recessions, just as it is equally predictable that each time this happens we get hundreds of articles written asking what has gone wrong with macro forecasting. The answer is always the same - nothing. Macroeconomic model based forecasts are always bad, but probably no worse than intelligent guesses.

More here.

'Inflation in the Great Recession and New Keynesian Models'

From the NY Fed's Liberty Street Economics:

Inflation in the Great Recession and New Keynesian Models, by Marco Del Negro, Marc Giannoni, Raiden Hasegawa, and Frank Schorfheide: Since the financial crisis of 2007-08 and the Great Recession, many commentators have been baffled by the “missing deflation” in the face of a large and persistent amount of slack in the economy. Some prominent academics have argued that existing models cannot properly account for the evolution of inflation during and following the crisis. For example, in his American Economic Association presidential address, Robert E. Hall called for a fundamental reconsideration of Phillips curve models and their modern incarnation—so-called dynamic stochastic general equilibrium (DSGE) models—in which inflation depends on a measure of slack in economic activity. The argument is that such theories should have predicted more and more disinflation as long as the unemployment rate remained above a natural rate of, say, 6 percent. Since inflation declined somewhat in 2009, and then remained positive, Hall concludes that such theories based on a concept of slack must be wrong.        
In an NBER working paper and a New York Fed staff report (forthcoming in the American Economic Journal: Macroeconomics), we use a standard New Keynesian DSGE model with financial frictions to explain the behavior of output and inflation since the crisis. This model was estimated using data up to 2008. We find that following the increase in financial stress in 2008, the model successfully predicts not only the sharp contraction in economic activity, but also only a modest decline in inflation. ...

Thursday, July 31, 2014

'What Are Academics Good For?'

Simon Wren-Lewis:

What are academics good for?: A survey of US academic economists, which found that 36 thought the Obama fiscal stimulus reduced unemployment and only one thought otherwise, led to this cri de coeur from Paul Krugman. What is the point in having academic research if it is ignored, he asked. At the same time I was involved in a conversation on Twitter, where the person I was tweeting with asked ... why should we take any more notice of what academic economists say about economics than, well, City economists or economic journalists?
Here is a very good example of why. ...

Sunday, July 27, 2014

'Monetarist, Keynesian, and Minskyite Depressions Once Again'

Brad DeLong:

I have said this before. But I seem to need to say it again…
The very intelligent and thoughtful David Beckworth, Simon Wren-Lewis, and Nick Rowe are agreeing on New Keynesian-Market Monetarist monetary-fiscal convergence. Underpinning all of their analyses there seems to me to be the assumption that all aggregate demand shortfalls spring from the same deep market failures. And I think that that is wrong. ...

Wednesday, July 23, 2014

'Wall Street Skips Economics Class'

The discussion continues:

Wall Street Skips Economics Class, by Noah Smith: If you care at all about what academic macroeconomists are cooking up (or if you do any macro investing), you might want to check out the latest economics blog discussion about the big change that happened in the late '70s and early '80s. Here’s a post by the University of Chicago economist John Cochrane, and here’s one by Oxford’s Simon Wren-Lewis that includes links to most of the other contributions.
In case you don’t know the background, here’s the short version...

Friday, July 18, 2014

'Further Thoughts on Phillips Curves'

Simon Wren-Lewis:

Further thoughts on Phillips curves: In a post from a few days ago I looked at some recent evidence on Phillips curves, treating the Great Recession as a test case. I cast the discussion as a debate between rational and adaptive expectations. Neither is likely to be 100% right of course, but I suggested the evidence implied rational expectations were more right than adaptive. In this post I want to relate this to some other people’s work and discussion. (See also this post from Mark Thoma.) ...
The first issue is why look at just half a dozen years, in only a few countries. As I noted in the original post, when looking at CPI inflation there are many short term factors that may mislead. Another reason for excluding European countries which I did not mention is the impact of austerity driven higher VAT rates (and other similar taxes or administered prices), nicely documented by Klitgaard and Peck. Surely all this ‘noise’ is an excellent reason to look over a much longer time horizon?
One answer is given in this recent JEL paper by Mavroeidis, Plagborg-Møller and Stock. As Plagborg-Møller notes in an email to Mark Thoma: “Our meta-analysis finds that essentially any desired parameter estimates can be generated by some reasonable-sounding specification. That is, estimation of the NKPC is subject to enormous specification uncertainty. This is consistent with the range of estimates reported in the literature. ... traditional aggregate time series analysis is just not very informative about the nature of inflation dynamics.” This had been my reading based on work I’d seen.
This is often going to be the case with time series econometrics, particularly when key variables appear in the form of expectations. Faced with this, what economists often look for is some decisive and hopefully large event, where all the issues involving specification uncertainty can be sidelined or become second order. The Great Recession, for countries that did not suffer a second recession, might be just such an event. In earlier, milder recessions it was also much less clear what the monetary authority’s inflation target was (if it had one at all), and how credible it was. ...

I certainly agree with the claim that a "decisive and hopefully large event" is needed to empirically test econometric models since I've made the same point many times in the past. For example, "...the ability to choose one model over the other is not quite as hopeless as I’ve implied. New data and recent events like the Great Recession push these models into uncharted territory and provide a way to assess which model provides better predictions. However, because of our reliance on historical data this is a slow process – we have to wait for data to accumulate – and there’s no guarantee that once we are finally able to pit one model against the other we will be able to crown a winner. Both models could fail..."

Anyway...he goes on to discuss "How does what I did relate to recent discussions by Paul Krugman?," and concludes with:

My interpretation suggests that the New Keynesian Phillips curve is a more sensible place to start from than the adaptive expectations Friedman/Phelps version. As this is the view implicitly taken by most mainstream academic macroeconomics, but using a methodology that does not ensure congruence with the data, I think it is useful to point out when the mainstream does have empirical support. ...

Monday, July 14, 2014

Is There a Phillips Curve? If So, Which One?

One place that Paul Krugman and Chris House disagree is on the Phillips curve. Krugman (responding to a post by House) says:

New Keynesians do stuff like one-period-ahead price setting or Calvo pricing, in which prices are revised randomly. Practicing Keynesians have tended to rely on “accelerationist” Phillips curves in which unemployment determined the rate of change rather than the level of inflation.
So what has happened since 2008 is that both of these approaches have been found wanting: inflation has dropped, but stayed positive despite high unemployment. What the data actually look like is an old-fashioned non-expectations Phillips curve. And there are a couple of popular stories about why: downward wage rigidity even in the long run, anchored expectations.

House responds:

What the data actually look like is an old-fashioned non-expectations Phillips curve. 
OK, here is where we disagree. Certainly this is not true for the data overall. It seems like Paul is thinking that the system governing the relationship between inflation and output changes between something with essentially a vertical slope (a “Classical Phillips curve”) and a nearly flat slope (a “Keynesian Phillips Curve”). I doubt that this will fit the data particularly well and it would still seem to open the door to a large role for “supply shocks” – shocks that neither Paul nor I think play a big role in business cycles.

Simon Wren-Lewis also has something to say about this in his post from earlier today, Has the Great Recession killed the traditional Phillips Curve?:

Before the New Classical revolution there was the Friedman/Phelps Phillips Curve (FPPC), which said that current inflation depended on some measure of the output/unemployment gap and the expected value of current inflation (with a unit coefficient). Expectations of inflation were modelled as some function of past inflation (e.g. adaptive expectations) - at its simplest just one lag in inflation. Therefore in practice inflation depended on lagged inflation and the output gap.
After the New Classical revolution came the New Keynesian Phillips Curve (NKPC), which had current inflation depending on some measure of the output/unemployment gap and the expected value of inflation in the next period. If this was combined with adaptive expectations, it would amount to much the same thing as the FPPC, but instead it was normally combined with rational expectations, where agents made their best guess at what inflation would be next period using all relevant information. This would include past inflation, but it would include other things as well, like prospects for output and any official inflation target.
Which better describes the data? ...
[W]e can see why some ... studies (like this for the US) can claim that recent inflation experience is consistent with the NKPC. It seems much more difficult to square this experience with the traditional adaptive expectations Phillips curve. As I suggested at the beginning, this is really a test of whether rational expectations is a better description of reality than adaptive expectations. But I know the conclusion I draw from the data will upset some people, so I look forward to a more sophisticated empirical analysis showing why I’m wrong.
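The contrast between the two curves can be illustrated with a back-of-the-envelope simulation. The parameter values below are invented for illustration, and anchoring expected inflation at a credible 2% target is only a crude stand-in for the full rational-expectations solution of the NKPC:

```python
TARGET = 2.0  # assumed credible inflation target (percent)
BETA = 0.99   # discount factor on expected inflation in the NKPC
KAPPA = 0.2   # slope on the output gap (illustrative)
GAP = -3.0    # persistent negative output gap, Great Recession style

adaptive_path, nkpc_path = [], []
pi_adaptive = TARGET  # both economies start at target inflation
for _ in range(6):    # six years of slack
    # FPPC with adaptive expectations: expected inflation equals last
    # period's inflation, so slack grinds inflation down every period.
    pi_adaptive = pi_adaptive + KAPPA * GAP
    adaptive_path.append(pi_adaptive)
    # NKPC with anchored expectations: E[pi(t+1)] stays at the target,
    # so inflation dips below target but does not spiral downward.
    nkpc_path.append(BETA * TARGET + KAPPA * GAP)

print(adaptive_path[-1], nkpc_path[-1])
```

On this stylized run, six years of persistent slack drive the adaptive-expectations curve into outright deflation, while the anchored NKPC delivers the pattern actually observed: inflation below target but still positive.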

I don't have much to add, except to say that this is an empirical question that will be difficult to resolve empirically (because there are so many different ways to estimate a Phillips curve, and different specifications give different answers, e.g. which measure of prices to use, which measure of aggregate activity to use, what time period to use and how to handle structural and policy breaks during the period that is chosen, how should natural rates be extracted from the data, how to handle non-stationarities, if we measure aggregate activity with the unemployment rate, do we exclude the long-term unemployed as recent research suggests, how many lags should be included, etc., etc.?).

Sunday, July 13, 2014

New Classical Economics as Modeling Strategy

Judy Klein emails a response to a recent post of mine based upon Simon Wren-Lewis's post “Rereading Lucas and Sargent 1979”:

Lucas and Sargent’s, “After Keynesian Macroeconomics,” was presented at the 1978 Boston Federal Reserve Conference on “After the Phillips Curve: Persistence of High Inflation and High Unemployment.” Although the title of the conference dealt with stagflation, the rational expectations theorists saw themselves countering one technical revolution with another.

The Keynesian Revolution was, in the form in which it succeeded in the United States, a revolution in method. This was not Keynes’s intent, nor is it the view of all of his most eminent followers. Yet if one does not view the revolution in this way, it is impossible to account for some of its most important features: the evolution of macroeconomics into a quantitative, scientific discipline, the development of explicit statistical descriptions of economic behavior, the increasing reliance of government officials on technical economic expertise, and the introduction of the use of mathematical control theory to manage an economy. [Lucas and Sargent, 1979, pg. 50]

The Lucas papers at the Economists' Papers Project at Duke University reveal the preliminary planning for the 1978 presentation. Lucas and Sargent decided that it would be a “rhetorical piece… to convince others that the old-fashioned macro game is up…in a way which makes it clear that the difficulties are fatal”; its theme would be the “death of macroeconomics” and the desirability of replacing it with an “Aggregative Economics” whose foundation was “equilibrium theory” (Lucas letter to Sargent, February 9, 1978). Their 1978 presentation was replete, as their discussant Bob Solow pointed out, with the planned rhetorical barbs against Keynesian economics: “wildly incorrect,” “fundamentally flawed,” “wreckage,” “failure,” “fatal,” “of no value,” “dire implications,” “failure on a grand scale,” “spectacular recent failure,” “no hope.” The empirical backdrop to Lucas and Sargent’s death decree on Keynesian economics was evident in the subtitle of the conference: “Persistence of High Inflation and High Unemployment.”

Although they seized the opportunity to comment on policy failure and the high misery-index economy, Lucas and Sargent shifted the macroeconomic court of judgment from the economy to microeconomics. They fought a technical battle over the types of restrictions used by modelers to identify their structural models. Identification-rendering restrictions were essential to making both the Keynesian and rational expectations models “work” in policy applications, but Lucas and Sargent defined the ultimate terms of success not with regard to a model’s capacity for empirical explanation or achievement of a desirable policy outcome, but rather with regard to the model’s capacity to incorporate optimization and equilibrium – to aggregate consistently rational individuals and cleared markets.

In the macroeconomic history written by the victors, the Keynesian revolution and the rational expectations revolution were both technical revolutions, and one could delineate the sides of the battle line in the second revolution by the nature of the restricting assumptions that enabled the model identification that licensed policy prescription. The rational expectations revolution, however, was also a revolution in the prime referential framework for judging macroeconomic model fitness for going forth and multiplying; the consistency of the assumptions – the equation restrictions - with optimizing microeconomics and mathematical statistical theory, rather than end uses of explaining the economy and empirical statistics, constituted the new paramount selection criteria.

Some of the new classical macroeconomists have been explicit about the narrowness of their revolution. For example, Sargent noted in 2008, “While rational expectations is often thought of as a school of economic thought, it is better regarded as a ubiquitous modeling technique used widely throughout economics.” In an interview with Arjo Klamer in 1983, Robert Townsend asserted that “New classical economics means a modeling strategy.”

It is no coincidence, however, that in this modeling narrative of economic equilibrium crafted in the Cold War era, Adam Smith’s invisible hand morphs into a welfare-maximizing “hypothetical ‘benevolent social planner’” (Lucas, Prescott, Stokey 1989) enforcing a “communism of models” (Sargent 2007) and decreeing to individual agents the mutually consistent rules of action that become the equilibrating driving force. Indeed, a long-term Office of Naval Research grant for “Planning & Control of Industrial Operations” awarded to the Carnegie Institute of Technology’s Graduate School of Industrial Administration had funded Herbert Simon’s articulation of his certainty equivalence theorem and John Muth’s study of rational expectations. It is ironic that a decade-long government planning contract employing Carnegie professors and graduate students underwrote the two key modeling strategies for the Nobel-prize winning demonstration that the rationality of consumers renders government intervention to increase employment unnecessary and harmful.

Friday, July 11, 2014

'Rereading Lucas and Sargent 1979'

Simon Wren-Lewis with a nice follow-up to an earlier discussion:

Rereading Lucas and Sargent 1979: Mainly for macroeconomists and those interested in macroeconomic thought
Following this little interchange (me, Mark Thoma, Paul Krugman, Noah Smith, Robert Waldman, Arnold Kling), I reread what could be regarded as the New Classical manifesto: Lucas and Sargent’s ‘After Keynesian Macroeconomics’ (hereafter LS). It deserves to be cited as a classic, both for the quality of ideas and the persuasiveness of the writing. It does not seem like something written 35 years ago, which is perhaps an indication of how influential its ideas still are.
What I want to explore is whether this manifesto for the New Classical counter revolution was mainly about stagflation, or whether it was mainly about methodology. LS kick off their article with references to stagflation and the failure of Keynesian theory. A fundamental rethink is required. What follows next is I think crucial. If the counter revolution is all about stagflation, we might expect an account of why conventional theory failed to predict stagflation - the equivalent, perhaps, to the discussion of classical theory in the General Theory. Instead we get something much more general - a discussion of why identification restrictions typically imposed in the structural econometric models (SEMs) of the time are incredible from a theoretical point of view, and an outline of the Lucas critique.
In other words, the essential criticism in LS is methodological: the way empirical macroeconomics has been done since Keynes is flawed. SEMs cannot be trusted as a guide for policy. In only one paragraph do LS try to link this general critique to stagflation...[continue]...

Sunday, July 06, 2014

'Slump Stories and the Inflation Test'

Does evidence matter?:

Slump Stories and the Inflation Test: Noah Smith has another post on John Cochrane’s anti-Keynesian screed... All the anti-Keynesian stories (except “uncertainty”, which as Nick Rowe points out is actually a Keynesian story but doesn’t know it) are supply-side stories; Cochrane even puts scare quotes around the word “demand”. Basically, they’re claiming that unemployment benefits, or Obamacare, or regulations, or something, are reducing the willingness of workers and firms to produce stuff.
How would you test this? In a supply-constrained economy, the kind of monetary policy we’ve had, with the Fed quintupling the size of its balance sheet over a short period of time, would be highly inflationary. Indeed, just about everyone on the right has been predicting runaway inflation year after year.
Meanwhile, if you had a demand-side view, and considered the implications of the zero lower bound, you declared that nothing of the sort would happen...
It seems to me that the failure of the inflation predicted by anti-Keynesians to appear — and the fact that this failure was predicted by Keynesian models — is a further big reason not to take what these people are saying seriously.

In a "supply-constrained economy" the price of inputs like labor should also rise, but that hasn't happened either.

Friday, July 04, 2014

Responses to John Cochrane's Attack on New Keynesian Models

The opening quote from chapter 2 of Mankiw's intermediate macro textbook:

It is a capital mistake to theorize before one has data. Insensibly one begins to twist facts to suit theories, instead of theories to fit facts. — Sherlock Holmes

Or, instead of "before one has data," change it to "It is a capital mistake to theorize without knowledge of the data" and it's a pretty good summary of Paul Krugman's response to John Cochrane:

Macro Debates and the Relevance of Intellectual History: One of the interesting things about the ongoing economic crisis is the way it has demonstrated the importance of historical knowledge. ... But it’s not just economic history that turns out to be extremely relevant; intellectual history — the history of economic thought — turns out to be relevant too.
Consider, in particular, the recent to-and-fro about stagflation and the rise of new classical macroeconomics. You might think that this is just economist navel-gazing; but you’d be wrong.
To see why, consider John Cochrane’s latest. ... Cochrane’s current argument ... effectively depends on the notion that there must have been very good reasons for the rejection of Keynesianism, and that harkening back to old ideas must involve some kind of intellectual regression. And that’s where it’s important — as David Glasner notes — to understand what really happened in the 70s.
The point is that the new classical revolution in macroeconomics was not a classic scientific revolution, in which an old theory failed crucial empirical tests and was supplanted by a new theory that did better. The rejection of Keynes was driven by a quest for purity, not an inability to explain the data — and when the new models clearly failed the test of experience, the new classicals just dug in deeper. They didn’t back down even when people like Chris Sims (pdf), using the very kinds of time-series methods they introduced, found that they strongly pointed to a demand-side model of economic fluctuations.
And critiques like Cochrane’s continue to show a curious lack of interest in evidence. ... In short, you have a much better sense of what’s really going on here, and which ideas remain relevant, if you know about the unhappy history of macroeconomic thought.

Nick Rowe:

Insufficient Demand vs?? Uncertainty: ...John Cochrane says: "John Taylor, Stanford's Nick Bloom and Chicago Booth's Steve Davis see the uncertainty induced by seat-of-the-pants policy at fault. Who wants to hire, lend or invest when the next stroke of the presidential pen or Justice Department witch hunt can undo all the hard work? Ed Prescott emphasizes large distorting taxes and intrusive regulations. The University of Chicago's Casey Mulligan deconstructs the unintended disincentives of social programs. And so forth. These problems did not cause the recession. But they are worse now, and they can impede recovery and retard growth." ...
Increased political uncertainty would reduce aggregate demand. Plus, positive feedback processes could amplify that initial reduction in aggregate demand. Even those who were not directly affected by that increased political uncertainty would reduce their own willingness to hire, lend, or invest because of that initial reduction in aggregate demand, plus their own uncertainty about aggregate demand. So the average person or firm might respond to a survey by saying that insufficient demand was the problem in their particular case, and not the political uncertainty which caused it.
But the demand-side problem could still be prevented by an appropriate monetary policy response. Sure, there would be supply-side effects too. And it would be very hard empirically to estimate the relative magnitudes of those demand-side vs supply-side effects. ...
So it's not just an either/or thing. Nor is it even a bit-of-one-plus-bit-of-the-other thing. Increased political uncertainty can cause a recession via its effect on demand. Unless monetary policy responds appropriately. (And that, of course, would mean targeting NGDP, because inflation targeting doesn't work when supply-side shocks cause adverse shifts in the Short Run Phillips Curve.)

On whether supply or demand shocks are the source of aggregate fluctuations, Blanchard and Quah (1989), Shapiro and Watson (1988), and others had it right (though the identifying restriction that aggregate demand shocks do not have permanent effects seems to be undermined by the Great Recession). It's not an either/or question, it's a matter of figuring out how much of the variation in GDP/employment is due to supply shocks, and how much is due to demand shocks. And as Nick Rowe points out with his example, sorting between these two causes can be very difficult -- identifying which type of shock is driving changes in aggregate variables is not at all easy and depends upon particular assumptions. Nevertheless, my reading of the empirical evidence is much like Krugman's. Overall, across all these papers, it is demand shocks that play the most prominent role. Supply shocks do matter, but not nearly so much as demand shocks when it comes to explaining aggregate fluctuations.
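The identification scheme in Blanchard and Quah (1989) can be sketched in a few lines. In a bivariate VAR, the restriction that demand shocks have no long-run effect on the level of output pins down the structural shocks via a Cholesky factorization of the long-run covariance. The coefficient values below are made up purely for illustration:

```python
import numpy as np

# Reduced-form VAR(1): x_t = A x_{t-1} + u_t, with u_t = B eps_t and
# structural shocks eps_t ~ iid, Cov(eps_t) = I. Ordering: supply shock
# first, demand shock second. The BQ restriction makes the long-run impact
# matrix C(1) = (I - A)^{-1} B lower triangular, so the demand shock has no
# long-run effect on output.
A = np.array([[0.5, 0.1],
              [0.2, 0.7]])          # illustrative VAR coefficients
Sigma = np.array([[1.0, 0.3],
                  [0.3, 0.5]])      # illustrative residual covariance

IA_inv = np.linalg.inv(np.eye(2) - A)

# C(1) C(1)' = (I-A)^{-1} Sigma (I-A)^{-T}; the lower-triangular C(1) is
# its Cholesky factor.
C1 = np.linalg.cholesky(IA_inv @ Sigma @ IA_inv.T)
B = (np.eye(2) - A) @ C1            # contemporaneous impact of the shocks

print(C1)  # C1[0, 1] == 0: the demand shock has no long-run output effect
```

Note how much weight the zero in `C1[0, 1]` carries: the entire supply/demand decomposition follows from that one assumed long-run restriction, which is why the caveat about the Great Recession matters.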

Saturday, June 28, 2014

The Rise and Fall of the New Classical Model

Simon Wren-Lewis (my comments are at the end):

Understanding the New Classical revolution: In the account of the history of macroeconomic thought I gave here, the New Classical counter revolution was both methodological and ideological in nature. It was successful, I suggested, because too many economists were unhappy with the gulf between the methodology used in much of microeconomics, and the methodology of macroeconomics at the time.
There is a much simpler reading. Just as the original Keynesian revolution was caused by massive empirical failure (the Great Depression), the New Classical revolution was caused by the Keynesian failure of the 1970s: stagflation. An example of this reading is in this piece by the philosopher Alex Rosenberg (HT Diane Coyle). He writes: “Back then it was the New Classical macrotheory that gave the right answers and explained what the matter with the Keynesian models was.”
I just do not think that is right. Stagflation is very easily explained: you just need an ‘accelerationist’ Phillips curve (i.e. where the coefficient on expected inflation is one), plus a period in which monetary policymakers systematically underestimate the natural rate of unemployment. You do not need rational expectations, or any of the other innovations introduced by New Classical economists.
No doubt the inflation of the 1970s made the macroeconomic status quo unattractive. But I do not think the basic appeal of New Classical ideas lay in their better predictive ability. The attraction of rational expectations was not that it explained actual expectations data better than some form of adaptive scheme. Instead it just seemed more consistent with the general idea of rationality that economists used all the time. Ricardian Equivalence was not successful because the data revealed that tax cuts had no impact on consumption - in fact study after study has shown that tax cuts do have a significant impact on consumption.
Stagflation did not kill IS-LM. In fact, because empirical validity was so central to the methodology of macroeconomics at the time, it adapted to stagflation very quickly. This gave a boost to the policy of monetarism, but this used the same IS-LM framework. If you want to find the decisive event that led to New Classical economists winning their counterrevolution, it was the theoretical realisation that if expectations were rational, but inflation was described by an accelerationist Phillips curve with expectations about current inflation on the right hand side, then deviations from the natural rate had to be random. The fatal flaw in the Keynesian/Monetarist theory of the 1970s was theoretical rather than empirical.
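Wren-Lewis's point that stagflation is "very easily explained" fits in a few lines: an accelerationist Phillips curve plus a policymaker who holds unemployment at an underestimated natural rate delivers inflation that ratchets up period after period with no lasting employment gain. The parameter values below are illustrative, not estimates:

```python
# Accelerationist Phillips curve: pi_t = pi_{t-1} + beta*(u_star - u_t).
# The policymaker believes the natural rate is 4% when it is really 6%,
# and holds unemployment at the believed rate.
beta, u_star_true, u_star_believed = 0.5, 6.0, 4.0

pi = [2.0]                     # start from 2% inflation
for t in range(10):
    u = u_star_believed        # policy keeps u at the (wrong) perceived natural rate
    pi.append(pi[-1] + beta * (u_star_true - u))

print(pi)  # 2.0, 3.0, 4.0, ..., 12.0 -- inflation rises every single period
```

No rational expectations are needed: a unit coefficient on lagged inflation and a systematic misestimate of the natural rate are enough to generate ever-rising inflation alongside unemployment that is high relative to its true natural rate.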

I agree with this, so let me add to it by talking about what led to the end of the New Classical revolution (see here for a discussion of the properties of New Classical, New Keynesian, and Real Business Cycle Models). The biggest factor was empirical validity. Although some versions of the New Classical model allowed monetary non-neutrality (e.g. King 1982, JPE), when three factors are present, continuous market clearing, rational expectations, and the natural rate hypothesis, monetary neutrality is generally present in these models. Initially, work by people like Barro found strong support for the prediction of these models that only unanticipated changes in monetary policy can affect real variables like output, but subsequent work and eventually the weight of the evidence pointed in the other direction. Both expected and unexpected changes in the money supply appeared to matter, in contrast to a key prediction of the New Classical framework.

A second factor that worked against New Classical models is that they had difficulty explaining both the duration and magnitude of actual business cycles. If the reaction to an unexpected policy shock was focused in a single period, the magnitude could be matched, but not the duration. If the shock was spread over 3-5 years to match the duration, the magnitude of cycles could not be matched. Movements in macroeconomic variables arising from informational errors (unexpected policy shocks) did not have enough "power" to capture both aspects of actual business cycles.

A third factor that worked against these models was that, within them, informational errors are the key driver of swings in GDP and employment, and these swings are costly in the aggregate. Yet no markets for information appeared to resolve this problem. For those who believe in the power of markets, and many proponents of New Classical models were also market fundamentalists, the absence of such markets was a problem.

The New Classical model had displaced the Keynesian model for the reasons highlighted above, but the failure of the New Classical model left the door open for the New Keynesian model to emerge (it appeared to be more consistent with the empirical evidence on the effects of changes in the money supply, and in other areas as well, e.g. the correlation between productivity and economic activity).

But while the New Classical revolution was relatively short-lived as macro models go, it left two important legacies: rational expectations and microfoundations (as well as better knowledge about how non-neutralities might arise, in essence the New Keynesian model drops continuous market clearing through the assumption of short-run price rigidities, and about how to model information sets). Rightly or wrongly, all subsequent models had to have these two elements present within them (RE and microfoundations), or they would be dismissed.

Thursday, June 26, 2014

Why DSGEs Crash During Crises

David Hendry and Grayham Mizon with an important point about DSGE models:

Why DSGEs crash during crises, by David F. Hendry and Grayham E. Mizon: Many central banks rely on dynamic stochastic general equilibrium models – known as DSGEs to cognoscenti. This column – which is more technical than most Vox columns – argues that the models’ mathematical basis fails when crises shift the underlying distributions of shocks. Specifically, the linchpin ‘law of iterated expectations’ fails, so economic analyses involving conditional expectations and inter-temporal derivations also fail. Like a fire station that automatically burns down whenever a big fire starts, DSGEs become unreliable when they are most needed.

Here's the introduction:

In most aspects of their lives humans must plan forwards. They take decisions today that affect their future in complex interactions with the decisions of others. When taking such decisions, the available information is only ever a subset of the universe of past and present information, as no individual or group of individuals can be aware of all the relevant information. Hence, views or expectations about the future, relevant for their decisions, use a partial information set, formally expressed as a conditional expectation given the available information.
Moreover, all such views are predicated on there being no unanticipated future changes in the environment pertinent to the decision. This is formally captured in the concept of ‘stationarity’. Without stationarity, good outcomes based on conditional expectations could not be achieved consistently. Fortunately, there are periods of stability when insights into the way that past events unfolded can assist in planning for the future.
The world, however, is far from completely stationary. Unanticipated events occur, and they cannot be dealt with using standard data-transformation techniques such as differencing, or by taking linear combinations, or ratios. In particular, ‘extrinsic unpredictability’ – unpredicted shifts of the distributions of economic variables at unanticipated times – is common. As we shall illustrate, extrinsic unpredictability has dramatic consequences for the standard macroeconomic forecasting models used by governments around the world – models known as ‘dynamic stochastic general equilibrium’ models – or DSGE models. ...[continue]...
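Hendry and Mizon's point can be illustrated in a toy setting (all numbers here are made up): a forecast built as a conditional expectation from a stationary sample works fine in-sample, but an unanticipated location shift in the distribution makes the same forecast systematically biased:

```python
import numpy as np

rng = np.random.default_rng(2)

mu_old, mu_new = 2.0, -1.0
pre = rng.normal(mu_old, 1.0, 500)     # stationary sample used to form E[x]
forecast = pre.mean()                  # the "conditional expectation"

post = rng.normal(mu_new, 1.0, 500)    # distribution shifts at an unanticipated time
errors_pre = rng.normal(mu_old, 1.0, 500) - forecast   # more draws from the old regime
errors_post = post - forecast

print(errors_pre.mean())   # close to zero: the expectation works while things are stable
print(errors_post.mean())  # close to mu_new - mu_old = -3: systematic forecast bias
```

This is the "extrinsic unpredictability" of the column in miniature: no amount of averaging over the pre-break data protects the forecast against an unanticipated shift in the underlying distribution.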

Update: [nerdy] Reply to Hendry and Mizon: we have DSGE models with time-varying parameters and variances.

Tuesday, June 24, 2014

'Was the Neoclassical Synthesis Unstable?'

The last paragraph from a much longer argument by Simon Wren-Lewis:

Was the neoclassical synthesis unstable?: ... Of course we have moved on from the 1980s. Yet in some respects we have not moved very far. With the counter revolution we swung from one methodological extreme to the other, and we have not moved much since. The admissibility of models still depends on their theoretical consistency rather than consistency with evidence. It is still seen as more important when building models of the business cycle to allow for the endogeneity of labour supply than to allow for involuntary unemployment. What this means is that many macroeconomists who think they are just ‘taking theory seriously’ are in fact applying a particular theoretical view which happens to suit the ideology of the counter revolutionaries. The key to changing that is to first accept it.

Friday, May 09, 2014

Economists and Methodology

Simon Wren-Lewis:

Economists and methodology: ...very few economists write much about methodology. This would be understandable if economics was just like some other discipline where methodological discussion was routine. This is not the case. Economics is not like the physical sciences for well known reasons. Yet economics is not like most other social sciences either: it is highly deductive, highly abstractive (in the non-philosophical sense) and rarely holistic. ...
This is a long winded way of saying that the methodology used by economics is interesting because it is unusual. Yet, as I say, you will generally not find economists writing about methodology. One reason for this is ... a feeling that the methodology being used is unproblematic, and therefore requires little discussion.
I cannot help giving the example of macroeconomics to show that this view is quite wrong. The methodology of macroeconomics in the 1960s was heavily evidence based. Microeconomics was used to suggest aggregate relationships, but not to determine them. Consistency with the data (using some chosen set of econometric criteria) often governed what was or was not allowed in a parameterised (numerical) model, or even a theoretical model. It was a methodology that some interpreted as Popperian. The methodology of macroeconomics now is very different. Consistency with microeconomic theory governs what is in a DSGE model, and evidence plays a much more indirect role. Now I have only a limited knowledge of the philosophy of science..., but I know enough to recognise this as an important methodological change. Yet I find many macroeconomists just assume that their methodology is unproblematic, because it is what everyone mainstream currently does. ...
... The classic example of an economist writing about methodology is Friedman’s Essays in Positive Economics. This puts forward an instrumentalist view: the idea that the realism of assumptions does not matter, it is results that count.
Yet does instrumentalism describe Friedman’s major contributions to macroeconomics? Well one of those was the expectations augmented Phillips curve. ... Friedman argued that the coefficient on expected inflation should be one. His main reason for doing so was not that such an adaptation predicted better, but because it was based on better assumptions about what workers were interested in: real rather than nominal wages. In other words, it was based on more realistic assumptions. ...
Economists do not think enough about their own methodology. This means economists are often not familiar with methodological discussion, which implies that using what they write on the subject as evidence about what they do can be misleading. Yet most methodological discussion of economics is (and should be) about what economists do, rather than what they think they do. That is why I find that the more interesting and accurate methodological writing on economics looks at the models and methods economists actually use...

Monday, May 05, 2014

'Refocusing Economics Education'

Antonio Fatás (each of the four points below are explained in detail in the original post):

Refocusing economics education: Via Mark Thoma I read an interesting article about how the mainstream economics curriculum needs to be revamped (Wren-Lewis also has some nice thoughts on this issue).

I am sympathetic to some of the arguments made in those posts and the need for some serious rethinking of the way economics is taught, but I would put the emphasis on slightly different arguments. First, I am not sure the recent global crisis should be the main reason to change the economics curriculum. Yes, economists failed to predict many aspects of the crisis, but my view is that this was not because of a lack of tools or understanding. We have enough models in economics to explain most of the phenomena that caused and propagated the global financial crisis. There are plenty of models where individuals are not rational, where financial markets are driven by bubbles, with multiple equilibria,... that one can use to understand the last decade. We do have all these tools, but as economics teachers (and researchers) we need to choose which ones to focus on. And here is where we failed - before the crisis, during it, and indeed earlier. Why aren't we focusing on the right models or methodology? Here is my list of the mistakes we make in our teaching, which might also reflect on our research:

#1 Too much theory, not enough emphasis on explaining empirical phenomena. ...

#2 Too many counterintuitive results. Economists like to teach things that are surprising. ...

#3 The need for a unified theory. ...

#4 We teach what our audience wants to hear. ...

I also believe the sociology within the profession needs to change.

Friday, May 02, 2014

Paul Krugman: Why Economics Failed

Why didn't fiscal policy makers listen to economists?:

Why Economics Failed, by Paul Krugman, Commentary, NY Times: On Wednesday, I wrapped up the class I’ve been teaching..: “The Great Recession: Causes and Consequences.” ...I found myself turning at the end to an agonizing question: Why, at the moment it was most needed and could have done the most good, did economics fail?
I don’t mean that economics was useless to policy makers. ... While ... few economists saw the crisis coming..., since the fall of Lehman Brothers, basic textbook macroeconomics has performed very well. ...
In what sense did economics work well? Economists who took their own textbooks seriously quickly diagnosed the nature of our economic malaise: We were suffering from inadequate demand ... and a depressed economy. ...
And the diagnosis ... had clear policy implications: ...this was no time to worry about budget deficits and cut spending... We needed more government spending, not less, to fill the hole left by inadequate private demand. But... Since 2010, we’ve seen a sharp decline in discretionary spending and an unprecedented decline in budget deficits, and the result has been anemic growth and long-term unemployment on a scale not seen since the 1930s.
So why didn’t we use the economic knowledge we had?
One answer is that most people find the logic of policy in a depressed economy counterintuitive. ... And even supposedly well-informed people balk at the notion that simple lack of demand can wreak so much havoc. Surely, they insist, we must have deep structural problems, like a work force that lacks the right skills; that sounds serious and wise, even though all the evidence says that it’s completely untrue.
Meanwhile, powerful political factions ... whose real goal is dismantling the social safety net have found promoting deficit panic an effective way to push their agenda. And such people have been aided and abetted by what I’ve come to think of as the trahison des nerds — the willingness of some economists to come up with analyses that tell powerful people what they want to hear, whether it’s that slashing government spending is actually expansionary, because of confidence, or that government debt somehow has dire effects on economic growth even if interest rates stay low.
Whatever the reasons basic economics got tossed aside, the result has been tragic. ... We have, all along, had the knowledge and the tools to restore full employment. But policy makers just keep finding reasons not to do the right thing.

Tuesday, April 08, 2014

A Model of Secular Stagnation

Gauti Eggertsson and Neil Mehrotra have an interesting new paper:

A Model of Secular Stagnation, by Gauti Eggertsson and Neil Mehrotra: 1 Introduction During the closing phase of the Great Depression in 1938, the President of the American Economic Association, Alvin Hansen, delivered a disturbing message in his Presidential Address to the Association (see Hansen (1939)). He suggested that the Great Depression might just be the start of a new era of ongoing unemployment and economic stagnation without any natural force towards full employment. This idea was termed the “secular stagnation” hypothesis. One of the main driving forces of secular stagnation, according to Hansen, was a decline in the population birth rate and an oversupply of savings that was suppressing aggregate demand. Soon after Hansen’s address, the Second World War led to a massive increase in government spending, effectively ending any concern of insufficient demand. Moreover, the baby boom following WWII drastically changed the population dynamics in the US, thus effectively erasing the problem of excess savings of an aging population that was of principal importance in his secular stagnation hypothesis.
Recently Hansen’s secular stagnation hypothesis has gained increased attention. One obvious motivation is the Japanese malaise that has by now lasted two decades and has many of the same symptoms as the U.S. Great Depression - namely dwindling population growth, a nominal interest rate at zero, and subpar GDP growth. Another reason for renewed interest is that even if the financial panic of 2008 was contained, growth remains weak in the United States and unemployment high. Most prominently, Lawrence Summers raised the prospect that the crisis of 2008 may have ushered in the beginning of secular stagnation in the United States in much the same way as suggested by Alvin Hansen in 1938. Summers suggests that this episode of low demand may even have started well before 2008 but was masked by the housing bubble before the onset of the crisis of 2008. In Summers’ words, we may have found ourselves in a situation in which the natural rate of interest - the short-term real interest rate consistent with full employment - is permanently negative (see Summers (2013)). And this, according to Summers, has profound implications for the conduct of monetary, fiscal and financial stability policy today.
Despite the prominence of Summers' discussion of the secular stagnation hypothesis and a flurry of commentary that followed it (see e.g. Krugman (2013), Taylor (2014), DeLong (2014) for a few examples), there has not, to the best of our knowledge, been any attempt to formally model this idea, i.e., to write down an explicit model in which unemployment is high for an indefinite amount of time due to a permanent drop in the natural rate of interest. The goal of this paper is to fill this gap. ...[read more]...

In the abstract, they note the policy prescriptions for secular stagnation:

In contrast to earlier work on deleveraging, our model does not feature a strong self-correcting force back to full employment in the long-run, absent policy actions. Successful policy actions include, among others, a permanent increase in inflation and a permanent increase in government spending. We also establish conditions under which an income redistribution can increase demand. Policies such as committing to keep nominal interest rates low or temporary government spending, however, are less powerful than in models with temporary slumps. Our model sheds light on the long persistence of the Japanese crisis, the Great Depression, and the slow recovery out of the Great Recession.

Friday, March 21, 2014

'Labor Markets Don't Clear: Let's Stop Pretending They Do'

Roger Farmer:

Labor Markets Don't Clear: Let's Stop Pretending They Do: Beginning with the work of Robert Lucas and Leonard Rapping in 1969, macroeconomists have modeled the labor market as if the wage always adjusts to equate the demand and supply of labor.

I don't think that's a very good approach. It's time to drop the assumption that the demand equals the supply of labor.
Why would you want to delete the labor market clearing equation from an otherwise standard model? Because setting the demand equal to the supply of labor is a terrible way of understanding business cycles. ...
Why is this a big deal? Because 90% of the macro seminars I attend, at conferences and universities around the world, still assume that the labor market is an auction where anyone can work as many hours as they want at the going wage. Why do we let our students keep doing this?

Saturday, February 15, 2014

'Microfoundations and Mephistopheles'

Paul Krugman continues the discussion on "whether New Keynesians made a Faustian bargain":

Microfoundations and Mephistopheles (Wonkish): Simon Wren-Lewis asks whether New Keynesians made a Faustian bargain by accepting the New Classical dictat that models must be grounded in intertemporal optimization — whether they purchased academic respectability at the expense of losing their ability to grapple usefully with the real world.
Wren-Lewis’s answer is no, because New Keynesians were only doing what they would have wanted to do even if there hadn’t been a de facto blockade of the journals against anything without rational-actor microfoundations. He has a point: long before anyone imagined doing anything like real business cycle theory, there had been a steady trend in macro toward grounding ideas in more or less rational behavior. The life-cycle model of consumption, for example, was clearly a step away from the Keynesian ad hoc consumption function toward modeling consumption choices as the result of rational, forward-looking behavior.
But I think we need to be careful about defining what, exactly, the bargain was. I would agree that being willing to use models with hyperrational, forward-looking agents was a natural step even for Keynesians. The Faustian bargain, however, was the willingness to accept the proposition that only models that were microfounded in that particular sense would be considered acceptable. ...
So it was the acceptance of the unique virtue of one concept of microfoundations that constituted the Faustian bargain. And one thing you should always know, when making deals with the devil, is that the devil cheats. New Keynesians thought that they had won some acceptance from the freshwater guys by adopting their methods; but when push came to shove, it turned out that there wasn’t any real dialogue, and never had been.

My view is that micro-founded models are useful for answering some questions, but other types of models are best for other questions. There is no one model that is best in every situation; the model that should be used depends upon the question being asked. I've made this point many times, most recently in this column, and also in this post from September 2011 that repeats arguments from September 2009:

New Old Keynesians?: Tyler Cowen uses the term "New Old Keynesian" to describe "Paul Krugman, Brad DeLong, Justin Wolfers and others." I don't know if I am part of the "and others" or not, but in any case I resist being assigned a particular label.

Why? Because I believe the model we use depends upon the questions we ask (this is a point emphasized by Peter Diamond at the recent Nobel Meetings in Lindau, Germany, and echoed by other speakers who followed him). If I want to know how monetary authorities should respond to relatively mild shocks in the presence of price rigidities, the standard New Keynesian model is a good choice. But if I want to understand the implications of a breakdown in financial intermediation and the possible policy responses to it, those models aren't very informative. They weren't built to answer this question (some variations do get at this, but not in a fully satisfactory way).

Here's a discussion of this point from a post written two years ago:

There is no grand, unifying theoretical structure in economics. We do not have one model that rules them all. Instead, what we have are models that are good at answering some questions - the ones they were built to answer - and not so good at answering others.

If I want to think about inflation in the very long run, the classical model and the quantity theory is a very good guide. But the model is not very good at looking at the short-run. For questions about how output and other variables move over the business cycle and for advice on what to do about it, I find the Keynesian model in its modern form (i.e. the New Keynesian model) to be much more informative than other models that are presently available.
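The long-run bookkeeping behind the quantity theory is simple enough to sketch in a few lines (a toy illustration of my own, not part of the original post; the function name and numbers are made up): starting from MV = PY and taking growth rates, with velocity roughly stable, inflation is approximately money growth minus real output growth.

```python
# Toy illustration (my sketch, not from the post) of the quantity theory
# in the long run. From MV = PY, taking growth rates, inflation is
# approximately money growth plus velocity growth minus output growth.

def long_run_inflation(money_growth, output_growth, velocity_growth=0.0):
    """Inflation implied by the quantity equation, in growth rates."""
    return money_growth + velocity_growth - output_growth

# Example: 5% money growth, 2% real growth, stable velocity -> about 3% inflation.
print(round(long_run_inflation(0.05, 0.02), 4))  # 0.03
```

The point of the sketch is the one in the text: this accounting is a good guide at very long horizons, and says nothing about short-run fluctuations.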

But the New Keynesian model has its limits. It was built to capture "ordinary" business cycles driven by price sluggishness of the sort that can be captured by the Calvo model of price rigidity. The standard versions of this model do not explain how financial collapses of the type we just witnessed come about, hence they have little to say about what to do about them (which makes me suspicious of the results touted by people using multipliers derived from DSGE models based upon ordinary price rigidities). For these types of disturbances, we need some other type of model, but it is not clear what model is needed. There is no generally accepted model of financial catastrophe that captures the variety of financial market failures we have seen in the past.
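The Calvo device mentioned above is easy to illustrate (a toy sketch of my own, not from the post; the function and calibration are illustrative): each period a firm gets to reset its price with probability 1 − θ, independent of how long its price has been fixed, so the average spell between price changes works out to 1/(1 − θ).

```python
import random

# Sketch of the Calvo pricing device (illustrative, not the post's code):
# each period a firm may reset its price with probability 1 - theta,
# regardless of how long its price has been fixed. The implied average
# spell between resets is 1 / (1 - theta).

def average_price_spell(theta, n_spells=100_000, seed=0):
    """Simulate the average length of price spells under Calvo pricing."""
    rng = random.Random(seed)
    total = 0
    for _ in range(n_spells):
        length = 1
        while rng.random() < theta:  # price stays fixed with probability theta
            length += 1
        total += length
    return total / n_spells

theta = 0.75  # a quarterly calibration often used in the literature
print(average_price_spell(theta))  # close to 1 / (1 - 0.75) = 4 quarters
```

This mechanical reset probability is exactly the kind of "ordinary" price sluggishness the text says the standard model was built around, and why it is silent about financial collapse.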

But what model do we use? Do we go back to old Keynes, or to the 1978 model that Robert Gordon likes? Do we take some of the variations of the New Keynesian model that include effects such as financial accelerators and try to enhance those? Is that the right direction to proceed? Are the Austrians right? Do we focus on Minsky? Or do we need a model that we haven't discovered yet?

We don't know, and until we do, I will continue to use the model I think gives the best answer to the question being asked. The reason that many of us looked backward for a model to help us understand the present crisis is that none of the current models were capable of explaining what we were going through. The models were largely constructed to analyze policy in the context of a Great Moderation, i.e., within a fairly stable environment. They had little to say about financial meltdown. My first reaction was to ask if the New Keynesian model had any derivative forms that would allow us to gain insight into the crisis and what to do about it and, while there were some attempts in that direction, the work was somewhat isolated and had not gone through the type of thorough analysis needed to develop robust policy prescriptions. There was something to learn from these models, but they really weren't up to the task of delivering specific answers. That may come, but we aren't there yet.

So, if nothing in the present is adequate, you begin to look to the past. The Keynesian model was constructed to look at exactly the kinds of questions we needed to answer, and as long as you are aware of the limitations of this framework - the ones that modern theory has discovered - it does provide you with a means of thinking about how economies operate when they are running at less than full employment. This model had already worried about fiscal policy at the zero interest rate bound, it had already thought about Say's law, the paradox of thrift, monetary versus fiscal policy, changing interest and investment elasticities in a crisis, etc., etc., etc. We were in the middle of a crisis and didn't have time to wait for new theory to be developed, we needed answers, answers that the elegant models that had been constructed over the last few decades simply could not provide. The Keynesian model did provide answers. We knew the answers had limitations - we were aware of the theoretical developments in modern macro and what they implied about the old Keynesian model - but it also provided guidance at a time when guidance was needed, and it did so within a theoretical structure that was built to be useful at times like we were facing. I wish we had better answers, but we didn't, so we did the best we could. And the best we could involved at least asking what the Keynesian model would tell us, and then asking if that advice has any relevance today. Sometimes it didn't, but that was no reason to ignore the answers when it did.

[So, depending on the question being asked, I am a New Keynesian, an Old Keynesian, a Classicist, etc.]

Friday, February 14, 2014

'Are New Keynesian DSGE Models a Faustian Bargain?'

Simon Wren-Lewis:

 Are New Keynesian DSGE models a Faustian bargain?: Some write as if this were true. The story is that after the New Classical counter revolution, Keynesian ideas could only be reintroduced into the academic mainstream by accepting a whole load of New Classical macro within DSGE models. This has turned out to be a Faustian bargain, because it has crippled the ability of New Keynesians to understand subsequent real world events. Is this how it happened? It is true that New Keynesian models are essentially RBC models plus sticky prices. But is this because New Keynesian economists were forced to accept the RBC structure, or did they voluntarily do so because they thought it was a good foundation on which to build? ...

Wednesday, February 12, 2014

'Is Increased Price Flexibility Stabilizing? Redux'

I need to read this:

Is Increased Price Flexibility Stabilizing? Redux, by Saroj Bhattarai, Gauti Eggertsson, and Raphael Schoenle, NBER Working Paper No. 19886, February 2014 [Open Link]: Abstract We study the implications of increased price flexibility on output volatility. In a simple DSGE model, we show analytically that more flexible prices always amplify output volatility for supply shocks and also amplify output volatility for demand shocks if monetary policy does not respond strongly to inflation. More flexible prices often reduce welfare, even under optimal monetary policy if full efficiency cannot be attained. We estimate a medium-scale DSGE model using post-WWII U.S. data. In a counterfactual experiment we find that if prices and wages are fully flexible, the standard deviation of annualized output growth more than doubles.

Thursday, February 06, 2014

'How the New Classicals Drank the Austrians' Milkshake'

In a tweet, Roger Farmer says "This is a very good summary of Austrian vs classical Econ":

How the New Classicals drank the Austrians' milkshake: The "Austrian School of Economics" is still a name that is lovingly invoked by goldbugs, Zero Hedgies, Ron Paulians, and various online rightists. But as a program of scholarship it seems mostly dead. There is a group of "Austrians" at George Mason and NYU trying to revive the school by evolving it in the direction of mainstream econ, and then there is the Mises Institute, which contents itself with bathing in the fading glow of the works of the Old Masters. But in the main, "Austrian economics" is an ex-thing. It seems to me that the Austrian School's demise came not because its ideas were rejected and marginalized, but because most of them were co-opted by mainstream macroeconomics. The "New Classical" research program of Robert Lucas and Ed Prescott shares just enough similarities with the Austrian school to basically steal all their thunder. The main points being...

Wednesday, January 29, 2014

'No, Micro is not the "Good" Economics'

Greg Ip at The Economist:

No, micro is not the "good" economics: If asked to compile a list of economists’ mistakes over the last decade, I would not know where to start. Somewhere near the top would be failure to predict the global financial crisis. Even higher on the list would be failure to agree, five years later, on its cause. Is this fair? Not according to Noah Smith: these, he says, were not errors of economics but of macroeconomics. Microeconomics is the good economics, where economists by and large agree, conduct controlled experiments that confirm or modify established theory and lead to all sorts of welfare-enhancing outcomes.
To which I respond with two words: minimum wage..., ask any two economists – macro, micro, whatever – whether raising the minimum wage will reduce employment for the low skilled, and odds are you will get two answers. Sometimes more. (By contrast, ask them if raising interest rates will reduce output within a year or two, and almost all – that is, excepting real-business cycle purists – will say yes.)
Are there reasons a higher minimum wage will not have the textbook effect? Of course. ... But microeconomists are kidding themselves if they think this plethora of plausible explanations makes their branch of economics any more scientific or respectable than standard macroeconomics. ...

[There's quite a bit more in the original.]

Saturday, January 25, 2014

'Is Macro Giving Economics a Bad Rap?'

Chris House defends macro:

Is Macro Giving Economics a Bad Rap?: Noah Smith really has it in for macroeconomists. He has recently written an article in The Week in which he claims that macro is one of the weaker fields in economics...

I think the opposite is true. Macro is one of the stronger fields, if not the strongest ... Macro is quite productive and overall quite healthy. There are several distinguishing features of macroeconomics which set it apart from many other areas in economics. In my assessment, along most of these dimensions, macro comes out looking quite good.

First, macroeconomists are constantly comparing models to data. ... Holding theories up to the data is a scary and humiliating step but it is a necessary step if economic science is to make progress. Judged on this basis, macro is to be commended...

Second, in macroeconomics, there is a constant push to quantify theories. That is, there is always an effort to attach meaningful parameter values to the models. You can have any theory you want but at the end of the day, you are interested not only in the idea itself, but also in the magnitude of the effects. This is again one of the ways in which macro is quite unlike other fields.

Third, when the models fail (and they always fail eventually), the response of macroeconomists isn’t to simply abandon the model, but rather they highlight the nature of the failure.  ...

Lastly, unlike many other fields, macroeconomists need to have a wide array of skills and familiarity with many sub-fields of economics. As a group, macroeconomists have knowledge of a wide range of analytical techniques, probably better knowledge of history, and greater familiarity and appreciation of economic institutions than the average economist.

In his opening remarks, Noah concedes that macro is “the glamor division of econ”. He’s right. What he doesn’t tell you is that the glamour division is actually doing pretty well. ...

Saturday, January 18, 2014

'Paul Krugman & the Nature of Economics'

Chris Dillow:

Paul Krugman & the nature of economics: Paul Krugman is being accused of hypocrisy for calling for an extension of unemployment benefits when one of his textbooks says "Generous unemployment benefits can increase both structural and frictional unemployment." I think he can be rescued from this charge, if we recognize that economics is not like (some conceptions of) the natural sciences, in that its theories are not universally applicable but rather of only local and temporal validity.
What I mean is that "textbook Krugman" is right in normal times when aggregate demand is highish. In such circumstances, giving people an incentive to find work through lower unemployment benefits can reduce frictional unemployment (the coexistence of vacancies and joblessness) and so increase output and reduce inflation.
But these might well not be normal times. It could well be that demand for labour is unusually weak; low wage inflation and employment-population ratios suggest as much. In this world, the priority is not so much to reduce frictional unemployment as to reduce "Keynesian unemployment". And increased unemployment benefits - insofar as they are a fiscal expansion - might do this. When "columnist Krugman" says that "enhanced [unemployment insurance] actually creates jobs when the economy is depressed", the emphasis must be upon the last five words.
Indeed, incentivizing people to find work when it is not (so much) available might be worse than pointless. Cutting unemployment benefits might incentivize people to turn to crime rather than legitimate work.
So, it could be that "columnist Krugman" and "textbook Krugman" are both right, but they are describing different states of the world - and different facts require different models...

Thursday, January 02, 2014

'Tribalism, Biology, and Macroeconomics'

Paul Krugman:

Tribalism, Biology, and Macroeconomics: ...Pew has a new report about changing views on evolution. The big takeaway is that a plurality of self-identified Republicans now believe that no evolution whatsoever has taken place since the day of creation... The move is big: an 11-point decline since 2009. ... Democrats are slightly more likely to believe in evolution than they were four years ago.
So what happened after 2009 that might be driving Republican views? The answer is obvious, of course: the election of a Democratic president.
Wait — is the theory of evolution somehow related to Obama administration policy? Not that I’m aware of... The point, instead, is that Republicans are being driven to identify in all ways with their tribe — and the tribal belief system is dominated by anti-science fundamentalists. For some time now it has been impossible to be a good Republican while believing in the reality of climate change; now it’s impossible to be a good Republican while believing in evolution.
And of course the same thing is happening in economics. As recently as 2004, the Economic Report of the President (pdf) of a Republican administration could espouse a strongly Keynesian view..., the report — presumably written by Greg Mankiw — used the “s-word”, calling for “short-term stimulus”.
Given that intellectual framework, the reemergence of a 30s-type economic situation ... should have made many Republicans more Keynesian than before. Instead, at just the moment that demand-side economics became obviously critical, we saw Republicans — the rank and file, of course, but economists as well — declare their fealty to various forms of supply-side economics, whether Austrian or Lafferian or both. ...
And look, this has to be about tribalism. All the evidence ... has pointed in a Keynesian direction; but Keynes-hatred (and hatred of other economists whose names begin with K) has become a tribal marker, part of what you have to say to be a good Republican.

Before the Great Recession, macroeconomists seemed to be converging to a single intellectual framework. In Olivier Blanchard's famous words:

after the explosion (in both the positive and negative meaning of the word) of the field in the 1970s, there has been enormous progress and substantial convergence. For a while - too long a while - the field looked like a battlefield. Researchers split in different directions, mostly ignoring each other, or else engaging in bitter fights and controversies. Over time however, largely because facts have a way of not going away, a largely shared vision both of fluctuations and of methodology has emerged. Not everything is fine. Like all revolutions, this one has come with the destruction of some knowledge, and suffers from extremism, herding, and fashion. But none of this is deadly. The state of macro is good.

The recession revealed that the "extremism, herding, and fashion" is much worse than many of us realized, and the rifts that have reemerged are as strong as ever. What it didn't reveal is how to move beyond this problem. I thought evidence would matter more than it does, but somehow we seem to have lost the ability to distinguish between competing theoretical structures based upon econometric evidence (if we ever had it). The state of macro is not good, and the path to improvement is hard to see, but it must involve a shared agreement over the evidence-based means through which the profession on both sides of these debates can embrace or reject particular theoretical models.

Thursday, December 19, 2013

'More on the Illusion of Superiority'

Simon Wren-Lewis:

More on the illusion of superiority: For economists, and those interested in methodology. Tony Yates responds to my comment on his post on microfoundations, but really just restates the microfoundations purist position. (Others have joined in - see links below.) As Noah Smith confirms, this is the position that many macroeconomists believe in, and many are taught, so it’s really important to see why it is mistaken. There are three elements I want to focus on here: the Lucas critique, what we mean by theory, and time.
My argument can be put as follows: an ad hoc but data inspired modification to a microfounded model (what I call an eclectic model) can produce a better model than a fully microfounded model. Tony responds “If the objective is to describe the data better, perhaps also to forecast the data better, then what is wrong with this is that you can do better still, and estimate a VAR.” This idea of “describing the data better”, or forecasting, is a distraction, so let’s say I want a model that provides a better guide for policy actions. So I do not want to estimate a VAR. My argument still stands.
But what about the Lucas critique? ...[continue]...

[In Maui, will post as I can...]

Tuesday, December 17, 2013

'Four Missing Ingredients in Macroeconomic Models'

Antonio Fatas:

Four missing ingredients in macroeconomic models: It is refreshing to see top academics questioning some of the assumptions that economists have been using in their models. Krugman, Brad DeLong and many others are opening a methodological debate about what constitutes an acceptable economic model and how to validate its predictions. The role of micro foundations, the existence of a natural state towards which the economy gravitates,... are all very interesting debates that tend to be ignored (or assumed away) in academic research.

I would like to go further and add a few items to their list... In random order:

1. The business cycle is not symmetric. ... Interestingly, it was Milton Friedman who put forward the "plucking" model of business cycles as an alternative to the notion that fluctuations are symmetric. In Friedman's model output can only be at or below potential, its maximum. If we were to rely on asymmetric models of the business cycle, our views on potential output and the natural rate of unemployment would be radically different. We would not be rewriting history to claim that in 2007 GDP was above potential in most OECD economies and we would not be arguing that the natural unemployment rate in Southern Europe is very close to its actual rate.
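The asymmetry in Friedman's plucking model is easy to see in a toy simulation (my sketch, not from Fatas's post; the function name and parameter values are invented for illustration): output sits at a ceiling and is occasionally "plucked" below it by a negative shock, then gradually recovers toward the ceiling, so output can fall below potential but never rise above it.

```python
import random

# Toy version of Friedman's "plucking" model (my sketch, not from the post):
# output sits at a potential/ceiling level and is occasionally plucked
# below it by a negative shock, then recovers back toward the ceiling.

def plucking_path(T=200, pluck_prob=0.05, pluck_size=0.08,
                  recovery=0.3, seed=1):
    rng = random.Random(seed)
    potential = 1.0
    gap = 0.0          # output gap; stays <= 0 in this model
    path = []
    for _ in range(T):
        if rng.random() < pluck_prob:
            gap -= pluck_size * rng.random()   # occasional downward pluck
        gap *= (1 - recovery)                  # gradual recovery to the ceiling
        path.append(potential + gap)
    return path

path = plucking_path()
print(max(path) <= 1.0)  # True: output never exceeds potential
```

A ceiling model like this is what makes the counterfactual in the text bite: if output can never be above potential, then 2007 GDP could not have been "above potential" by construction.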

2. ...most academic research is produced around models where small and frequent shocks drive economic fluctuations, as opposed to large and infrequent events. The disconnect comes probably from the fact that it is so much easier to write models with small and frequent shocks than having to define a (stochastic?) process for large events. It gets even worse if one thinks that recessions are caused by the dynamics generated during expansions. Most economic models rely on unexpected events to generate crises, and not on the internal dynamics that precede the crisis.

[A little bit of self-promotion: my paper with Ilian Mihov on the shape and length of recoveries presents some evidence in favor of these two hypotheses.]

3. There has to be more than price rigidity. ...

4. The notion that co-ordination across economic agents matters to explain the dynamics of business cycles receives very limited attention in academic research. ...

I am aware that there are plenty of papers that deal with these four issues, some of them published in the best academic journals. But most of these papers are not mainstream. Most economists are sympathetic to these assumptions but avoid writing papers using them because they are afraid they will be told that their assumptions are ad hoc and that the model does not have enough micro foundations (for the best criticism of this argument, read the latest post of Simon Wren-Lewis). Time for a change?

On the plucking model, see here and here.

Friday, December 13, 2013

Sticky Ideology

Paul Krugman:

Rudi Dornbusch and the Salvation of International Macroeconomics (Wonkish): ...Ken Rogoff had a very good paper on all this, in which he also says something about the state of affairs within the economics profession at the time:

The Chicago-Minnesota School maintained that sticky prices were nonsense and continued to advance this view for at least another fifteen years. It was the dominant view in academic macroeconomics. Certainly, there was a long period in which the assumption of sticky prices was a recipe for instant rejection at many leading journals. Despite the religious conviction among macroeconomic theorists that prices cannot be sticky, the Dornbusch model remained compelling to most practical international macroeconomists. This divergence of views led to a long rift between macroeconomics and much of mainstream international finance …

There are more than a few of us in my generation of international economists who still bear the scars of not being able to publish sticky-price papers during the years of new neoclassical repression.

Notice that this isn’t the evil Krugman talking; it’s the respectable Rogoff. Yet he too is in effect describing neoclassical macro as a sort of cult, actively suppressing alternative approaches. What he gets wrong is in the part I’ve elided with my “…”, in which he asserts that this is all behind us. As we saw when crisis struck, Chicago/Minnesota had in fact learned nothing and was pretty much unaware of the whole New Keynesian enterprise — and from what I hear about macro hiring, the suppression of ideas at odds with the cult remains in full force. ...

Wednesday, December 04, 2013

'Microfoundations': I Do Not Think That Word Means What You Think It Means

Brad DeLong responds to my column on macroeconomic models:

“Microfoundations”: I Do Not Think That Word Means What You Think It Means

The basic point is this:

...New Keynesian models with more or less arbitrary micro foundations are useful for rebutting claims that all is for the best macroeconomically in this best of all possible macroeconomic worlds. But models with micro foundations are not of use in understanding the real economy unless you have the micro foundations right. And if you have the micro foundations wrong, all you have done is impose restrictions on yourself that prevent you from accurately fitting reality.
Thus your standard New Keynesian model will use Calvo pricing and model the current inflation rate as tightly coupled to the present value of expected future output gaps. Is this a requirement anyone really wants to put on the model intended to help us understand the world that actually exists out there? ...
After all, Ptolemy had microfoundations: Mercury moved more rapidly than Saturn because the Angel of Mercury beat his wings more rapidly than the Angel of Saturn and because Mercury was lighter than Saturn…

Tuesday, December 03, 2013

One Model to Rule Them All?

Latest column:

Is There One Model to Rule Them All?: The recent shake-up at the research department of the Federal Reserve Bank of Minneapolis has rekindled a discussion about the best macroeconomic model to use as a guide for policymakers. Should we use modern New Keynesian models that performed so poorly prior to and during the Great Recession? Should we return to a modernized version of the IS-LM model that was built to explain the Great Depression and answer the questions we are confronting today? Or do we need a brand new class of models altogether? ...

Sunday, December 01, 2013

God Didn’t Make Little Green Arrows

Paul Krugman notes work by my colleague George Evans relating to the recent debate over the stability of GE models:

God Didn’t Make Little Green Arrows: Actually, they’re little blue arrows here. In any case George Evans reminds me of a paper (pdf) he and co-authors published in 2008 about stability and the liquidity trap, which he later used to explain what was wrong with the Kocherlakota notion (now discarded, but still apparently defended by Williamson) that low rates cause deflation.

The issue is the stability of the deflation steady state ("on the importance of little arrows"). This is precisely the issue George studied in his 2008 European Economic Review paper with E. Guse and S. Honkapohja. The following figure from that paper has the relevant little arrows:

[Figure: phase diagram for inflation and consumption expectations under adaptive learning, from Evans, Guse, and Honkapohja (2008)]

This is the 2-dimensional figure showing the phase diagram for inflation and consumption expectations under adaptive learning (in the New Keynesian model both consumption, or output, expectations and inflation expectations are central). The intended steady state (marked by a star) is locally stable under learning, but the deflation steady state (given by the other intersection of black curves) is not locally stable, and there are nearby divergent paths with falling inflation and falling output. There is also a two-page summary in George's 2009 Annual Review of Economics paper.

The relevant policy issue came up in 2010 in connection with Kocherlakota's comments about interest rates, and I got George to make a video in Sept. 2010 that makes the implied monetary policy point.

I think it would be a step forward if the EER paper helped Williamson and others who have not understood the disequilibrium stability point. The full EER reference is Evans, George; Guse, Eran; and Honkapohja, Seppo, "Liquidity Traps, Learning and Stagnation," European Economic Review, Vol. 52, 2008, pp. 1438–1463.
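The stability logic of those little arrows can be illustrated with a one-dimensional learning recursion. This is a stylized stand-in, not the two-dimensional Evans-Guse-Honkapohja system; the map T and every parameter below are assumptions chosen only to reproduce the qualitative picture: an intended steady state that is locally stable under learning, and a deflation steady state that is unstable, with divergent deflationary paths below it.

```python
# Minimal 1-D sketch of stability under adaptive learning. T maps inflation
# expectations into realized inflation; its quadratic form and all parameter
# values are illustrative assumptions, not the EER paper's model.
PI_STAR, PI_LOW = 2.0, -1.0   # intended and deflation steady states (percent)
K, GAIN = 0.1, 0.5            # curvature of T; adaptive-learning gain

def T(pi_e):
    """Assumed temporary-equilibrium map: realized inflation given expectations.
    Fixed points at PI_LOW (slope > 1, unstable) and PI_STAR (slope < 1, stable)."""
    return pi_e - K * (pi_e - PI_LOW) * (pi_e - PI_STAR)

def learn(pi_e0, steps=300):
    """Adaptive learning: expectations are revised toward realized inflation."""
    pi_e = pi_e0
    for _ in range(steps):
        pi_e += GAIN * (T(pi_e) - pi_e)
        if pi_e < -10:          # a deflationary spiral has clearly taken hold
            break
    return pi_e

print(round(learn(0.0), 3))   # start between the steady states: converges to 2.0
print(learn(-1.5) < -10)      # start below the deflation steady state: True
```

Starting between the two steady states, expectations climb back to the intended steady state; starting below the deflation steady state, they spiral downward, which is the policy danger the little arrows encode.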

Tuesday, November 05, 2013

Do People Have Rational Expectations?

New column:

Do People Have Rational Expectations?, by Mark Thoma

Not always, and economic models need to take this into account.

Saturday, October 12, 2013

'Nominal Wage Rigidity in Macro: An Example of Methodological Failure'

Simon Wren-Lewis:

Nominal wage rigidity in macro: an example of methodological failure: This post develops a point made by Bryan Caplan (HT MT). I have two stock complaints about the dominance of the microfoundations approach in macro. Neither implies that the microfoundations approach is ‘fundamentally flawed’ or should be abandoned: I still learn useful things from building DSGE models. My first complaint is that too many economists follow what I call the microfoundations purist position: if it cannot be microfounded, it should not be in your model. Perhaps a better way of putting it is that they only model what they can microfound, not what they see. This corresponds to a standard method of rejecting an innovative macro paper: the innovation is ‘ad hoc’.

My second complaint is that the microfoundations used by macroeconomists are so out of date. Behavioural economics just does not get a look in. A good and very important example comes from the reluctance of firms to cut nominal wages. There is overwhelming empirical evidence for this phenomenon (see for example here (HT Timothy Taylor) or the work of Jennifer Smith at Warwick). The behavioral reasons for this are explored in detail in this book by Truman Bewley, which Bryan Caplan discusses here. Both money illusion and the importance of workforce morale are now well accepted ideas in behavioral economics.

Yet debates among macroeconomists about whether and why wages are sticky go on. ...

While we can debate why this is at the level of general methodology, the importance of this particular example to current policy is huge. Many have argued that the failure of inflation to fall further in the recession is evidence that the output gap is not that large. As Paul Krugman in particular has repeatedly suggested, the reluctance of workers or firms to cut nominal wages may mean that inflation could be much more sticky at very low levels, so the current behavior of inflation is not inconsistent with a large output gap. ... Yet this is hardly a new discovery, so why is macro having to rediscover these basic empirical truths? ...

He goes on to give an example of why this matters (failure to incorporate downward nominal wage rigidity caused policymakers to underestimate the size of the output gap by a large margin, and that led to a suboptimal policy response).
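The mechanism Wren-Lewis describes, in which blocked nominal wage cuts keep measured inflation from falling even with a large output gap, shows up clearly in a toy simulation. The distribution of desired wage changes and all of its parameters below are assumptions for illustration only:

```python
# Toy illustration of downward nominal wage rigidity: in a slump, firms'
# *desired* wage changes average below zero, but cuts are blocked at zero,
# so *actual* average wage growth stays positive. Assumed distribution:
# desired changes ~ Normal(-1%, 3%).
import random
import statistics

random.seed(0)
desired = [random.gauss(-1.0, 3.0) for _ in range(100_000)]  # percent, slump
actual = [max(0.0, d) for d in desired]                      # no nominal cuts

print(round(statistics.mean(desired), 2))   # close to the assumed -1.0 mean
print(statistics.mean(actual) > 0)          # True: wage inflation stays positive
```

So the same negative output gap that pushes desired wage changes below zero produces positive measured wage inflation, which is why stable low inflation is not evidence of a small output gap.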

Time for me to catch a plane ...

Tuesday, August 27, 2013

The Real Trouble With Economics: Sociology

Paul Krugman:

The Real Trouble With Economics: I’m a bit behind the curve in commenting on the Rosenberg-Curtain piece on economics as a non-science. What do I think of their thesis?

Well, I’m sorry to say that they’ve gotten it almost all wrong. Only “almost”: they’re entirely right that economics isn’t behaving like a science, and economists – macroeconomists, anyway – definitely aren’t behaving like scientists. But they misunderstand the nature of the failure, and for that matter the nature of such successes as we’re having....

It’s true that few economists predicted the onset of crisis. Once crisis struck, however, basic macroeconomic models did a very good job in key respects — in particular, they did much better than people who relied on their intuitive feelings. The intuitionists — remember, Alan Greenspan was supposed to be famously able to sense the economy’s pulse — insisted that budget deficits would send interest rates soaring, that the expansion of the Fed’s balance sheet would be inflationary, that fiscal austerity would strengthen economies through “confidence”. Meanwhile, wonks who relied on suitably interpreted IS-LM confidently declared that all this intuition, based on experiences in a different environment, would prove wrong — and they were right. From my point of view, these past 5 years have been a triumph for and vindication of economic modeling.

Oh, and it would be a real tragedy if the takeaway from recent events becomes that you should listen to impressive-looking guys with good tailors who stroke their chins and sound wise, and ignore the nerds; the nerds have been mostly right, while the Very Serious People have been wrong every step of the way.

Yet obviously something is deeply wrong with economics. While economists using textbook macro models got things mostly and impressively right, many famous economists refused to use those models — in fact, they made it clear in discussion that they didn’t understand points that had been worked out generations ago. Moreover, it’s hard to find any economists who changed their minds when their predictions, say of sharply higher inflation, turned out wrong. ...

So, let’s grant that economics as practiced doesn’t look like a science. But that’s not because the subject is inherently unsuited to the scientific method. Sure, it’s highly imperfect — it’s a complex area, and our understanding is in its early stages. And sure, the economy itself changes over time, so that what was true 75 years ago may not be true today — although what really impresses you if you study macro, in particular, is the continuity, so that Bagehot and Wicksell and Irving Fisher and, of course, Keynes remain quite relevant today.

No, the problem lies not so much in the inherent unsuitability of economics for scientific thinking as in the sociology of the economics profession — a profession that somehow, at least in macro, has ceased rewarding research that produces successful predictions and instead rewards research that fits preconceptions and uses hard math.

Why has the sociology of economics gone so wrong? I’m not completely sure — and I’ll reserve my random thoughts for another occasion.

I talked about the problem with the sociology of economics a while back -- this is from a post in August 2009:

In The Economist, Robert Lucas responds to recent criticism of macroeconomics ("In Defense of the Dismal Science"). Here's my entry at Free Exchange's Robert Lucas Roundtable in response to his essay:

Lucas roundtable: Ask the right questions, by Mark Thoma: In his essay, Robert Lucas defends macroeconomics against the charge that it is "valueless, even harmful", and that the tools economists use are "spectacularly useless".

I agree that the analytical tools economists use are not the problem. We cannot fully understand how the economy works without employing models of some sort, and we cannot build coherent models without using analytic tools such as mathematics. Some of these tools are very complex, but there is nothing wrong with sophistication so long as sophistication itself does not become the main goal, and sophistication is not used as a barrier to entry into the theorist's club rather than an analytical device to understand the world.

But all the tools in the world are useless if we lack the imagination needed to build the right models. Models are built to answer specific questions. When a theorist builds a model, it is an attempt to highlight the features of the world the theorist believes are the most important for the question at hand. For example, a map is a model of the real world, and sometimes I want a road map to help me find my way to my destination, but other times I might need a map showing crop production, or a map showing underground pipes and electrical lines. It all depends on the question I want to answer. If we try to make one map that answers every possible question we could ever ask of maps, it would be so cluttered with detail it would be useless, so we necessarily abstract from real world detail in order to highlight the essential elements needed to answer the question we have posed. The same is true for macroeconomic models.

But we have to ask the right questions before we can build the right models.

The problem wasn't the tools that macroeconomists use, it was the questions that we asked. The major debates in macroeconomics had nothing to do with the possibility of bubbles causing a financial system meltdown. That's not to say that there weren't models here and there that touched upon these questions, but the main focus of macroeconomic research was elsewhere. ...

The interesting question to me, then, is why we failed to ask the right questions. For example,... why policymakers didn't take the possibility of a major meltdown seriously. Why didn't they deliver forecasts conditional on a crisis occurring? Why didn't they ask this question of the model? Why did we only get forecasts conditional on no crisis? And also, why was the main factor that allowed the crisis to spread, the interconnectedness of financial markets, missed?

It was because policymakers couldn't and didn't take seriously the possibility that a crisis and meltdown could occur. And even if they had seriously considered the possibility of a meltdown, the models most people were using were not built to be informative on this question. It simply wasn't a question that was taken seriously by the mainstream.

Why did we, for the most part, fail to ask the right questions? Was it lack of imagination, was it the sociology within the profession, the concentration of power over what research gets highlighted, the inadequacy of the tools we brought to the problem, the fact that nobody will ever be able to predict these types of events, or something else?

It wasn't the tools, and it wasn't lack of imagination. As Brad DeLong points out, the voices were there—he points to Michael Mussa for one—but those voices were not heard. Nobody listened even though some people did see it coming. So I am more inclined to cite the sociology within the profession or the concentration of power as the main factors that caused us to dismiss these voices.

And here I think that thought leaders such as Robert Lucas and others who openly ridiculed models they disagreed with have questions they should ask themselves (e.g. Mr Lucas saying "At research seminars, people don’t take Keynesian theorizing seriously anymore; the audience starts to whisper and giggle to one another", or more recently "These are kind of schlock economics"). When someone as notable and respected as Robert Lucas makes fun of an entire line of inquiry, it influences whole generations of economists away from asking certain types of questions, some of which turned out to be important. Why was it necessary for the major leaders in macroeconomics to shut down alternative lines of inquiry through ridicule and other means rather than simply citing evidence in support of their positions? What were they afraid of? The goal is to find the truth, not win fame and fortune by dominating the debate.

We need to take a close look at how the sociology of our profession led to an outcome where people were made to feel embarrassed for even asking certain types of questions. People will always be passionate in defense of their life's work, so it's not the rhetoric itself that is of concern; the problem comes when factors such as ideology, or control of journals and other outlets for the dissemination of research, stand in the way of promising alternative lines of inquiry.

I don't know for sure the extent to which the ability of a small number of people in the field to control the academic discourse led to a concentration of power that stood in the way of alternative lines of investigation, or the extent to which the ideology that market prices always tend to move toward their long-run equilibrium values caused us to ignore voices that foresaw the developing bubble and coming crisis. But something caused most of us to ask the wrong questions, and to dismiss the people who got it right, and I think one of our first orders of business is to understand how and why that happened.

I think the structure of journals, which concentrates power within the profession, also influences the sociology of the profession (and not in a good way).

Wednesday, August 14, 2013

'Never Channel the Ghosts of Dead Economists as a Substitute for Analysis'

 Nick Rowe checks in with David Laidler:

David Laidler goes meta on "What would Milton have said?": I tried to persuade David Laidler to join us in the econoblogosphere, especially given recent arguments about Milton Friedman. I have not yet succeeded, but David did say I could use these two paragraphs from his email:

However - re. the "what Milton would have said" debate  - When I was just getting started in the UK, I got thoroughly fed up with being told "what Maynard [Keynes] would have said" -- always apparently that the arguments of people like me were nonsense and therefore didn't have to be addressed in substance. I took a vow then never to channel the ghosts of dead economists as a substitute for analysis, and still regard it as binding!
MF was a big supporter of QE for Japan at the end of the '90s. I know that, because one of his clearest expressions of the view was in response to a question I put to him on a video link at a BofC conference. But so was Allan Meltzer at that time, and he is now (a) virulently opposed to QE for the US and (b) on the record (New York Times, Nov. 4, 2010, "Milton Friedman vs. the Fed") as being sure that Milton would have agreed with him. In my personal view (a) demonstrates that even as wise an economist as Meltzer can sometimes give dangerous policy advice, and (b) shows that he knows how to deploy pure speculation to make a rhetorical splash when he does so. Who could possibly know what Milton would have said? He isn't here.

David Laidler is probably the person best qualified to answer the question "What would Milton have said?", and that's his answer.

Speaking of Meltzer and substitutes for analysis, his last op-ed warns yet again about inflation. Mike Konczal responds:

Denialism and Bad Faith in Policy Arguments, by Mike Konczal: Here’s the thing about Allan Meltzer: he knows. Or at least he should know. It’s tough to remember that he knows when he writes editorials like his latest, "When Inflation Doves Cry." This is a mess of an editorial, a confused argument about why huge inflation is around the corner. “Instead of continuing along this futile path, the Fed should end its open-ended QE3 now... Those who believe that inflation will remain low should look more thoroughly and think more clearly.”
But he knows. Because here’s Meltzer in 1999 with "A Policy for Japanese Recovery": “Monetary expansion and devaluation is a much better solution. An announcement by the Bank of Japan and the government that the aim of policy is to prevent deflation and restore growth by providing enough money to raise asset prices would change beliefs and anticipations.”
He knows that there’s an actual debate, with people who are “thinking clearly,” about monetary policy at the zero lower bound as a result of Japan. He participated in it. So he must have been aware of Ben Bernanke, Paul Krugman, Milton Friedman, Michael Woodford, and Lars Svensson all also debating it at the same time. But now he’s forgotten it. In fact, his arguments for Japan are the exact opposite of what they are now for the United States. ...
The problem here isn’t that Meltzer may have changed his mind on his advice for Japan. If that’s the case, I’d love to read about what led to that change. The problem is one of denialism, where the person refuses to acknowledge the actually existing debate, and instead pantomimes a debate with a shadow. It involves the idea of a straw man, but sometimes it’s simply not engaging at all. For Meltzer, the extensive debate about monetary policy at the zero lower bound is simply excised from the conversation, and people who only read him will have no clue that it was ever there.
There’s also another dimension that I think is even more important, which is whether or not the argument, conclusions, or suggestions are in good faith. ...

Tuesday, August 13, 2013

Friedman's Legacy: The New Monetarist's View

I guess we should give the New Monetarists a chance to weigh in on Milton Friedman's legacy and influence (their name -- New Monetarists -- should give you some idea where this is headed...I cut the specific arguments short, but they can be found at the original post):

Friedman's Legacy, by Stephen Williamson, New Monetarist Economics: I'm not sure why, but there has been a lot of blogosphere writing on Milton Friedman recently... Randy Wright once convinced me that we should call ourselves New Monetarists, and we wrote a couple of papers (this one, and this one) in which we try to get a grip on what that means. As New Monetarists, we think we have something to say about Friedman.

We can find plenty of faults in Friedman's ideas, but those ideas - reflected in Friedman's theoretical and empirical work - are deeply embedded in much of what we do as economists in the 21st century. By modern standards, Friedman was a crude economic theorist, but he used the simple tools he had available to develop deep ideas that were later fleshed out in fully-articulated economic models. His empirical work was highly influential and serves as a key reference point for some sub-fields in economics. Some examples:

1. Permanent Income Theory...

2. The Friedman rule: Don't confuse this with the constant money growth rule, which comes from "The Role of Monetary Policy." The "Friedman rule" is the policy rule in the "Optimum Quantity of Money" essay. Basically, the nominal interest rate reflects a distortion. Eliminating that distortion requires reducing the nominal interest rate to zero in all states of the world, and that's what monetary policy should be aimed at doing... We can think of plenty of good reasons why optimal monetary policy could take us away from the Friedman rule in practice, but whenever someone makes an argument for some monetary policy rule, we have to first ask the question: why isn't that rule the Friedman rule? The Friedman rule is fundamental in monetary theory.

3. Monetary history: Friedman and Schwartz's "Monetary History of the United States" was monumental. ...

4. Policy rules: The rule that Friedman wanted central banks to follow was not the Friedman rule, but a constant-money-growth rule... Friedman was successful in getting the rule adopted by central banks in the 1970s and 1980s, but the rule was a practical failure, for reasons that are well-understood. But Friedman got macroeconomists and policymakers thinking about policy rules and how they work. Out of that thinking came ideas about central bank commitment, Taylor rules, inflation targeting, nominal GDP targeting, thresholds, etc., that form the basis for modern analysis of central bank policy.

5. Money and Inflation: ... Friedman played a key role in convincing economists and policymakers that central banks could, and should, control inflation. That seems as natural today as saying that rain falls from the sky, and that's part of Friedman's influence.

6. Narrow banking: I tend to think this was one of Friedman's bad ideas, but it's been very influential. Friedman advocated a 100% reserve requirement in "A Program for Monetary Stability." ...

7. Counterpoint to Keynesian economics: Some people seem to think that Friedman was actually a Keynesian at heart, but he sure got on Tobin's nerves. Criticism is important - it helps to prevent and root out lazy science. Old Keynesian economics was probably much better - e.g. there would have been no "neoclassical synthesis" - because of Friedman.

If anyone wants to argue that Friedman is now unimportant for modern economics, that's like saying Bob Dylan is unimportant for modern music. Today, Bob Dylan is quite willing to climb on a stage and perform with a world-class group of musicians - but it's truly pathetic. Nevertheless, Bob Dylan doesn't get booed off the stage today, because people recognize his importance. In the 1960s, he got people riled up, everyone paid attention, and the world is much different today than it would have been if he had not done the work he did.
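For reference, the two rules in Williamson's points 2 and 4 can be written compactly. The Friedman rule sets the net nominal rate to zero, which by the Fisher relation implies steady deflation at the real rate of interest; the Taylor rule is shown with its conventional illustrative coefficients from Taylor's 1993 formulation:

```latex
\begin{align*}
&\text{Friedman rule: } i_t = 0
  \;\Longrightarrow\; \pi = -r
  \quad\text{(via the Fisher relation } i = r + \pi\text{)}\\
&\text{Taylor (1993) rule: } i_t = r^{*} + \pi_t + 0.5\,(\pi_t - \pi^{*}) + 0.5\,y_t
\end{align*}
```

Here $r^{*}$ is the equilibrium real rate, $\pi^{*}$ the inflation target, and $y_t$ the output gap; Friedman's own constant-money-growth rule, by contrast, targets the growth rate of a monetary aggregate rather than an interest rate.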

Wednesday, August 07, 2013

(1) Numerical Methods, (2) Pigou, Samuelson, Solow, & Friedman vs Keynes and Krugman

Robert Waldmann:

...Another thing, what about numerical methods? Macro was totally taken over by computer simulations. This liberated it (so that anything could happen) but also ruined the fun. When computers were new and scary, simulation-based macro was scary and high status. When everyone can do it, setting up a model and simulating just doesn't demonstrate brains as effectively as finding one of the two or three special cases with closed form solutions and then presenting them. Also simulating unrealistic models is really pointless. People end up staring at the computer output and trying to think up stories which explain what went on in the computer. If one is reduced to that, one might as well look at real data. Models which can't be solved don't clarify thought. Since they also don't fit the data, they are really truly madly useless.

And one more:

Pigou, Samuelson, Solow, & Friedman vs Keynes and Krugman: Thoma Bait
I might as well be honest. I am posting this here rather than at rjwaldmann.blogspot.com, because I think it is the sort of thing to which Mark Thoma links and my standing among the bears is based entirely on the fact that Thoma occasionally links to me.
I think that Pigou, Samuelson, Solow and Friedman all assumed that the marginal propensity to consume out of wealth must, on average, be higher for nominal creditors than for nominal debtors. I think this is a gross error which shows how the representative consumer (invented by Samuelson) had done devastating damage already by 1960.
The topic is the Pigou effect versus the liquidity trap. ...

Guess I should send you there to read it.
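Waldmann's claim about marginal propensities can be put in two lines of arithmetic: if nominal debtors have the higher MPC out of wealth, then the wealth transfer caused by a falling price level lowers aggregate consumption, the Fisher debt-deflation channel swamping the Pigou effect. All numbers below are illustrative assumptions:

```python
# Deflation transfers real wealth from nominal debtors to nominal creditors.
# Aggregate spending falls whenever debtors' MPC out of wealth exceeds
# creditors'. The magnitudes here are made up for illustration.
transfer = 100.0      # real wealth shifted to creditors by a price-level fall
mpc_creditors = 0.03  # assumed MPC out of wealth for creditors
mpc_debtors = 0.07    # assumed (higher) MPC out of wealth for debtors

delta_consumption = transfer * mpc_creditors - transfer * mpc_debtors
print(round(delta_consumption, 2))  # -4.0: spending falls despite the Pigou effect
```

Reverse the assumed ranking of the two MPCs, as Waldmann says Pigou, Samuelson, Solow, and Friedman implicitly did, and the same arithmetic delivers the expansionary Pigou effect instead.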

Saturday, June 29, 2013

'DSGE Models and Their Use in Monetary Policy'

Mike Dotsey at the Philadelphia Fed:

DSGE Models and Their Use in Monetary Policy: The past 10 years or so have witnessed the development of a new class of models that are proving useful for monetary policy: dynamic stochastic general equilibrium (DSGE) models. The pioneering central bank, in terms of using these models in the formulation of monetary policy, is the Sveriges Riksbank, the central bank of Sweden. Following in the Riksbank’s footsteps, a number of other central banks have incorporated DSGE models into the monetary policy process, among them the European Central Bank, the Norges Bank (the Norwegian central bank), and the Federal Reserve.
This article will discuss the major features of DSGE models and why these models are useful to monetary policymakers. It will indicate the general way in which they are used in conjunction with other tools commonly employed by monetary policymakers. ...

Saturday, June 22, 2013

'Debased Economics'

I need a quick post today, so I'll turn to the most natural blogger I can think of, Paul Krugman:

Debased Economics: John Boehner’s remarks on recent financial events have attracted a lot of unfavorable comment, and they should. ... I mean, he’s the Speaker of the House at a time when economic issues are paramount; shouldn’t he have basic familiarity with simple economic terms?
But the main thing is that he’s clinging to a story about monetary policy that has been refuted by experience about as thoroughly as any economic doctrine of the past century. Ever since the Fed began trying to respond to the financial crisis, we’ve had dire warnings about looming inflationary disaster. When the GOP took the House, it promptly called Bernanke in to lecture him about debasing the dollar. Yet inflation has stayed low, and the dollar has remained strong — just as Keynesians said would happen.
Yet there hasn’t been a hint of rethinking from leading Republicans; as far as anyone can tell, they still get their monetary ideas from Atlas Shrugged.
Oh, and this is another reminder to the “market monetarists”, who think that they can be good conservatives while advocating aggressive monetary expansion to fight a depressed economy: sorry, but you have no political home. In fact, not only aren’t you making any headway with the politicians, even mainstream conservative economists like Taylor and Feldstein are finding ways to advocate tighter money despite low inflation and high unemployment. And if reality hasn’t dented this dingbat orthodoxy yet, it never will.

I'll be offline the rest of today ...

Sunday, June 02, 2013

The Aggregate Supply—Aggregate Demand Model in the Econ Blogosphere

Peter Dorman would like to know if he's wrong:

Why You Don’t See the Aggregate Supply—Aggregate Demand Model in the Econ Blogosphere: Introductory textbooks are supposed to give you simplified versions of the models that professionals use in their own work. The blogosphere is a realm where people from a range of backgrounds discuss current issues often using simplified concepts so everyone can be on the same page.
But while the dominant framework used in introductory macro textbooks is aggregate supply—aggregate demand (AS-AD), it is almost never mentioned in the econ blogs. My guess is that anyone who tried to make an argument about current macropolicy using an AS-AD diagram would just invite snickers. This is not true on the micro side, where it’s perfectly normal to make an argument with a standard issue, partial equilibrium supply and demand diagram. What’s going on here?
I’ve been writing the part of my textbook where I describe what happened in macro during the period from the mid 70s to the mid 00s, and part of the story is the rise of textbook AS-AD. Here’s the line I take:
The dominant macro model, now crystallized in DSGE, is much too complex for intro students. It is based on intertemporal optimization and general equilibrium theory. There is no possible way to explain it to students in their first exposure to economics. But the mainstream has rejected the old income-expenditure models that graced intro texts in the 1970s and were, in skeleton form, the basis for the forecasting models used back in those days. So what to do?
The solution has been to use AS-AD as a placeholder. It allows instructors to talk about both prices and quantities in a rough market context. By putting Y on one axis and P on another, you can locate any macroeconomic outcome in the upper-right quadrant. It gets students “thinking like economists”.
Unfortunately the model is unsound. If you dig into it you find contradictions that can’t be papered over. One example is that the AS curve depends on the idea that input prices for firms systematically lag output prices, but do you really want to argue the theoretical and empirical case for this? Or try the AD assumption that, even as the price level and real output in the economy go up or down, the money supply remains fixed.
That’s why AS-AD is simply a placeholder. It has no intrinsic value as an economic model. No one uses it for policy purposes. It can’t be found in the econ blogs. It’s not a stripped down version of DSGE. Its only role is to occupy student brain cells until the real work of macroeconomic instruction can begin in a more advanced course.
If I’m wrong I’d like to know before I cut off all lines of retreat.
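The fixed-money-supply assumption Dorman flags is exactly what gives the textbook AD curve its slope: with the money stock $M$ and velocity $V$ held constant in the quantity equation, the price level and output must trade off along a hyperbola.

```latex
M V = P\,Y \quad\Longrightarrow\quad P = \frac{M V}{Y}
```

Any rise in $Y$ must then be matched by a proportional fall in $P$; that mechanical trade-off is the downward slope, and it leans entirely on the fixed-$M$ assumption Dorman calls unsound.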

This won't fully answer the question (many DSGE adherents deny the existence of something called an AD curve), but here are a few counterexamples. One from today (here), and two from the past (via Tim Duy here and here).

Update: Paul Krugman comments here.

Wednesday, May 29, 2013

'DSGE + Financial Frictions = Macro that Works?'

This is a brief follow-up to this post from Noah Smith (see this post for the abstract to the Marco Del Negro, Marc P. Giannoni, and Frank Schorfheide paper he discusses):

DSGE + financial frictions = macro that works?: In my last post, I wrote:

So far, we don't seem to have gotten a heck of a lot of a return from the massive amount of intellectual capital that we have invested in making, exploring, and applying [DSGE] models. In principle, though, there's no reason why they can't be useful.

One of the areas I cited was forecasting. In addition to the studies I cited by Refet Gurkaynak, many people have criticized macro models for missing the big recession of 2008Q4-2009. For example, in this blog post, Volker Wieland and Maik Wolters demonstrate how DSGE models failed to forecast the big recession, even after the financial crisis itself had happened...

This would seem to be a problem.

But it's worth noting that, since the 2008 crisis, the macro profession does not seem to have dropped DSGE like a dirty dishrag. ... Why are they not abandoning DSGE? Many "sociological" explanations are possible, of course - herd behavior, sunk cost fallacy, hysteresis and heterogeneous human capital (i.e. DSGE may be all they know how to do), and so on. But there's also another possibility, which is that maybe DSGE models, augmented by financial frictions, really do have promise as a technology.

This is the position taken by Marco Del Negro, Marc P. Giannoni, and Frank Schorfheide of the New York Fed. In a 2013 working paper, they demonstrate that a certain DSGE model was able to forecast the big post-crisis recession.

The model they use is a combination of two existing models: 1) the famous and popular Smets-Wouters (2007) New Keynesian model that I discussed in my last post, and 2) the "financial accelerator" model of Bernanke, Gertler, and Gilchrist (1999). They find that this hybrid financial New Keynesian model is able to predict the recession pretty well as of 2008Q3! Check out these graphs (red lines are 2008Q3 forecasts, dotted black lines are real events):

I don't know about you, but to me that looks pretty darn good!
I don't want to downplay or pooh-pooh this result. I want to see this checked carefully, of course, with some tables that quantify the model's forecasting performance, including its long-term forecasting performance. I will need more convincing, as will the macroeconomics profession and the world at large. And forecasting is, of course, not the only purpose of macro models. But this does look really good, and I think it supports my statement that "in principle, there is no reason why [DSGEs] can't be useful." ...
However, I do have an observation to make. The Bernanke et al. (1999) financial-accelerator model has been around for quite a while. It was certainly around well before the 2008 crisis. And we had certainly had financial crises before, as had many other countries. Why was the Bernanke model not widely used to warn of the economic dangers of a financial crisis? Why was it not universally used for forecasting? Why are we only looking carefully at financial frictions after they blew a giant gaping hole in the world economy?
It seems to me that it must have to do with the scientific culture of macroeconomics. If macro as a whole had demanded good quantitative results from its models, then people would not have been satisfied with the pre-crisis finance-less New Keynesian models, or with the RBC models before them. They would have said "This approach might work, but it's not working yet, let's keep changing things to see what does work." Of course, some people said this, but apparently not enough.
Instead, my guess is that many people in the macro field were probably content to use DSGE models for storytelling purposes, and had little hope that the models could ever really forecast the actual economy. With low expectations, people didn't push to improve the existing models as hard as they might have. But that is just my guess; I wasn't really around.
So to people who want to throw DSGE in the dustbin of history, I say: You might want to rethink that. But to people who view the Del Negro paper as a vindication of modern macro theory, I say: Why didn't we do this back in 2007? And are we condemned to "always fight the last war"?

My take on why these models weren't used is a bit different.

My argument all along has been that we had the tools and models to explain what happened, but we didn't understand that this particular combination of models -- standard DSGE augmented by financial frictions -- was the important model to use. As I'll note below, part of the reason was empirical -- the evidence did matter (though it was not interpreted correctly) -- but the bigger problem was that our arrogance caused us to overlook the important questions.

There are many, many "modules" we can plug into a model to make it do various things. Need to propagate a shock, i.e. make it persist over time? Toss in an adjustment cost of some sort (there are other ways to do this as well). Do you need changes in monetary policy to affect real output? Insert a price, wage, or information friction. And so on.

Unfortunately, adding every possible complication to make one grand model that explains everything is far too hard and complex; it simply isn't possible. Instead, depending upon the questions we ask, we put these pieces together in particular ways to isolate the important relationships and ignore the more trivial ones. This is the art of model building: isolating what is important and providing insight into the question of interest.
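To make the "modules" idea concrete, here is a minimal, purely illustrative sketch of the first example above: a one-time shock with and without a persistence mechanism. This is not any particular DSGE model; the partial-adjustment parameter `rho` and the function name are assumptions chosen only to show how adding a propagation module changes the response.

```python
# Illustrative only: output's response to a one-time unit shock,
# with and without a "persistence module" (partial adjustment).
# rho is an arbitrary assumption, not taken from any estimated model.

def impulse_response(rho, periods=8):
    """Path of the output deviation y_t after a unit shock at t=0,
    where y_t = rho * y_{t-1} + shock_t."""
    y, path = 0.0, []
    for t in range(periods):
        shock = 1.0 if t == 0 else 0.0
        y = rho * y + shock
        path.append(round(y, 3))
    return path

# No propagation mechanism: the shock's effect vanishes after one period.
print(impulse_response(rho=0.0))  # [1.0, 0.0, 0.0, ...]

# With a partial-adjustment friction: the same shock decays gradually.
print(impulse_response(rho=0.8))  # [1.0, 0.8, 0.64, ...]
```

The point is only that the same underlying shock can produce very different dynamics depending on which frictions the modeler chooses to include, which is why the choice of modules matters so much.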

We could have put the model described above together before the crisis -- all of the pieces were there -- and some people did things along these lines. But this was not the model most people used. Why? Because we didn't think the question was important. We didn't think that financial frictions were an important feature of modern business cycles, because technology and deregulation had mostly solved this problem. If the banking system couldn't collapse, why build and emphasize models that say it will? (The empirical evidence for the financial frictions channel was a bit wobbly, and that was also part of the reason these models were not emphasized. But that evidence was based upon normal times, not deep recessions, and it didn't tell us as much as we thought about the usefulness of models that incorporate financial frictions.)

Ex-post, it's easy to look back and say aha -- this was the model that would have worked. Ex-ante, the problem is much harder. Will the next big recession be driven by a financial collapse? If so, then a model like this might be useful. But what if the shock comes from some other source? Is that shock in the model? When the time comes, will we be asking the right questions, and hence building models that can help to answer them, or will we be focused on the wrong thing -- fighting the last war? We have the tools and techniques to build all sorts of models, but they won't do us much good if we aren't asking the right questions.

How do we do that? We must have a strong sense of history, I think: at a minimum, we should be able to look back and understand how various economic downturns happened, and make sure those "modules" are in the baseline model. We also need the humility to understand that we probably haven't progressed so much that it (e.g., a financial collapse) can't happen again. History alone is not enough, of course -- new things can always happen, things where history provides little guidance -- but we should at least incorporate the things we know can be problematic.

It wasn't our tools and techniques that failed us prior to the Great Recession. It was our arrogance -- our belief that we had solved the problem of financial meltdowns through financial innovation, deregulation, and the like -- that closed our eyes to the important questions we should have been asking. We are asking them now, but is that enough? What else should we be asking?

'Inflation in the Great Recession and New Keynesian Models'

DSGE models are "surprisingly accurate":

Inflation in the Great Recession and New Keynesian Models, by Marco Del Negro, Marc P. Giannoni, and Frank Schorfheide: It has been argued that existing DSGE models cannot properly account for the evolution of key macroeconomic variables during and following the recent Great Recession, and that models in which inflation depends on economic slack cannot explain the recent muted behavior of inflation, given the sharp drop in output that occurred in 2008-09. In this paper, we use a standard DSGE model available prior to the recent crisis and estimated with data up to the third quarter of 2008 to explain the behavior of key macroeconomic variables since the crisis. We show that as soon as the financial stress jumped in the fourth quarter of 2008, the model successfully predicts a sharp contraction in economic activity along with a modest and more protracted decline in inflation. The model does so even though inflation remains very dependent on the evolution of both economic activity and monetary policy. We conclude that while the model considered does not capture all short-term fluctuations in key macroeconomic variables, it has proven surprisingly accurate during the recent crisis and the subsequent recovery. [pdf]

Saturday, May 25, 2013

'The Hangover Theory'

Robert Waldmann's comments on the response to Michael Kinsley remind me of this old article from Paul Krugman (I've posted this before, but it seems like a good time to post it again -- it was written in 1998 and it foreshadows/debunks many of the bad arguments used to justify austerity, etc.):

The Hangover Theory: A few weeks ago, a journalist devoted a substantial part of a profile of yours truly to my failure to pay due attention to the "Austrian theory" of the business cycle--a theory that I regard as being about as worthy of serious study as the phlogiston theory of fire. Oh well. But the incident set me thinking--not so much about that particular theory as about the general worldview behind it. Call it the overinvestment theory of recessions, or "liquidationism," or just call it the "hangover theory." It is the idea that slumps are the price we pay for booms, that the suffering the economy experiences during a recession is a necessary punishment for the excesses of the previous expansion.
The hangover theory is perversely seductive--not because it offers an easy way out, but because it doesn't. It turns the wiggles on our charts into a morality play, a tale of hubris and downfall. And it offers adherents the special pleasure of dispensing painful advice with a clear conscience, secure in the belief that they are not heartless but merely practicing tough love.
Powerful as these seductions may be, they must be resisted--for the hangover theory is disastrously wrongheaded. Recessions are not necessary consequences of booms. They can and should be fought, not with austerity but with liberality--with policies that encourage people to spend more, not less. Nor is this merely an academic argument: The hangover theory can do real harm. Liquidationist views played an important role in the spread of the Great Depression--with Austrian theorists such as Friedrich von Hayek and Joseph Schumpeter strenuously arguing, in the very depths of that depression, against any attempt to restore "sham" prosperity by expanding credit and the money supply. And these same views are doing their bit to inhibit recovery in the world's depressed economies at this very moment.
The many variants of the hangover theory all go something like this: In the beginning, an investment boom gets out of hand. Maybe excessive money creation or reckless bank lending drives it, maybe it is simply a matter of irrational exuberance on the part of entrepreneurs. Whatever the reason, all that investment leads to the creation of too much capacity--of factories that cannot find markets, of office buildings that cannot find tenants. Since construction projects take time to complete, however, the boom can proceed for a while before its unsoundness becomes apparent. Eventually, however, reality strikes--investors go bust and investment spending collapses. The result is a slump whose depth is in proportion to the previous excesses. Moreover, that slump is part of the necessary healing process: The excess capacity gets worked off, prices and wages fall from their excessive boom levels, and only then is the economy ready to recover.
Except for that last bit about the virtues of recessions, this is not a bad story about investment cycles. Anyone who has watched the ups and downs of, say, Boston's real estate market over the past 20 years can tell you that episodes in which overoptimism and overbuilding are followed by a bleary-eyed morning after are very much a part of real life. But let's ask a seemingly silly question: Why should the ups and downs of investment demand lead to ups and downs in the economy as a whole? Don't say that it's obvious--although investment cycles clearly are associated with economywide recessions and recoveries in practice, a theory is supposed to explain observed correlations, not just assume them. And in fact the key to the Keynesian revolution in economic thought--a revolution that made hangover theory in general and Austrian theory in particular as obsolete as epicycles--was John Maynard Keynes' realization that the crucial question was not why investment demand sometimes declines, but why such declines cause the whole economy to slump.
Here's the problem: As a matter of simple arithmetic, total spending in the economy is necessarily equal to total income (every sale is also a purchase, and vice versa). So if people decide to spend less on investment goods, doesn't that mean that they must be deciding to spend more on consumption goods--implying that an investment slump should always be accompanied by a corresponding consumption boom? And if so why should there be a rise in unemployment?
Most modern hangover theorists probably don't even realize this is a problem for their story. Nor did those supposedly deep Austrian theorists answer the riddle. The best that von Hayek or Schumpeter could come up with was the vague suggestion that unemployment was a frictional problem created as the economy transferred workers from a bloated investment goods sector back to the production of consumer goods. (Hence their opposition to any attempt to increase demand: This would leave "part of the work of depression undone," since mass unemployment was part of the process of "adapting the structure of production.") But in that case, why doesn't the investment boom--which presumably requires a transfer of workers in the opposite direction--also generate mass unemployment? And anyway, this story bears little resemblance to what actually happens in a recession, when every industry--not just the investment sector--normally contracts.
As is so often the case in economics (or for that matter in any intellectual endeavor), the explanation of how recessions can happen, though arrived at only after an epic intellectual journey, turns out to be extremely simple. A recession happens when, for whatever reason, a large part of the private sector tries to increase its cash reserves at the same time. Yet, for all its simplicity, the insight that a slump is about an excess demand for money makes nonsense of the whole hangover theory. For if the problem is that collectively people want to hold more money than there is in circulation, why not simply increase the supply of money? You may tell me that it's not that simple, that during the previous boom businessmen made bad investments and banks made bad loans. Well, fine. Junk the bad investments and write off the bad loans. Why should this require that perfectly good productive capacity be left idle?
The hangover theory, then, turns out to be intellectually incoherent; nobody has managed to explain why bad investments in the past require the unemployment of good workers in the present. Yet the theory has powerful emotional appeal. Usually that appeal is strongest for conservatives, who can't stand the thought that positive action by governments (let alone--horrors!--printing money) can ever be a good idea. Some libertarians extol the Austrian theory, not because they have really thought that theory through, but because they feel the need for some prestigious alternative to the perceived statist implications of Keynesianism. And some people probably are attracted to Austrianism because they imagine that it devalues the intellectual pretensions of economics professors. But moderates and liberals are not immune to the theory's seductive charms--especially when it gives them a chance to lecture others on their failings.
Few Western commentators have resisted the temptation to turn Asia's economic woes into an occasion for moralizing on the region's past sins. How many articles have you read blaming Japan's current malaise on the excesses of the "bubble economy" of the 1980s--even though that bubble burst almost a decade ago? How many editorials have you seen warning that credit expansion in Korea or Malaysia is a terrible idea, because after all it was excessive credit expansion that created the problem in the first place?
And the Asians--the Japanese in particular--take such strictures seriously. One often hears that Japan is adrift because its politicians refuse to make hard choices, to take on vested interests. The truth is that the Japanese have been remarkably willing to make hard choices, such as raising taxes sharply in 1997. Indeed, they are in trouble partly because they insist on making hard choices, when what the economy really needs is to take the easy way out. The Great Depression happened largely because policy-makers imagined that austerity was the way to fight a recession; the not-so-great depression that has enveloped much of Asia has been worsened by the same instinct. Keynes had it right: Often, if not always, "it is ideas, not vested interests, that are dangerous for good or evil."