Category Archive for: Macroeconomics

Tuesday, December 18, 2012

Is Macro Rotten?

Paul Krugman, quoted below, started this off (or, perhaps better, continued an older discussion) by claiming that the state of macro is rotten. Steve Williamson, also quoted below, replied, and this is Simon Wren-Lewis's reply to Williamson (remember that, as Wren-Lewis notes below, he has defended the modern approach to macro).

This pretty well covers my views, and I think this part of the Wren-Lewis rebuttal gets at the heart of the issue: "You would not think of suggesting that Paul Krugman is out of touch unless you are in effect dismissing or marginalizing this whole line of research." I am also very much in agreement with the "two unhelpful biases" he notes in the last paragraph, and have been thinking of writing more about the first, "too much of an obsession with microfoundation purity, and too little interest in evidence," particularly the lack of interest in using empirical evidence to test and reject models. (It may be that such tests have fallen out of favor in macro because we only have historical data to work with, and it's folly to build a model with knowledge of the data and then test whether the model fits: of course it will fit, or at least it should, though there are ways to get around this problem. That would explain why there appears to be a greater reliance upon logic, intuition, and consistency with microfoundations than in the past. Today, models seem more likely to be rejected for lack of internal theoretical consistency than for lack of consistency with the empirical evidence):

The New Classical Revolution: Technical or Ideological?, by Simon Wren-Lewis:
Paul Krugman: The state of macro is, in fact, rotten, and will remain so until the cult that has taken over half the field is somehow dislodged
The cult here is freshwater macro, which descends from the New Classical revolution. In response
Steve Williamson: “At the time, this revolution was widely misperceived as a fundamentally conservative movement. It was actually a nerd revolution. ... What these people had on their side were mathematics, econometrics, and most of all the power of economic theory. There was nothing weird about what these nerds were doing - they were simply applying received theory to problems in macroeconomics. Why could that be thought of as offensive?”
The New Classical revolution was clearly anti-Keynesian..., but was that simply because Keynesian theory was the dominant paradigm? ...
I certainly think that New Classical economists revolutionized macroeconomic theory, and that the theory is much better for it. Paul Krugman (PK) and I have disagreed on this point before. ...

But this is not where the real disagreement between PK and SW lies. The New Classical revolution became the New Neoclassical Synthesis, with New Keynesian theory essentially taking the ideas of the revolutionaries and adapting Keynesian theory to incorporate them. Once again, I believe this was a progressive change. While there is plenty wrong with New Keynesian theory, and the microfoundations project on which it is based, I would much rather start from there than with the theory I was taught in the 1970s. As SW says “Most of us now speak the same language, and communication is good.” ...
I think the difficulty that PK and I share is with those who in effect rejected or ignored the New Neoclassical Synthesis. I can think of no reason why the New Classical economist as ‘revolutionary nerd’ should do this, which suggests that SW’s characterization is only half true. Everyone can have their opinion about particular ideas or developments, but it is not normal to largely ignore what one half of the profession is doing. Yet that seems to be what has happened in significant parts of academia.
SW likes to dismiss PK as being out of touch with current macro research. Let's look at the evidence. PK was very much at the forefront of analyzing the Zero Lower Bound problem, before that problem hit most of the world. While many point to Mike Woodford’s Jackson Hole paper as being the intellectual inspiration behind recent changes at the Fed, the technical analysis can be found in Eggertsson and Woodford, 2003. That paper’s introduction first mentions Keynes, and then Krugman’s 1998 paper on Japan. Subsequently we have Eggertsson and Krugman (2010), which is part of a flourishing research program that adds ‘financial frictions’ into the New Keynesian model. You would not think of suggesting that PK is out of touch unless you are in effect dismissing or marginalizing this whole line of research.[2]
I would not describe the state of macro as rotten, because that appears to dismiss what most mainstream macroeconomists are doing. I would however describe it as suffering from two unhelpful biases. The first is methodological: too much of an obsession with microfoundation purity, and too little interest in evidence. The second is ideological: a legacy of the New Classical revolution that refuses to acknowledge the centrality of Keynesian insights to macroeconomics. These biases are a serious problem, partly because they can distort research effort, but also because they encourage policy makers to make major mistakes.[3]
Footnotes
[1] The clash between Monetarism and Keynesianism was mostly a clash about policy: Friedman used the Keynesian theoretical framework, and indeed contributed greatly to it.

[2] It may be legitimate to suggest someone is out of touch with macro theory if they make statements that are just inconsistent with mainstream theory, without acknowledging this to be the case. The example that most obviously comes to mind is statements like these, about the impact of fiscal policy.

[3] In the case of the UK, a charitable explanation for the Conservative opposition to countercyclical fiscal policy and their embrace of austerity was that they believed conventional monetary policy could always stabilize the economy. If they had taken on board PK’s analysis of Japan, or Eggertsson and Woodford, they would not have made that mistake.
Update: Noah Smith also comments.

Sunday, December 16, 2012

'Mistaking Models for Reality'

Simon Wren-Lewis takes issue with Stephen Williamson's claim that "there are good reasons to think that the welfare losses from wage/price rigidity are small":

Mistaking models for reality, by Simon Wren-Lewis: In a recent post, Paul Krugman used a well known Tobin quote: it takes a lot of Harberger triangles to fill an Okun gap. For non-economists, this means that the social welfare costs of resource misallocations that arise because prices are ‘wrong’ (because of monopoly, taxation, etc.) are small compared to the costs of recessions. Stephen Williamson takes issue with this idea. His argument can be roughly summarized as follows:

1) Keynesian recessions arise because prices are sticky, and therefore 'wrong', so their costs are not fundamentally different from resource misallocation costs.

2) Models of price stickiness exaggerate these costs, because their microfoundations are dubious.

3) If the welfare costs of price stickiness were significant, why are they not arbitraged away?

I’ve heard these arguments, or variations on them, many times before.[1] So let's see why they are mistaken...
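[To put rough numbers on Tobin's quip: a back-of-the-envelope comparison, with purely illustrative values rather than anything from the post, shows why a typical triangle is an order of magnitude or two smaller than a typical gap.]

```python
# Back-of-the-envelope comparison of a Harberger triangle and an Okun gap.
# All numbers are illustrative assumptions, not estimates from the post.

# Harberger triangle: deadweight loss of a distortionary tax, roughly
# DWL ~ 0.5 * t^2 * elasticity * (taxed base as a share of GDP).
t = 0.10           # 10% tax wedge
elasticity = 1.0   # assumed demand elasticity
base_share = 0.30  # taxed sector is 30% of GDP
triangle = 0.5 * t**2 * elasticity * base_share   # share of GDP

# Okun gap: output lost in a recession, e.g. a 3-point unemployment gap
# with an Okun coefficient of 2.
u_gap = 0.03
okun_coeff = 2.0
gap = okun_coeff * u_gap                          # share of GDP

print(f"Harberger triangle: {triangle:.2%} of GDP")              # ~0.15%
print(f"Okun gap:           {gap:.2%} of GDP")                   # ~6.00%
print(f"Triangles needed to fill the gap: {gap / triangle:.0f}") # ~40
```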

But I want to focus on this. How useful are representative agent models, e.g. New Keynesian models, for examining questions such as the costs of unemployment?:

Let's move from wage and price stickiness to the major cost of recessions: unemployment. The way that this is modeled in most New Keynesian set-ups based on representative agents is that workers cannot supply as many hours as they want. In that case, workers suffer the cost of lower incomes, but at least they get the benefit of more leisure. Here is a triangle maybe (see Nick Rowe again). Now this grossly underestimates the cost of recessions. One reason is heterogeneity: many workers carry on working the same number of hours in a recession, but some become unemployed. Standard consumer theory tells us this generates larger aggregate costs, and with more complicated models this can be quantified. However the more important reason, which follows from heterogeneity, is that the long-term unemployed typically do not think that at least they have more leisure time, and so are not so badly off. Instead they feel rejected, inadequate, despairing, and it scars them for life. Now that may not be in the microfounded models, but that does not make these feelings disappear, and certainly does not mean they should be ignored.
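[The claim that heterogeneity raises aggregate costs is standard concavity: a given income loss hurts more when concentrated on a few people than when spread thinly over everyone. A minimal sketch with an assumed CRRA utility and illustrative numbers, deliberately leaving out the leisure and scarring effects discussed above:]

```python
import numpy as np

# Concavity point: a given loss of aggregate income costs more welfare when
# concentrated on a few unemployed workers than when spread across everyone.
# CRRA utility with gamma = 2 is an assumed, illustrative choice.
def u(c, gamma=2.0):
    return c ** (1 - gamma) / (1 - gamma)

n, loss = 1_000_000, 0.05        # workers; recession destroys 5% of income

# Case 1 (representative agent): everyone works 5% fewer hours.
c_shared = np.full(n, 1.0 - loss)

# Case 2 (heterogeneous): some workers fall to a replacement income of 0.2,
# the rest are untouched; the number is set so the aggregate loss matches.
c_concentrated = np.full(n, 1.0)
n_unemployed = int(n * loss / 0.8)
c_concentrated[:n_unemployed] = 0.2

print(f"mean income:  {c_shared.mean():.3f} vs {c_concentrated.mean():.3f}")
print(f"mean utility: {u(c_shared).mean():.3f} vs {u(c_concentrated).mean():.3f}")
# same aggregate income, but markedly lower welfare in the concentrated case
```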

It is for this reason that I have always had mixed feelings about representative agent models that measure the costs of recessions and inflation in terms of the agent’s utility.[2] In terms of modeling it has allowed business cycle costs to be measured using the same metric as the costs of distortionary taxation and under/over provision of public goods, which has been great for examining issues involving fiscal policy, for example. Much of my own research over the last decade has used this device. But it does ignore the more important reasons why we should care about recessions. Which is perhaps OK, as long as we remember this. The moment we actually think we are capturing the costs of recessions using our models in this way, we once again confuse models with reality.

What does he mean by confusing models with reality?:

The problem with modeling price rigidity is that there are too many plausible reasons for this rigidity - too many microfoundations. (Alan Blinder’s work is a classic reference here.) Microfounded models typically choose one for tractability. It is generally possible to pick holes in any particular tractable story behind price rigidity (like Calvo contracts). But it does not follow that these models of Keynesian business cycles exaggerate the size of recessions. It seems much more plausible to argue completely the opposite: because microfounded models typically only look at one source of nominal rigidity, they underestimate its extent and costs.

I could make the same point in a slightly different way. Let's suppose that we do not fully understand what causes recessions. What we do understand, in the simple models we use, accounts for small recessions, but not large ones. Therefore, large recessions cannot exist. The logic is obviously faulty, but too many economists argue this way. In only ‘modeling what we understand’, there is a danger that modelers go on to confuse models with reality.

Let me add that while this is a good argument for why the measured costs only establish a minimum bound for the total costs, I am not sure we can be confident they do that. The reason is that I am not convinced that wage and price rigidities as modeled in the New Keynesian framework adequately capture the transmission mechanism from shocks to real effects that propelled us into the Great Recession. That is, do we really think that wage and price rigidities of the Calvo variety (or of the Rotemberg variety) are the main friction behind the downturn and struggle to recover? If prices were perfectly flexible, would our problems be over? Would they have never begun in the first place? More flexibility in housing prices might help, but the problem was a breakdown in financial intermediation which in turn caused problems for the real sector. Capturing these effects requires abandoning the representative agent framework, connecting the real and financial sectors, and then endogenizing financial cycles. There is progress on this front, but in my view existing models are simply unable to adequately capture these effects.

If this is true, if existing models do not adequately capture the transmission of financial shocks to changes in output and employment, if our models miss a fundamental mechanism at work in the recession, why should we believe estimates of fiscal multipliers, welfare effects, and so on based upon models that assume shocks are transmitted through moderate price rigidities? I think these models are good at capturing mild business cycles like those we experienced during the Great Moderation, but I question their value in large, persistent recessions induced by large financial shocks.

[For more on macro models, see Paul Krugman's The Dismal State of the Dismal Science and the links he provides in his discussion.]

Monday, November 26, 2012

Lucas Interview

Stephen Williamson notes an interview of Robert Lucas:

SED Newsletter: Lucas Interview: The November 2012 SED Newsletter has ... an interview with Robert Lucas, which is a gem. Some excerpts:

... Microfoundations:

ED: If the economy is currently in an unusual state, do micro-foundations still have a role to play?
RL: "Micro-foundations"? We know we can write down internally consistent equilibrium models where people have risk aversion parameters of 200 or where a 20% decrease in the monetary base results in a 20% decline in all prices and has no other effects. The "foundations" of these models don't guarantee empirical success or policy usefulness.
What is important---and this is straight out of Kydland and Prescott---is that if a model is formulated so that its parameters are economically interpretable they will have implications for many different data sets. An aggregate theory of consumption and income movements over time should be consistent with cross-section and panel evidence (Friedman and Modigliani). An estimate of risk aversion should fit the wide variety of situations involving uncertainty that we can observe (Mehra and Prescott). Estimates of labor supply should be consistent with aggregate employment movements over time as well as cross-section, panel, and lifecycle evidence (Rogerson). This kind of cross-validation (or invalidation!) is only possible with models that have clear underlying economics: micro-foundations, if you like.

This is bread-and-butter stuff in the hard sciences. You try to estimate a given parameter in as many ways as you can, consistent with the same theory. If you can reduce a 3 orders of magnitude discrepancy to 1 order of magnitude you are making progress. Real science is hard work and you take what you can get.

"Unusual state"? Is that what we call it when our favorite models don't deliver what we had hoped? I would call that our usual state.

Friday, November 23, 2012

'Imagine Economists had Warned of a Financial Crisis'

Chris Dillow:

... Imagine economists had widely and credibly warned of a financial crisis in the mid-00s. People would have responded to such warnings by lending less and borrowing less (I'm ignoring agency problems here). But this would have resulted in less gearing and so no crisis. There would now be a crisis in economics as everyone wondered why the disaster we predicted never happened. ...
His main point, however, revolves around Keynes' statement that "If economists could manage to get themselves thought of as humble, competent people on a level with dentists, that would be splendid":
I suspect there's another reason why economics is thought to be in crisis. It's because, as Coase says, (some? many?) economists lost sight of ordinary life and people, preferring to be policy advisors, theorists or - worst of all - forecasters.

In doing this, many stopped even trying to pursue Keynes' goal. What sort of reputation would dentists have if they stopped dealing with people's teeth and preferred to give the government advice on dental policy, tried to forecast the prevalence of tooth decay or called for new ways of conceptualizing mouths?

Perhaps, then, the problem with economists is that they failed to consider what function the profession can reasonably serve.

Tuesday, November 06, 2012

'Four Top Economists Huddled Round a Lunch Table'

Aditya Chakrabortty:

...As one of the 10 most expensive private colleges in the US, Carnegie Mellon in Pittsburgh almost oppresses visitors with neo-gothic grandness... I was a guest of Carol Goldburg, the director of CMU's undergraduate economics program, who had gathered a few colleagues to give their take on the presidential election. Here were four top economists huddled round a lunch table: they were surely going to regale me with talk of labor-market policy, global imbalances, marginal tax rates.
My opener was an easy ball: how did they think President Obama had done? Sevin Yeltekin, an expert on political economy, was the first to respond: "He hasn't delivered on a lot of his promises, but he inherited a big mess. I'd give him a solid B."
I threw the same question to her neighbor and one of America's most renowned rightwing economists, Allan Meltzer. He snapped: "A straight F: he took a mess and made it even bigger." Then came Goldburg, now wearing the look of a hostess whose guests are falling out: "Well, I'm concerned about welfare and poverty, and Obama's tried hard on those issues." A tentative pause. "B-minus?"
Finally it was the turn of Bennett McCallum, author of such refined works as Multiple-Solution Indeterminacies in Monetary Policy Analysis. Surely he would bring the much-needed technical ballast? Um, no. "D: he's trying to turn this country into France."
Some of these comments were surely made for the benefit of their audience: faced with a mere scribbler, the scholars had evidently decided to hold the algebra, and instead talk human. Even so, this was a remarkable row. Here were four economists on the same faculty, who probably taught some of the same students; yet Obama's reputation depended entirely on who was doing the assessment. The president was either B or F, good or a failure: opposite poles with no middle ground, and not even agreement on the judging criteria. ...

Monday, November 05, 2012

Maurizio Bovi: Are You a Good Econometrician? No, I am British (With a Response from George Evans)

Via email, Maurizio Bovi describes a paper of his on adaptive learning (M. Bovi (2012), "Are the Representative Agent’s Beliefs Based on Efficient Econometric Models?" Journal of Economic Dynamics and Control). A colleague of mine, George Evans -- a leader in this area -- responds below Bovi's summary:

Are you a good econometrician? No, I am British!, by Maurizio Bovi*: A typical assumption of mainstream strands of research is that agents’ expectations are grounded in efficient econometric models. Muthian agents are all equally rational and know the true model. The adaptive learning literature assumes that agents are boundedly rational in the sense that they are as smart as econometricians and that they are able to learn the correct model. The predictor choice approach argues that individuals are boundedly rational in the sense that agents switch to the forecasting rule that has the highest fitness; preferences could generate enduring inertia in the dynamic switching process, and a stationary environment over a sufficiently long period is necessary to learn the correct model. Having said this, all the cited approaches typically argue that there is a general tendency to forecast via optimal forecasting models because of the costs stemming from inefficient predictions.
To the extent that the representative agent’s beliefs i) are based on efficient (in terms of minimum MSE, i.e. mean squared forecasting error) econometric models, and ii) can be captured by ad hoc surveys, two basic facts emerge, stimulating my curiosity. First, in economic systems where the same simple model turns out to be the best predictor for a sufficient span of time, survey expectations should tend to converge: more and more individuals should learn or select it. Second, the forecasting fitness of this enduring minimum-MSE econometric model should not be further enhanced by the use of information provided by survey expectations. If agents act as if they were statisticians in the sense that they use efficient forecasting rules, then survey-based beliefs must reflect this and cannot contain any statistically significant information that helps reduce the MSE relative to the best econometric predictor. In sum, there could be some value in analyzing hard data and survey beliefs to understand i) whether these latter derive from optimal econometric models and ii) the time connections between survey-declared and efficient model-grounded expectations.

By examining real-time GDP dynamics in the UK I have found that, over a time-span of two decades, the adaptive expectations (AE) model systematically outperforms other standard predictors which, as argued by the literature recalled above, should be in the tool-box of representative econometricians (Random Walk, ARIMA, VAR). As mentioned, this peculiar environment should eventually lead to increased homogeneity in best-model based expectations. However, data collected in the surveys managed by the Business Surveys Unit of the European Commission (European Commission, 2007) highlight that great variety in expectations persists. Figure 1 shows that in the UK the numbers of optimists and pessimists have tended to be rather similar at least since the inception of data availability in 1985.[1]

[Figure 1. UK survey expectations: the numbers of optimists and pessimists, 1985 onwards.]

In addition, evidence points to one-way information flows going from survey data to econometric models. In particular, Granger-causality, variance decomposition and Geweke’s instantaneous feedback tests suggest that the accuracy of the AE forecasting model can be further enhanced by using the information provided by the level of disagreement across survey beliefs. That is, for GDP dynamics in the UK, the expectation feedback system looks like an open loop in which beliefs that are possibly not econometrically based play a key role with respect to realizations. All this affects the general validity of the widespread assumption that representative agents’ beliefs derive from optimal econometric models.
Results are robust to several methods of quantifications of qualitative survey observations as well as to standard forecasting rules estimated both recursively and via optimal-size rolling windows. They are also in line both with the literature supporting the non-econometrically-based content of the information captured by surveys carried out on laypeople and, interpreting MSE as a measure of volatility, with the stylized fact on the positive correlation between dispersion in beliefs and macroeconomic uncertainty.
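[For readers who want to see the mechanics of such a forecasting horse-race, here is a minimal simulation sketch. It uses artificial data from a local-level (IMA(1,1)) process, for which adaptive expectations -- exponential smoothing -- is the right kind of rule; it does not use the paper's real-time UK data.]

```python
import numpy as np

# Minimal forecasting horse-race: adaptive expectations (AE) versus a
# random walk (RW) rule, judged by mean squared forecast error (MSE).
# Data are simulated "GDP growth": a slowly drifting mean plus noise.
rng = np.random.default_rng(1)
T = 5_000
level = np.cumsum(0.1 * rng.standard_normal(T))   # drifting mean growth
x = level + rng.standard_normal(T)                # observed growth

gamma = 0.2                                       # AE gain (assumed)
ae, e_ae, e_rw = 0.0, [], []
for t in range(1, T):
    e_ae.append(x[t] - ae)                        # AE forecast error
    e_rw.append(x[t] - x[t - 1])                  # RW forecast error
    ae += gamma * (x[t] - ae)                     # AE update after seeing x_t

print("MSE, adaptive expectations:", np.mean(np.square(e_ae)))
print("MSE, random walk:          ", np.mean(np.square(e_rw)))  # larger
```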
All in all, our evidence raises some intriguing questions: Why do representative UK citizens seem to be systematically more boundedly rational than is usually hypothesized in the adaptive learning literature and the predictor choice approach? What persistently prevents them from using the most accurate statistical model? Are there econometric (objective) or psychological (subjective) impediments?
____________________
*Italian National Institute of Statistics (ISTAT), Department of Forecasting and Economic Analysis. The opinions expressed herein are those of the author (E-mail mbovi@istat.it) and do not necessarily reflect the views of ISTAT.
[1] The question is “How do you expect the general economic situation in the country to develop over the next 12 months?” Respondents may reply “it will…: i) get a lot better, ii) get a little better, iii) stay the same, iv) get a little worse, v) get a lot worse, vi) I do not know.” See European Commission (2007).
References
European Commission (2007). The Joint Harmonised EU Programme of Business and Consumer Surveys, User Guide, European Commission, Directorate-General for Economic and Financial Affairs, July.
M. Bovi (2012). “Are the Representative Agent’s Beliefs Based on Efficient Econometric Models?” Journal of Economic Dynamics and Control DOI: 10.1016/j.jedc.2012.10.005.

Here's the response from George Evans:

Comments on Maurizio Bovi, “Are the Representative Agent’s Beliefs Based on Efficient Econometric Models?”, by George Evans, University of Oregon: This is an interesting paper that has a lot of common ground with the adaptive learning literature. The techniques and a number of the arguments will be familiar to those of us who work in adaptive learning. The tenets of the adaptive learning approach can be summarized as follows: (1) Fully “rational expectations” (RE) are implausibly strong and implicitly ignore a coordination issue that arises because economic outcomes are affected by the expectations of firms and households (economic “agents”). (2) A more plausible view is that agents have bounded rationality with a degree of rationality comparable to economists themselves (the “cognitive consistency principle”). For example, agents’ expectations might be based on statistical models that are revised and updated over time. On this approach we avoid assuming that agents are smarter than economists, but we also recognize that agents will not go on forever making systematic errors. (3) We should recognize that economic agents, like economists, do not agree on a single forecasting model. The economy is complex. Therefore, agents are likely to use misspecified models and to have heterogeneous expectations.
The focus of the adaptive learning literature has changed over time. The early focus was on whether agents using statistical learning rules would or would not eventually converge to RE, while the main emphasis now is on the ways in which adaptive learning can generate new dynamics, e.g. through discounting of older data and/or switching between forecasting models over time. I use the term “adaptive learning” broadly, to include, for example, the dynamic predictor selection literature.
Bovi’s paper “Are the Representative Agent’s Beliefs Based on Efficient Econometric Models” argues that with respect to GDP growth in the UK the answer to his question is no because 1) there is a single efficient econometric model, which is a version of AE (adaptive expectations), and 2) agents might be expected therefore to have learned to adopt this optimal forecasting model over time. However the degree of heterogeneity of expectations has not fallen over time, and thus agents are failing to learn to use the best forecasting model.
From the adaptive learning perspective, Bovi’s first result is intriguing, and merits further investigation, but his approach will look very familiar to those of us who work in adaptive learning. And the second point will surprise few of us: the extent of heterogeneous expectations is well-known, as is the fact that expectations remain persistently heterogeneous, and there is considerable work within adaptive learning that models this heterogeneity.
More specifically:
1) Bovi’s “efficient” model uses AE with the adaptive expectations parameter gamma updated over time in a way that aims to minimize the squared forecast error. This is in fact a simple adaptive learning model, which was proposed and studied in Evans and Ramey, “Adaptive expectations, underparameterization and the Lucas critique”, Journal of Monetary Economics (2006). We there suggested that agents might want to use AE as an optimal choice for a parsimonious (underparameterized) forecasting rule, showed what would determine the optimal choice of gamma, and provided an adaptive learning algorithm that would allow agents to update their choice of gamma over time in order to track unknown structural change. (Our adaptive learning rule exploits the fact that AE can be viewed as the forecast that arises from an IMA(1,1) time-series model, and in our rule the MA parameter is estimated and updated recursively using a constant gain rule.)
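[Here is an illustrative stand-in for this kind of updating rule. It is not the Evans-Ramey algorithm itself, as the comments note, just a transparent way to "update the choice of gamma over time."]

```python
import numpy as np

# Stand-in for the updating idea in point 1): carry a small grid of
# candidate AE parameters (gammas), score each by its accumulated squared
# forecast errors, and use whichever currently scores best. This is NOT
# the Evans-Ramey recursion (they estimate the MA parameter of an
# IMA(1,1) representation with a constant-gain rule); it is a sketch.
rng = np.random.default_rng(2)
T = 20_000
# simulated series: slowly drifting mean ("structural change") plus noise
x = np.cumsum(0.2 * rng.standard_normal(T)) + rng.standard_normal(T)

gammas = np.array([0.05, 0.1, 0.2, 0.5])
forecasts = np.zeros_like(gammas)   # one AE forecast per candidate gamma
scores = np.zeros_like(gammas)      # accumulated squared forecast errors

for t in range(T):
    errors = x[t] - forecasts
    scores += errors ** 2
    forecasts += gammas * errors    # AE update for every candidate

print("best gamma on this sample:", gammas[scores.argmin()])  # ~0.2 here
```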
2) At the same time I am suspicious that economists will agree that there is a single best way to forecast GDP growth. For the US there is a lot of work by numerous researchers strongly indicating that (i) choosing between univariate time-series models is controversial, i.e. there appears to be no single clearly best univariate forecasting model, and (ii) forecasting models for GDP growth should be multivariate and should include both current and lagged unemployment rates and the consumption to GDP ratio. Other forecasters have found a role for nonlinear (Markov-switching) dynamics. Thus I doubt that there will be agreement by economists on a single best forecasting model for GDP growth or other key macro variables. Hence we should expect households and firms also to entertain multiple forecasting models, and for different agents to use different models.
3) Even if there were a single forecasting model that clearly dominated, one would not expect homogeneity of expectations across agents or for heterogeneity to disappear over time. In Evans and Honkapohja, “Learning as a Rational Foundation for Macroeconomics and Finance”, forthcoming 2013 in R Frydman and E Phelps, Rethinking Expectations: The Way Forward for Macroeconomics, we point out that variations across agents in the extent of discounting and the frequency with which agents update parameter estimates, as well as the inclusion of idiosyncratic exogenous expectation shocks, will give rise to persistent heterogeneity. There are costs to forecasting, and some agents will have larger benefits from more accurate forecasts than other agents. For example, for some agents the forecast method advocated by Bovi will be too costly and an even simpler forecast will be adequate (e.g. a RW forecast that the coming year will be like last year, or a forecast based on mean growth over, say, the last five years).
4) When there are multiple models potentially in play, as there always are, the dynamic predictor selection approach initiated by Brock and Hommes implies that, because of varying costs of forecast methods and heterogeneous costs across agents, not all agents will want to use what appears to be the best performing model. We therefore expect heterogeneous expectations at any moment in time. I do not regard this as a violation of the cognitive consistency principle – even economists will find that in some circumstances in their personal decision-making they use more boundedly rational forecast methods than in other situations in which the stakes are high.
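[A minimal sketch of the discrete-choice switching mechanics, with illustrative numbers; the feedback from predictor shares back to the forecast target, which is central to Brock and Hommes, is omitted to keep it short.]

```python
import numpy as np

# Brock-Hommes style predictor selection: agents choose between a costly
# "sophisticated" forecast (here, one that knows the cyclical component of
# the series) and a free naive rule, with the population share of each
# given by a logit on discounted past squared errors net of costs.
rng = np.random.default_rng(3)
T, beta, cost, memory = 2_000, 2.0, 0.3, 0.95   # beta: intensity of choice
x = 0.8 * np.sin(np.arange(T) / 10) + 0.3 * rng.standard_normal(T)

UA = UB = 0.0                       # discounted fitness of each predictor
for t in range(1, T):
    fA = 0.8 * np.sin(t / 10)       # sophisticated forecast, cost 0.3
    fB = x[t - 1]                   # naive forecast, free
    UA = memory * UA + (1 - memory) * (-(x[t] - fA) ** 2 - cost)
    UB = memory * UB + (1 - memory) * (-(x[t] - fB) ** 2)

share_A = np.exp(beta * UA) / (np.exp(beta * UA) + np.exp(beta * UB))
print(f"share using the better-but-costly model: {share_A:.2f}")
# the share stays interior: heterogeneity persists even with a "best" model
```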
In conclusion, here is my two-sentence summary for Maurizio Bovi: Your paper will find an interested audience among those of us who work in this area. Welcome to the adaptive learning approach.
George Evans

Wednesday, October 31, 2012

'The Role of Money in New-Keynesian Models'

Should New Keynesian models include a specific role for money (over and above specifying the interest rate as the policy variable)? This is a highly wonkish, but mostly accessible explanation from Bennett McCallum:

The Role of Money in New-Keynesian Models, by Bennett T. McCallum, Carnegie Mellon University and National Bureau of Economic Research, Working Paper No. 2012-019, Serie de Documentos de Trabajo, October 2012

Here's the bottom line:

...we drew several conclusions supportive of the idea that a central bank that ignores money and banking will seriously misjudge the proper interest rate policy action to stabilize inflation in response to a productivity shock in the production function for output. Unfortunately, some readers discovered an error; we made a mistake in linearization that, when corrected, greatly diminished the magnitude of some of the effects of including the banking sector. There seems now to be some interest in developing improved models of this type. Marvin Goodfriend (MG) is working with a PhD student on this topic. At this point I have not been able to give a convincing argument that one needs to include M. ...
There is one respect in which it is nevertheless the case that a rule for the monetary base is superior to a rule for the interbank interest rate. In this context we are clearly discussing the choice of a controllable instrument variable - not one of the "target rules" favored by Svensson and Woodford, which are more correctly called "targets." Suppose that the central bank desires for its rule to be verifiable by the public. Then it will arguably need to be a non-activist rule, one that normally keeps the instrument setting unchanged over long spans of time. In that case we know that in the context of a standard NK model, an interest rate instrument will not be viable. That is, the rule will not satisfy the Taylor Principle, which is necessary for "determinacy." The latter condition is not, I argue, what is crucial for well-designed monetary policy, but least-squares (LS) learnability is, and learnability is not present when the Taylor Principle is not satisfied. This is well known from, e.g., Evans and Honkapohja (2001), Bullard and Mitra (2002), McCallum (2003, 2009). ...
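[The determinacy half of McCallum's point is easy to check numerically. In the textbook three-equation NK model with a rule that responds only to inflation, write the system as E_t z_{t+1} = B z_t with z = (output gap, inflation); since both variables are jump variables, determinacy requires both eigenvalues of B to lie outside the unit circle, which delivers the Taylor Principle. A minimal sketch with standard illustrative parameter values; learnability, McCallum's real concern, requires a separate E-stability calculation not shown here.]

```python
import numpy as np

# Determinacy check in the textbook NK model (IS curve + Phillips curve)
# under the rule i_t = phi_pi * pi_t. The forward-looking system is
# E_t z_{t+1} = B z_t with z = (output gap x, inflation pi); both are
# jump variables, so determinacy needs |eigenvalues of B| > 1 for both.
beta, kappa, sigma = 0.99, 0.1, 1.0   # standard illustrative values

def eig_moduli(phi_pi):
    B = np.array([
        [1 + kappa / (sigma * beta), (phi_pi - 1 / beta) / sigma],
        [-kappa / beta,              1 / beta],
    ])
    return np.abs(np.linalg.eigvals(B))

for phi_pi in (0.9, 1.5):
    moduli = eig_moduli(phi_pi)
    verdict = "determinate" if (moduli > 1).all() else "indeterminate"
    print(f"phi_pi = {phi_pi}: |eigenvalues| = {np.round(moduli, 3)} -> {verdict}")
# phi_pi = 0.9 violates the Taylor Principle and yields indeterminacy;
# phi_pi = 1.5 satisfies it and yields determinacy.
```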

Monday, October 29, 2012

What’s the Use of Economics?

Alan Kirman on how macroeconomics needs to change (I'm still thinking about his idea that the economy should be modeled as "a system which self organizes, experiencing sudden and large changes from time to time"):

What’s the use of economics?, by Alan Kirman, Vox EU: The simple question that was raised during a recent conference organized by Diane Coyle at the Bank of England was to what extent the teaching of economics has been - or should be - modified in the light of the current economic crisis. The simple answer is that the economics profession is unlikely to change. Why would economists be willing to give up much of their human capital, painstakingly nurtured for over two centuries? For macroeconomists in particular, the reaction has been to suggest that modifications of existing models to take account of ‘frictions’ or ‘imperfections’ will be enough to account for the current evolution of the world economy. The idea is that once students have understood the basics, they can be introduced to these modifications.

A turning point in economics

However, other economists such as myself feel that we have finally reached the turning point in economics where we have to radically change the way we conceive of and model the economy. The crisis is an opportune occasion to carefully investigate new approaches. Paul Seabright hit the nail on the head; economists tend to inaccurately portray their work as a steady and relentless improvement of their models whereas, actually, they are chasing an empirical reality that is changing just as fast as their modeling. I would go further; rather than making steady progress towards explaining economic phenomena, professional economists have been locked into a narrow vision of the economy. We constantly make more and more sophisticated models within that vision until, as Bob Solow put it, “the uninitiated peasant is left wondering what planet he or she is on” (Solow 2006).

In this column, I will briefly outline some of the problems the discipline of economics faces; problems that have been shown up in stark relief during the current crisis. Then I will come back to what we should try to teach students of economics.

Entrenched views on theory and reality

The typical attitude of economists is epitomized by Mario Draghi, President of the European Central Bank. Regarding the Eurozone crisis, he said:

“The first thing that came to mind was something that people said many years ago and then stopped saying it: The euro is like a bumblebee. This is a mystery of nature because it shouldn’t fly but instead it does. So the euro was a bumblebee that flew very well for several years. And now – and I think people ask ‘how come?’ – probably there was something in the atmosphere, in the air, that made the bumblebee fly. Now something must have changed in the air, and we know what after the financial crisis. The bumblebee would have to graduate to a real bee. And that’s what it’s doing” (Draghi 2012).

What Draghi is saying is that, according to our economic models, the Eurozone should not have flown. Entomologists (those who study insects) of old, armed with simpler models, came to the conclusion that bumblebees should not be able to fly. Their reaction was to rethink their models in light of the irrefutable evidence. Yet the economist’s instinct is to attempt to modify reality in order to fit a model that has been built on longstanding theory. Unfortunately, that very theory is itself based on shaky foundations.

Economic theory can mislead

Every student in economics is faced with the model of the isolated optimizing individual who makes his choices within the constraints imposed by the market. Somehow, the axioms of rationality imposed on this individual are not very convincing, particularly to first-time students. But the student is told that the aim of the exercise is to show that there is an equilibrium - that there can be prices that will clear all markets simultaneously. And, furthermore, the student is taught that such an equilibrium has desirable welfare properties. Importantly, the student is told that since the 1970s it has been known that whilst such a system of equilibrium prices may exist, we cannot show that the economy would ever reach an equilibrium, nor that such an equilibrium is unique.

The student then moves on to macroeconomics and is told that the aggregate economy or market behaves just like the average individual she has just studied. She is not told that these general models in fact poorly reflect reality. For the macroeconomist, this is a boon since he can now analyze the aggregate allocations in an economy as though they were the result of the rational choices made by one individual. The student may find this even more difficult to swallow when she is aware that people's preferences, choices and forecasts are often influenced by those of the other participants in the economy. Students take a long time to accept the idea that the economy’s choices can be assimilated to those of one individual.

A troubling choice for macroeconomists

Macroeconomists are faced with a stark choice: either move away from the idea that we can pursue our macroeconomic analysis whilst only making assumptions about isolated individuals, ignoring interaction; or avoid all the fundamental problems by assuming that the economy is always in equilibrium, forgetting about how it ever got there.

Exogenous shocks? Or a self-organizing system?

Macroeconomists therefore worry about something that seems, to the uninformed outsider, paradoxical. How does the economy experience fluctuations or cycles whilst remaining in equilibrium? The basic macroeconomic idea is, of course, that the economy is in a steady state and that it is hit from time to time by exogenous shocks. Yet, this is entirely at variance with the idea that economists may be dealing with a system which self organizes, experiencing sudden and large changes from time to time.

There are two reasons why the latter explanation is better than the former. First, it is very difficult to find significant events that we can point to in order to explain major turning points in the evolution of economies. Second, the idea that the economy is sailing on an equilibrium path but is from time to time buffeted by unexpected storms just does not pass what Bob Solow has called the ‘smell test’. To quote Willem Buiter (2009),

“Those of us who worry about endogenous uncertainty arising from the interactions of boundedly rational market participants cannot but scratch our heads at the insistence of the mainline models that all uncertainty is exogenous and additive”.

Some teaching suggestions

New thinking is imperative:

  • We should spend more time insisting on the importance of coordination as the main problem of modern economies rather than efficiency. Our insistence on the latter has diverted attention from the former.
  • We should cease to insist on the idea that the aggregation of the choices and actions of individuals who directly interact with each other can be captured by treating the aggregate as if it acted like just one of those many individuals. The gap between micro and macro behavior is worrying.
  • We should recognize that some of the characteristics of aggregates are caused by aggregation itself. The continuous reaction of the aggregate may be the result of individuals making simple, binary discontinuous choices. For many phenomena, it is much more realistic to think of individuals as having thresholds - which cause them to react - rather than reacting in a smooth, gradual fashion to changes in their environment. Cournot had this idea; it is a pity that we have lost sight of it. Indeed, the aggregate itself may also have thresholds which cause it to react. When enough individuals make a particular choice, the whole of society may then move. When the number of individuals is smaller, there is no such movement. One has only to think of the results of voting. (A small simulation of this mechanism appears after this list.)
  • All students should be obliged to collect their own data about some economic phenomenon at least once in their career. They will then get a feeling for the importance of institutions and of the interaction between agents and its consequences. Perhaps, best of all, this will restore their enthusiasm for economics!
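[A small simulation of the threshold mechanism flagged above, Granovetter-style, with illustrative numbers: each agent adopts once the share of adopters crosses a personal threshold, and a modest change in the dispersion of thresholds flips the aggregate outcome from inertia to cascade.]

```python
import numpy as np

# Threshold model of binary choice: agents with non-positive thresholds
# adopt unconditionally; everyone else adopts once the current share of
# adopters exceeds their personal threshold. Iterating to a fixed point
# shows how a smooth change in the threshold distribution can produce a
# discontinuous aggregate response. Numbers are illustrative.
rng = np.random.default_rng(4)

def final_share(spread, n=100_000, mean=0.3):
    thresholds = rng.normal(mean, spread, n)
    share = (thresholds <= 0).mean()          # unconditional adopters
    while True:
        new_share = (thresholds <= share).mean()
        if new_share == share:
            return share
        share = new_share

print("threshold spread 0.15 -> adoption", round(final_share(0.15), 3))  # tiny
print("threshold spread 0.20 -> adoption", round(final_share(0.20), 3))  # ~1.0
```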

Some use for traditional theory

Does this mean that we should cease to teach ‘standard’ economic theory to our students? Surely not. If we did so, these students would not be able to follow the current economic debates. As Max Planck has said, “Physics is not about discovering the natural laws that govern the universe, it is what physicists do”. For the moment, standard economics is what economists do. But we owe it to our students to point out difficulties with the structure and assumptions of our theory. Although we are still far from a paradigm shift, in the longer run the paradigm will inevitably change. We would all do well to remember that current economic thought will one day be taught as history of economic thought.

References

Buiter, W (2009), “The unfortunate uselessness of most ‘state of the art’ academic monetary economics”, Financial Times online, 3 March.
Coyle, D (2012), “What’s the use of economics? Introduction to the Vox debate”, VoxEU.org, 19 September.
Davies, H (2012), “Economics in Denial”, ProjectSyndicate.org, 22 August.
Solow, R (2006), “Reflections on the Survey”, in Colander, D, The Making of an Economist, Princeton: Princeton University Press.

Friday, October 12, 2012

'Some Unpleasant Properties of Log-Linearized Solutions when the Nominal Interest Rate is Zero'

[This one is wonkish. It's (I think) one of the more important papers from the St. Louis Fed conference.]

One thing that doesn't get enough attention in DSGE models, at least in my opinion, is the set of constraints, implicit assumptions, and so on imposed when the theoretical model is log-linearized. This paper by Tony Braun and Yuichiro Waki helps to fill that void by comparing a true nonlinear economy to its log-linearized counterpart, and showing that the results of the two models can be quite different when the economy is at the zero bound. For example, multipliers that are greater than two in the log-linearized version are smaller -- usually near one -- in the true model (thus, fiscal policy remains effective, but may need to be more aggressive than the log-linear model would imply). Other results change as well, and there are sign changes in some cases, leading the authors to conclude that "we believe that the safest way to proceed is to entirely avoid the common practice of log-linearizing the model around a stable price level when analyzing liquidity traps."
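To get a feel for what the linearization throws away, consider the resource cost of price adjustment that turns out to be central to the paper's results (see the discussion of Rotemberg pricing in the excerpt below): the cost is proportional to the square of inflation, so it vanishes from a first-order log-linearization. A small numerical sketch, with an adjustment-cost parameter chosen for illustration rather than taken from the paper:

```python
# Under Rotemberg pricing the resource constraint is roughly
#   c = y * (1 - (phi/2) * pi^2),
# but the pi^2 term is second order and so drops out of a first-order
# (log-)linearization. Near zero inflation that is harmless; with the
# large deflations needed to make the zero bound bind, it is not.
phi = 100.0   # illustrative adjustment-cost parameter (an assumption)
for pi in (0.005, 0.02, 0.10):           # inflation deviations
    exact_c_over_y = 1 - 0.5 * phi * pi**2
    linearized_c_over_y = 1.0            # the pi^2 term is dropped
    print(f"|pi| = {pi:>5.1%}: exact c/y = {exact_c_over_y:.3f}, "
          f"log-linear c/y = {linearized_c_over_y:.3f}")
# the approximation error goes from negligible to enormous as |pi| grows
```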

Here's part of the introduction and the conclusion to the paper:

Some Unpleasant Properties of Log-Linearized Solutions when the Nominal Interest Rate is Zero, by Tony Braun and Yuichiro Waki: Abstract Does fiscal policy have qualitatively different effects on the economy in a liquidity trap? We analyze a nonlinear stochastic New Keynesian model and compare the true and log-linearized equilibria. Using the log-linearized equilibrium conditions the answer to the above question is yes. However, for the true nonlinear model the answer is no. For a broad range of empirically relevant parameterizations labor falls in response to a tax cut in the log-linearized economy but rises in the true economy. While the government purchase multiplier is above two in the log-linearized economy it is about one in the true economy.
1 Introduction The recent experiences of Japan, the United States, and Europe with zero/near-zero nominal interest rates have raised new questions about the conduct of monetary and fiscal policy in a liquidity trap. A large and growing body of new research has emerged that provides answers using New Keynesian (NK) frameworks that explicitly model the zero bound on the nominal interest rate. One conclusion that has emerged is that fiscal policy has different effects on the economy when the nominal interest rate is zero. Eggertsson (2011) finds that hours worked fall in response to a labor tax cut when the nominal interest rate is zero, a property that is referred to as the “paradox of toil,” and Christiano, Eichenbaum, and Rebelo (2011), Woodford (2011) and Erceg and Lindé (2010) find that the size of the government purchase multiplier is substantially larger than one when the nominal interest rate is zero.
These and other results (see e.g. Del Negro, Eggertsson, Ferrero, and Kiyotaki (2010), Bodenstein, Erceg, and Guerrieri (2009), Eggertsson and Krugman (2010)) have been derived in setups that respect the nonlinearity in the Taylor rule but log-linearize the remaining equilibrium conditions about a steady state with a stable price level. Log-linearized NK models require large shocks to generate a binding zero lower bound for the nominal interest rate and the shocks must be even larger if these models are to reproduce the measured declines in output and inflation that occurred during the Great Depression or the Great Recession of 2007-2009.[1] Log-linearizations are local solutions that only work within a given radius of the point where the approximation is taken. Outside of this radius these solutions break down (see e.g. Den Haan and Rendahl (2009)). The objective of this paper is to document that such a breakdown can occur when analyzing the zero bound.
We study the properties of a nonlinear stochastic NK model when the nominal interest rate is constrained at its zero lower bound. Our tractable framework allows us to provide a partial analytic characterization of equilibrium and to numerically compute all equilibria when the zero interest state is persistent. There are no approximations needed when computing equilibria and our numerical solutions are accurate up to the precision of the computer. A comparison with the log-linearized equilibrium identifies a severe breakdown of the log-linearized approximate solution. This breakdown occurs when using parameterizations of the model that reproduce the U.S. Great Depression and the U.S. Great Recession.
Conditions for existence and uniqueness of equilibrium based on the log-linearized equilibrium conditions are incorrect and offer little or no guidance for existence and uniqueness of equilibrium in the true economy. The characterization of equilibrium is also incorrect.
These three unpleasant properties of the log-linearized solution have the implication that relying on it to make inferences about the properties of fiscal policy in a liquidity trap can be highly misleading. Empirically relevant parameterization/shock combinations that yield the paradox of toil in the log-linearized economy produce orthodox responses of hours worked in the true economy. The same parameterization/shock combinations that yield large government purchases multipliers in excess of two in the log-linearized economy, produce government purchase multipliers as low as 1.09 in the nonlinear economy. Indeed, we find that the most plausible parameterizations of the nonlinear model have the property that there is no paradox of toil and that the government purchase multiplier is close to one.
We make these points using a stochastic NK model that is similar to specifications considered in Eggertsson (2011) and Woodford (2011). The Taylor rule respects the zero lower bound of the nominal interest rate, and a preference discount factor shock that follows a two state Markov chain produces a state where the interest rate is zero. We assume Rotemberg (1996) price adjustment costs, instead of Calvo price setting. When log-linearized, this assumption is innocuous - the equilibrium conditions for our model are identical to those in Eggertsson (2011) and Woodford (2011), with a suitable choice of the price adjustment cost parameter. Moreover, the nonlinear economy doesn’t have any endogenous state variables, and the equilibrium conditions for hours and inflation can be reduced to two nonlinear equations in these two variables when the zero bound is binding.[2]
These two nonlinear equations are easy to solve and are the nonlinear analogues of what Eggertsson (2011) and Eggertsson and Krugman (2010) refer to as “aggregate demand” (AD) and “aggregate supply” (AS) schedules. This makes it possible for us to identify and relate the sources of the approximation errors associated with using log-linearizations to the shapes and slopes of these curves, and to also provide graphical intuition for the qualitative differences between the log-linear and nonlinear economies.
Our analysis proceeds in the following way. We first provide a complete characterization of the set of time invariant Markov zero bound equilibria in the log-linearized economy. Then we go on to characterize equilibrium of the nonlinear economy. Finally, we compare the two economies and document the nature and source of the breakdowns associated with using log-linearized equilibrium conditions. An important distinction between the nonlinear and log-linearized economy relates to the resource cost of price adjustment. This cost changes endogenously as inflation changes in the nonlinear model and modeling this cost has significant consequences for the model’s properties in the zero bound state. In the nonlinear model a labor tax cut can increase hours worked and decrease inflation when the interest rate is zero. No equilibrium of the log-linearized model has this property. We show that this and other differences in the properties of the two models are precisely due to the fact that the resource cost of price adjustment is absent from the resource constraint of the log-linearized model.[3] ...
...
5 Concluding remarks In this paper we have documented that it can be very misleading to rely on the log-linearized economy to make inferences about existence of an equilibrium, uniqueness of equilibrium or to characterize the local dynamics of equilibrium. We have illustrated that these problems arise in empirically relevant parameterizations of the model that have been chosen to match observations from the Great Depression and Great Recession.
We have also documented the response of the economy to fiscal shocks in calibrated versions of our nonlinear model. We found that the paradox of toil is not a robust property of the nonlinear model and that it is quantitatively small even when it occurs. Similarly, the evidence presented here suggests that the government purchase GDP multiplier is not much above one in our nonlinear economy.
Although we encountered situations where the log-linearized solution worked reasonably well and the model exhibited the paradox of toil and a government purchase multiplier above one, the magnitude of these effects was quantitatively small. This result was also very tenuous. There is no simple characterization of when the log-linearization works well. Breakdowns can occur in regions of the parameter space that are very close to ones where the log-linear solution works. In fact, it is hard to draw any conclusions about when one can safely rely on log-linearized solutions in this setting without also solving the nonlinear model. For these reasons we believe that the safest way to proceed is to entirely avoid the common practice of log-linearizing the model around a stable price level when analyzing liquidity traps.
This raises a question. How should one proceed with solution and estimation of medium or large scale NK models with multiple shocks and endogenous state variables when considering episodes with zero nominal interest rates? One way forward is proposed in work by Adjemian and Juillard (2010) and Braun and Körber (2011). These papers solve NK models using extended path algorithms.
We conclude by briefly discussing some extensions of our analysis. In this paper we assumed that the discount factor shock followed a time-homogeneous two state Markov chain with no shock being the absorbing state. In our current work we relax this final assumption and consider general Markov switching stochastic equilibria in which there are repeated swings between episodes with a positive interest rate and zero interest rates. We are also interested in understanding the properties of optimal monetary policy in the nonlinear model. Eggertsson and Woodford (2003), Jung, Teranishi, and Watanabe (2005), Adam and Billi (2006), Nakov (2008), and Werning (2011) consider optimal monetary policy problems subject to a non-negativity constraint on the nominal interest rate, using implementability conditions derived from log-linearized equilibrium conditions. The results documented here suggest that the properties of an optimal monetary policy could be different if one uses the nonlinear implementability conditions instead.
[1] Eggertsson (2011) requires a 5.47% annualized shock to the preference discount factor in order to account for the large output and inflation declines that occurred in the Great Depression. Coenen, Orphanides, and Wieland (2004) estimate a NK model to U.S. data from 1980-1999 and find that only very large shocks produce a binding zero nominal interest rate.
[2] Under Calvo price setting, in the nonlinear economy a particular moment of the price distribution is an endogenous state variable and it is no longer possible to compute an exact solution to the equilibrium.
[3] This distinction between the log-linearized and nonlinear resource constraint is not specific to our model of adjustment costs but also arises under Calvo price adjustment (see e.g. Braun and Waki (2010)).

Thursday, October 04, 2012

'Economists Played a Special Role in Contributing to the Problem'

This is from Andrew Haldane, Executive Director, Financial Stability, Bank of England:

What have the economists ever done for us?, by Andrew G Haldane, Vox EU: There is a long list of culprits when it comes to assigning blame for the financial crisis. At least in this instance, failure has just as many parents as success. But among the guilty parties, economists played a special role in contributing to the problem. We are duty bound to be part of the solution (see Coyle 2012). Our role in the crisis was, in a nutshell, the result of succumbing to an intellectual virus which took hold of the body financial from the 1990s onwards.
One strain of this virus is an old one. Cycles in money and bank credit are familiar from centuries past. And yet, for perhaps a generation, the symptoms of this old virus were left untreated. That neglect allowed the infection to spread from the financial system to the real economy, with near-fatal consequences for both.
In many ways, this was an odd disease to have contracted. The symptoms should have been all too obvious from history. The interplay of bank money and credit and the wider economy has been pivotal to the mandate of central banks for centuries. For at least a century, that was recognized in the design of public policy frameworks. The management of bank money and credit was a clear public policy prerequisite for maintaining broader macroeconomic and social stability.
Two developments – one academic, one policy-related – appear to have been responsible for this surprising memory loss. The first was the emergence of micro-founded dynamic stochastic general equilibrium (DSGE) models in economics. Because these models were built on real-business-cycle foundations, financial factors (asset prices, money and credit) played distinctly second fiddle, if they played a role at all.
The second was an accompanying neglect of aggregate money and credit conditions in the construction of public policy frameworks. Inflation targeting assumed primacy as a monetary policy framework, with little role for commercial banks' balance sheets as either an end or an intermediate objective. And regulation of financial firms was in many cases taken out of the hands of central banks and delegated to separate supervisory agencies with an institution-specific, non-monetary focus.
Coincidentally or not, what happened next was extraordinary. Commercial banks' balance sheets grew by the largest amount in human history. For example, having flat-lined for a century, bank assets-to-GDP in the UK rose by an order of magnitude from 1970 onwards. A similar pattern was found in other advanced economies.
This balance sheet explosion was, in one sense, no one’s fault and no one’s responsibility. Not monetary policy authorities, whose focus was now inflation and whose models scarcely permitted bank balance sheets a walk-on role. And not financial regulators, whose focus was on the strength of individual financial institutions.
Yet this policy neglect has since shown itself to be far from benign. The lessons of financial history have been painfully re-taught since 2008. They need not be forgotten again. This has important implications for the economics profession and for the teaching of economics. For one, it underscores the importance of sub-disciplines such as economic and financial history. As Galbraith said, "There can be few fields of human endeavor in which history counts for so little as in the world of finance." Economics can ill afford to re-commit that crime.
Second, it underlines the importance of reinstating money, credit and banking in the core curriculum, as well as refocusing on models of the interplay between economic and financial systems. These are areas that also fell out of fashion during the pre-crisis boom.
Third, the crisis showed that institutions really matter, be it commercial banks or central banks, when making sense of crises, their genesis and aftermath. They too were conveniently, but irresponsibly, airbrushed out of workhorse models. They now needed to be repainted back in.
The second strain of intellectual virus is a new, more virulent one. This has been made dangerous by increased integration of markets of all types, economic, but especially financial and social. In a tightly woven financial and social web, the contagious consequences of a single event can thus bring the world to its knees. That was the Lehman Brothers story.
These cliff-edge dynamics in socioeconomic systems are becoming increasingly familiar. Social dynamics around the Arab Spring in many ways closely resembled financial system dynamics following the failure of Lehman Brothers four years ago. Both are complex, adaptive networks. When gripped by fear, such systems are known to behave in a highly non-linear fashion due to cascading actions and reactions among agents. These systems exhibit a robust yet fragile property: swan-like serenity one minute, riot-like calamity the next.
These dynamics do not emerge from most mainstream models of the financial system or real economy. The reason is simple. The majority of these models use the framework of a single representative agent (or a small number of them). That effectively neuters the possibility of complex actions and interactions between agents shaping system dynamics.
The financial system is an archetypical complex, adaptive socioeconomic system – and has become more so over time. In the early years of this century, financial chains lengthened dramatically, system-wide maturity mismatches widened alarmingly and intrafinancial system claims ballooned exponentially. The system became, in consequence, a hostage to its weakest link. When that broke, so too did the system as a whole. Communications networks and social media then propagated fear globally.
Conventional models, based on the representative agent and with expectations mimicking fundamentals, had no hope of capturing these system dynamics. They are fundamentally ill-suited to capturing today’s networked world, in which social media shape expectations, shape behavior and thus shape outcomes.
This calls for an intellectual reinvestment in models of heterogeneous, interacting agents, an investment likely to be every bit as great as the one that economists have made in DSGE models over the past 20 years. Agent-based modeling is one, but only one, such avenue. The construction and simulation of highly non-linear dynamics in systems of multiple equilibria represents unfamiliar territory for most economists. But this is not a journey into the unknown. Sociologists, physicists, ecologists, epidemiologists and anthropologists have for many years sought to understand just such systems. Following their footsteps will require a sense of academic adventure sadly absent in the pre-crisis period.
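To see the kind of cascade dynamics Haldane has in mind, here is a minimal sketch of default contagion on a toy interbank network. Everything here is invented for illustration - the balance sheets, the exposures, and the loss-given-default parameter come from nowhere in his article:

```python
def contagion(capital, exposure, first, lgd=1.0):
    """Toy interbank contagion: exposure[i][j] is what bank i is owed by
    bank j. When a bank defaults, each creditor writes off lgd times its
    exposure; a creditor whose cumulative writedowns exceed its capital
    defaults in turn."""
    n = len(capital)
    losses = [0.0] * n
    failed = {first}
    frontier = [first]
    while frontier:
        j = frontier.pop()
        for i in range(n):
            if i not in failed and exposure[i][j] > 0:
                losses[i] += lgd * exposure[i][j]
                if losses[i] > capital[i]:
                    failed.add(i)
                    frontier.append(i)
    return sorted(failed)

# Four banks in a credit chain with thin capital buffers: one default
# topples every link -- the system is hostage to its weakest link.
capital = [2.0, 2.0, 2.0, 2.0]
exposure = [[0, 3, 0, 0],
            [0, 0, 3, 0],
            [0, 0, 0, 3],
            [3, 0, 0, 0]]
print(contagion(capital, exposure, first=2))   # all four banks fail
# With capital = [4.0] * 4 the same shock stays contained: the
# robust-yet-fragile property in miniature.
```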

Wednesday, August 22, 2012

'Economics in Denial'

Howard Davies:

Economics in Denial, by Howard Davies, Commentary, Project Syndicate: In an exasperated outburst, just before he left the presidency of the European Central Bank, Jean-Claude Trichet complained that, “as a policymaker during the crisis, I found the available [economic and financial] models of limited help. In fact, I would go further: in the face of the crisis, we felt abandoned by conventional tools.” ... It was a ... serious indictment of the economics profession, not to mention all those extravagantly rewarded finance professors in business schools from Harvard to Hyderabad. ...
But it is not clear that a majority of the profession yet accepts [this]... The so-called “Chicago School” has mounted a robust defense of its rational expectations-based approach, rejecting the notion that a rethink is required. The Nobel laureate economist Robert Lucas has argued that the crisis was not predicted because economic theory predicts that such events cannot be predicted. So all is well. ...
We should not focus attention exclusively on economists, however. Arguably the elements of the conventional intellectual toolkit found most wanting are the capital asset pricing model and its close cousin, the efficient-market hypothesis. Yet their protagonists see no problems to address.
On the contrary, the University of Chicago’s Eugene Fama has described the notion that finance theory was at fault as “a fantasy,” and argues that “financial markets and financial institutions were casualties rather than causes of the recession.” And the efficient-market hypothesis that he championed cannot be blamed...
Fortunately, others in the profession ... have been chastened by the events of the last five years... They are working hard ... to develop new approaches...

There is resistance from the old guard, but I'm modestly optimistic. Some people are trying to ask, and answer, the right questions. However, it's a slow process.

Tuesday, July 31, 2012

New Old Keynesians

From the archives (September 2009), for no particular reason:

New Old Keynesians: There is no grand, unifying theoretical structure in economics. We do not have one model that rules them all. Instead, what we have are models that are good at answering some questions - the ones they were built to answer - and not so good at answering others.
If I want to think about inflation in the very long run, the classical model and the quantity theory are a very good guide. But that framework is not very good at looking at the short run. For questions about how output and other variables move over the business cycle and for advice on what to do about it, I find the Keynesian model in its modern form (i.e. the New Keynesian model) to be much more informative than other models that are presently available.
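As a minimal illustration of that long-run claim, the quantity-theory arithmetic fits in a few lines; the growth rates below are invented for the example:

```python
# Quantity theory: M * V = P * Y. In growth-rate (log-difference) form,
# inflation ~= money growth + velocity growth - real output growth.
def long_run_inflation(money_growth, output_growth, velocity_growth=0.0):
    """Illustrative quantity-theory arithmetic; inputs are annual rates."""
    return money_growth + velocity_growth - output_growth

# 7% money growth, 3% real growth, stable velocity -> roughly 4% inflation.
print(long_run_inflation(0.07, 0.03))
```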
But the New Keynesian model has its limits. It was built to capture "ordinary" business cycles driven by price sluggishness of the sort that can be captured by the Calvo model of price rigidity. The standard versions of this model do not explain how financial collapses of the type we just witnessed come about, hence they have little to say about what to do about them (which makes me suspicious of the results touted by people using multipliers derived from DSGE models based upon ordinary price rigidities). For these types of disturbances, we need some other type of model, but it is not clear what model is needed. There is no generally accepted model of financial catastrophe that captures the variety of financial market failures we have seen in the past.
But what model do we use? Do we go back to old Keynes, to the 1978 model that Robert Gordon likes, do we take some of the variations of the New Keynesian model that include effects such as financial accelerators and try to enhance those, is that the right direction to proceed? Are the Austrians right? Do we focus on Minsky? Or do we need a model that we haven't discovered yet?
We don't know, and until we do, I will continue to use the model I think gives the best answer to the question being asked. The reason that many of us looked backward to the IS-LM model to help us understand the present crisis is that none of the current models were capable of explaining what we were going through. The models were largely constructed to analyze policy in the context of a Great Moderation, i.e. within a fairly stable environment. They had little to say about financial meltdown. My first reaction was to ask if the New Keynesian model had any derivative forms that would allow us to gain insight into the crisis and what to do about it and, while there were some attempts in that direction, the work was somewhat isolated and had not gone through the type of thorough analysis needed to develop robust policy prescriptions. There was something to learn from these models, but they really weren't up to the task of delivering specific answers. That may come, but we aren't there yet.
So, if nothing in the present is adequate, you begin to look to the past. The Keynesian model was constructed to look at exactly the kinds of questions we needed to answer, and as long as you are aware of the limitations of this framework - the ones that modern theory has discovered - it does provide you with a means of thinking about how economies operate when they are running at less than full employment. This model had already worried about fiscal policy at the zero interest rate bound, it had already thought about Say's law, the paradox of thrift, monetary versus fiscal policy, changing interest and investment elasticities in a crisis, etc., etc., etc. We were in the middle of a crisis and didn't have time to wait for new theory to be developed, we needed answers, answers that the elegant models that had been constructed over the last few decades simply could not provide. The Keynesian model did provide answers. We knew the answers had limitations - we were aware of the theoretical developments in modern macro and what they implied about the old Keynesian model - but it also provided guidance at a time when guidance was needed, and it did so within a theoretical structure that was built to be useful at times like those we were facing. I wish we had better answers, but we didn't, so we did the best we could. And the best we could involved at least asking what the Keynesian model would tell us, and then asking if that advice has any relevance today. Sometimes it didn't, but that was no reason to ignore the answers when it did.
[So, depending on the question being asked, I am a New Keynesian, an Old Keynesian, a Classicist, etc. But as noted here, if you are going to take guidance from the older models it is essential that you understand their limitations -- these models should not be used without a thorough knowledge of the pitfalls involved and where they can and cannot be avoided -- the kind of knowledge someone like Paul Krugman surely has at hand.]

Saturday, July 21, 2012

Plosser: Macro Models and Monetary Policy Analysis

Charles Plosser, President of the Philadelphia Fed, explains the limitations of DSGE models, particularly models of the New Keynesian variety used for policy analysis. (He doesn't reject the DSGE methodology, and that will disappoint some, but he does raise good questions about this class of models. I believe the macroeconomics literature is going to fully explore these micro-founded, forward looking, optimizing models whether critics like it or not, so we may as well get on with it. The questions raised below help to clarify the direction the research should take, and in the end the models will either prove worthy, or be cast aside. In the meantime, I hope the macroeconomics profession will become more open to alternative ideas/models than it has been in the recent past, but I doubt the humility needed for that to happen has taken hold despite all the problems with these models that were exposed by the housing and financial crises.):

Macro Models and Monetary Policy Analysis, by Charles I. Plosser, President and Chief Executive Officer, Federal Reserve Bank of Philadelphia, Bundesbank — Federal Reserve Bank of Philadelphia Spring 2012 Research Conference, Eltville, Germany, May 25, 2012: Introduction ...After spending over 30 years in academia, I have served the last six years as a policymaker trying to apply what economics has taught me. Needless to say, I picked a challenging time to undertake such an endeavor. But I have learned that, despite the advances in our understanding of economics, a number of issues remain unresolved in the context of modern macro models and their use for policy analysis. In my remarks today, I will touch on some issues facing policymakers that I believe state-of-the-art macro models would do well to confront. Before continuing, I should note that I speak for myself and not the Federal Reserve System or my colleagues on the Federal Open Market Committee.

More than 40 years ago, the rational expectations revolution in macroeconomics helped to shape a consensus among economists that only unanticipated shifts in monetary policy can have real effects. According to this consensus, only monetary surprises affect the real economy in the short to medium run because consumers, workers, employers, and investors cannot respond quickly enough to offset the effect of these policy actions on consumption, the labor market, and investment.1

But over the years this consensus view on the transmission mechanism of monetary policy to the real economy has evolved. The current generation of macro models, referred to as New Keynesian DSGE models,2 relies on real and nominal frictions to transmit not only unanticipated but also systematic changes in monetary policy to the economy. Unexpected monetary shocks drive movements in output, consumption, investment, hours worked, and employment in DSGE models. However, in contrast to the earlier literature, it is the relevance of systematic movements in monetary policy that makes these models of so much interest for policy analysis. Systematic policy changes are represented in these models by Taylor-type rules, in which the policy interest rate responds to changes in inflation and a measure of real activity, such as output growth. Armed with forecasts of inflation and output growth, a central bank can assess the impact that different policy rate paths may have on the economy. The ability to do this type of policy analysis helps explain the widespread use of New Keynesian DSGE models at central banks around the world.
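
To fix ideas, here is a minimal sketch of such a Taylor-type rule. The coefficients are the conventional illustrative values (1.5 on the inflation gap, 0.5 on the output gap), not parameters from any model Plosser discusses:

```python
def taylor_rule(inflation, output_gap, r_star=0.02, pi_star=0.02,
                phi_pi=1.5, phi_y=0.5):
    """Taylor-type reaction function: the policy rate rises more than
    one-for-one with inflation (phi_pi > 1, the 'Taylor principle')
    and leans against the output gap."""
    return r_star + pi_star + phi_pi * (inflation - pi_star) + phi_y * output_gap

# 3% inflation and a +1% output gap imply a policy rate of about 6%.
print(taylor_rule(inflation=0.03, output_gap=0.01))
```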

These modern macro models stress the importance of credibility and systematic policy, as well as forward-looking rational agents, in the determination of economic outcomes. In doing so, they offer guidance to policymakers about how to structure policies that will improve the policy framework and, therefore, economic performance. Nonetheless, I think there is room for improving the models and the advice they deliver on policy options. Before discussing several of these improvements, it is important to appreciate the “rules of the game” of the New Keynesian DSGE framework.

The New Keynesian Framework

New Keynesian DSGE models are the latest update to real business cycle, or RBC, theory. Given my own research in this area, it probably does not surprise many of you that I find the RBC paradigm a useful and valuable platform on which to build our macroeconomic models.3 One goal of real business cycle theory is to study the predictions of dynamic general equilibrium models, in which optimizing and forward-looking consumers, workers, employers, and investors are endowed with rational expectations. A shortcoming many see in the simple real business cycle model is its difficulty in internally generating persistent changes in output and employment from a transitory or temporary external shock to, say, productivity.4 The recognition of this problem has inspired variations on the simple model, of which the New Keynesian revival is an example.

The approach taken in these models is to incorporate a structure of real and nominal frictions into the real business cycle framework. These frictions are placed in DSGE models, in part, to make real economic activity respond to anticipated and unanticipated changes in monetary policy, at least, in the short to medium run. The real frictions that drive internal propagation of monetary policy often include habit formation in consumption, that is, how past consumption influences current consumption; the costs of capital used in production; and the resources expended by adding new investment to the existing stock of capital. New Keynesian DSGE models also include the costs faced by monopolistic firms and households when setting their prices and nominal wages. A nominal friction often assumed in Keynesian DSGE models is that firms and households have to wait a fixed interval of time before they can reset their prices and wages in a forward-looking, optimal manner. A rule of the game in these models is that the interactions of these nominal frictions with real frictions give rise to persistent monetary nonneutralities over the business cycle.5 It is this monetary transmission mechanism that makes the New Keynesian DSGE models attractive to central banks.
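
The best-known formalization of that waiting friction is Calvo pricing, in which the wait is random rather than literally fixed: each period a firm gets to reset its price with probability 1 - theta. A minimal sketch of its arithmetic, using a common textbook calibration rather than any number from the speech:

```python
def expected_price_duration(theta):
    """theta is the per-period probability that a firm is NOT allowed to
    reset its price; spells between resets are geometric, so the mean
    spell is 1 / (1 - theta)."""
    return 1.0 / (1.0 - theta)

# theta = 0.75 at a quarterly frequency -> prices last 4 quarters on average.
print(expected_price_duration(0.75))   # 4.0
```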

An assumption of these models is that the structure of these real and nominal frictions, which transmit changes in monetary policy to the real economy, well approximates the true underlying rigidities of the actual economy and is not affected by changes in monetary policy. This assumption implies that the frictions faced by consumers, workers, employers, and investors cannot be eliminated at any price they might be willing to pay. Although the actors in actual economies probably recognize the incentives they have to innovate — think of the strategy of continuous online pricing for many goods and services — or to seek insurance to minimize the costs of the frictions, these actions and markets are ruled out by the “rules of the game” of New Keynesian DSGE modeling.

Another important rule of the game prescribes that monetary policy is represented by an interest rate or Taylor-type reaction function that policymakers are committed to follow and that everyone believes will, in fact, be followed. This ingredient of New Keynesian DSGE models most often commits a central bank to increase its policy rate when inflation or output rises above the target set by the central bank. And this commitment is assumed to be fully credible according to the rules of the game of New Keynesian DSGE models. Policy changes are then evaluated as deviations from the invariant policy rule to which policymakers are credibly committed.

The Lucas Critique Revisited with Respect to New Keynesian DSGE Models

In my view, the current rules of the game of New Keynesian DSGE models run afoul of the Lucas critique — a seminal work for my generation of macroeconomists and for each generation since.6 The Lucas critique teaches us that to do policy analysis correctly, we must understand the relationship between economic outcomes and the beliefs of economic agents about the policy regime. Equally important is the Lucas critique’s warning against using models whose structure changes with the alternative government policies under consideration.7 Policy changes are almost never once and for all. So, many economists would argue that an economic model that maps states of the world to outcomes but that does not model how policy shifts across alternative regimes would fail the Lucas critique because it would not be policy invariant.8 Instead, economists could better judge the effects of competing policy options by building models that account for the way in which policymakers switch between alternative policy regimes as economic circumstances change.9

For example, I have always been uncomfortable with the New Keynesian model’s assumption that wage and price setters have market power but, at the same time, are unable or unwilling to change prices in response to anticipated and systematic shifts in monetary policy. This suggests that the deep structure of nominal frictions in New Keynesian DSGE models should do more than measure the length of time that firms and households wait for a chance to reset their prices and wages.10 Moreover, it raises questions about the mechanism by which monetary policy shocks are transmitted to the real economy in these models.

I might also note here that the evidence from micro data on price behavior is not particularly consistent with the implications of the usual staggered price-setting assumptions in these models.11 When the real and nominal frictions of New Keynesian models do not reflect the incentives faced by economic actors in actual economies, these models violate the Lucas critique’s policy invariance dictum, and thus, the policy advice these models offer must be interpreted with caution.

From a policy perspective, the assumption that a central bank can always and everywhere credibly commit to its policy rule is, I believe, also questionable. While it is desirable for policymakers to do so — and in practice, I seek ways to make policy more systematic and more credible — commitment is a luxury few central bankers ever actually have, and fewer still faithfully follow.

During the 1980s and 1990s, it was quite common to hear in workshops and seminars the criticism that a model didn’t satisfy the Lucas critique. I thought this was often a cheap shot because almost no model satisfactorily dealt with the issue. And during a period when the policy regime was apparently fairly stable — which many argued it mostly was during those years — the failure to satisfy the Lucas critique seemed somewhat less troublesome. However, in my view, throughout the crisis of the last few years and its aftermath, the Lucas critique has become decidedly more relevant. Policy actions have become increasingly discretionary. Moreover, the financial crisis and associated policy responses have left many central banks operating with their policy rate near the zero lower bound; this means that they are no longer following a systematic rule, if they ever were. Given that central bankers are, in fact, acting in a discretionary manner, whether it is because they are at the zero bound or because they cannot or will not commit, how are we to interpret policy advice coming from models that assume full commitment to a systematic rule? I think this point is driven home by noting that a number of central banks have been openly discussing different regimes, from price-level targeting to nominal GDP targeting. In such an environment where policymakers actively debate alternative regimes, how confident can we be about the policy advice that follows from models in which that is never contemplated?

Some Directions for Furthering the Research Agenda

While I have been pointing out some limitations of DSGE models, I would like to end my remarks with six suggestions I believe would be fruitful for the research agenda.

First, I believe we should work to give the real and nominal frictions that underpin the monetary propagation mechanism of New Keynesian DSGE models deeper and more empirically supported structural foundations. There is already much work being done on this in the areas of search models applied to labor markets and studies of the behavior of prices at the firm level. Many of you at this conference have made significant contributions to this literature.

Second, on the policy dimension, the impact of the zero lower bound on central bank policy rates remains, as a central banker once said, a conundrum. The zero lower bound introduces nonlinearity into the analysis of monetary policy that macroeconomists and policymakers still do not fully understand. New Keynesian models have made some progress in solving this problem,12 but a complete understanding of the zero bound conundrum involves recasting a New Keynesian DSGE model to show how it can provide an economically meaningful story of the set of shocks, financial markets, and frictions that explain the financial crisis, the resulting recession, and the weak recovery that has followed. This might be asking a lot, but a good challenge usually produces extraordinary research.
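
A minimal illustration of that nonlinearity, reusing the illustrative rule sketched earlier: the zero lower bound turns a linear reaction function into a kinked one, because the desired rate can go negative while the actual rate cannot.

```python
def policy_rate_with_zlb(inflation, output_gap):
    # Same illustrative coefficients as the earlier Taylor-rule sketch.
    desired = 0.02 + 0.02 + 1.5 * (inflation - 0.02) + 0.5 * output_gap
    return max(0.0, desired)           # the ZLB introduces the kink

# In a deep slump the rule 'wants' a negative rate but is stuck at zero:
print(policy_rate_with_zlb(inflation=-0.01, output_gap=-0.06))
# desired rate is about -3.5%, actual rate is 0.0
```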

Third, we must make progress in our analysis of credibility and commitment. The New Keynesian framework mostly assumes that policymakers are fully credible in their commitment to a specified policy rule. If that is not the case in practice, how do policymakers assess the policy advice these models deliver? Policy at the zero lower bound is a leading example of this issue. According to the New Keynesian model, zero lower bound policies rely on policymakers guiding the public’s expectations of when an initial interest rate increase will occur in the future. If the credibility of this forward guidance is questioned, evaluation of the zero lower bound policy has to account for the public's beliefs that commitment to this policy is incomplete. I have found that policymakers like to presume that their policy actions are completely credible and then engage in decisions accordingly. Yet if that presumption is wrong, those policies will not have the desired or predicted outcomes. Is there a way to design and estimate policy responses in such a world? Can reputational models be adapted for this purpose?

Fourth, and related, macroeconomists need to consider how to integrate the institutional design of central banks into our macroeconomic models. Different designs permit different degrees of discretion for a central bank. For example, responsibility for setting monetary policy is often delegated by an elected legislature to an independent central bank. However, the mandates given to central banks differ across countries. The Fed is often said to have a dual mandate; some banks have a hierarchical mandate; and others have a single mandate. Yet economists endow their New Keynesian DSGE models with strikingly uniform Taylor-type rules, always assuming complete credibility. Policy analysis might be improved by considering the institutional design of central banks and how it relates to the ability to commit and the specification of the Taylor-type rules that go into New Keynesian models. Central banks with different levels of discretion will respond differently to the same set of shocks.

Let me offer a slightly different take on this issue. Policymakers are not Ramsey social planners. They are individuals who respond to incentives like every other actor in the economy. Those incentives are often shaped by the nature of the institutions in which they operate. Yet the models we use often ignore both the institutional environment and the rational behavior of policymakers. The models often ask policymakers to undertake actions that run counter to the incentives they face. How should economists then think about the policy advice their models offer and the outcomes they should expect? How should we think about the design of our institutions? This is not an unexplored arena, but if we are to take the policy guidance from our models seriously, we must think harder about such issues in the context of our models.

This leads to my fifth suggestion. Monetary theory has given a great deal of thought to rules and credibility in the design of monetary policy, but the recent crisis suggests that we need to think more about the design of lender-of-last-resort policy and the institutional mechanism for its execution. Whether to act as the lender of last resort is discretionary, but does it have to be so? Are there ways to make it more systematic ex ante? If so, how?

My sixth and final thought concerns moral hazard, which is addressed in only a handful of models. Moral hazard looms large when one thinks about lender-of-last-resort activities. But it is also a factor when monetary policy uses discretion to deviate from its policy rule. If the central bank has credibility that it will return to the rule once it has deviated, this may not be much of a problem. On the other hand, a central bank with less credibility, or no credibility, may run the risk of inducing excessive risk-taking. An example of this might be the so-called “Greenspan put,” in which the markets perceived that when asset prices fell, the Fed would respond by reducing interest rates. Do monetary policy actions that appear to react to the stock market induce moral hazard and excessive risk-taking? Does having lender-of-last-resort powers influence the central bank’s monetary policy decisions, especially at moments when it is not clear whether the economy is in the midst of a financial crisis? Does the combination of lender-of-last-resort responsibilities with discretionary monetary policy create moral hazard perils for a central bank, encouraging it to take riskier actions? I do not know the answer to these questions, but addressing them and the other challenges I have mentioned with New Keynesian DSGE models should prove useful for evaluating the merits of different institutional designs for central banks.

Conclusion

The financial crisis and recession have raised new challenges for policymakers and researchers. The degree to which policy actions, for better or worse, have become increasingly discretionary should give us pause as we try to evaluate policy choices in the context of the workhorse New Keynesian framework, especially given its assumption of credibly committed policymakers. Indeed, the Lucas critique would seem to take on new relevance in this post-crisis world. Central banks need to ask if discretionary policies can create incentives that fundamentally change the actions and expectations of consumers, workers, firms, and investors. Characterizing policy in this way also raises issues of whether the institutional design of central banks matters for evaluating monetary policy. I hope my comments today encourage you, as well as the wider community of macroeconomists, to pursue these research questions that are relevant to our efforts to improve our policy choices.


Wednesday, July 11, 2012

Arrogance and Self-Satisfaction among Macroeconomists???

Simon Wren-Lewis:

Crisis, what crisis? Arrogance and self-satisfaction among macroeconomists, by Simon Wren-Lewis: My recent post on economics teaching has clearly upset a number of bloggers. There I argued that the recent crisis has not led to a fundamental rethink of macroeconomics. Mainstream macroeconomics has not decided that the Great Recession implies that some chunk of what we used to teach is clearly wrong and should be jettisoned as a result. To some that seems self-satisfied, arrogant and profoundly wrong. ...
Let me be absolutely clear that I am not saying that macroeconomics has nothing to learn from the financial crisis. What I am suggesting is that when those lessons have been learnt, the basics of the macroeconomics we teach will still be there. For example, it may be that we need to endogenise the difference between the interest rate set by monetary policy and the interest rate actually paid by firms and consumers, relating it to asset prices that move with the cycle. But if that is the case, this will build on our current theories of the business cycle. Concepts like aggregate demand, and within the mainstream, the natural rate, will not disappear. We clearly need to take default risk more seriously, and this may lead to more use of models with multiple equilibria (as suggested by Chatelain and Ralf, for example). However, this must surely use the intertemporal optimising framework that is the heart of modern macro.
Why do I want to say this? Because what we already have in macro remains important, valid and useful. What I see happening today is a struggle between those who want to use what we have, and those that want to deny its applicability to the current crisis. What we already have was used (imperfectly, of course) when the financial crisis hit, and analysis clearly suggests this helped mitigate the recession. Since 2010 these positive responses have been reversed, with policymakers around the world using ideas that contradict basic macro theory, like expansionary austerity. In addition, monetary policy makers appear to be misunderstanding ideas that are part of that theory, like credibility. In this context, saying that macro is all wrong and we need to start again is not helpful.
I also think there is a danger in the idea that the financial crisis might have been avoided if only we had better technical tools at our disposal. (I should add that this is not a mistake most heterodox economists would make.) ... The financial crisis itself is not a deeply mysterious event. Look now at the data on leverage that we had at the time but that too few people looked at before the crisis, and the immediate reaction has to be that this cannot go on. So the interesting question for me is how those that did look at this data managed to convince themselves that, to use the title from Reinhart and Rogoff’s book, this time was different.
One answer was that they were convinced by economic theory that turned out to be wrong. But it was not traditional macro theory – it was theories from financial economics. And I’m sure many financial economists would argue that those theories were misapplied. Like confusing new techniques for handling idiosyncratic risk with the problem of systemic risk, for example. Believing that evidence of arbitrage also meant that fundamentals were correctly perceived. In retrospect, we can see why those ideas were wrong using the economics toolkit we already have. So why was that not recognised at the time? I think the key to answering this does not lie in any exciting new technique from physics or elsewhere, but in political science.
To understand why regulators and others missed the crisis, I think we need to recognise the political environment at the time, which includes the influence of the financial sector itself. And I fear that the academic sector was not exactly innocent in this either. A simplistic take on economic theory (mostly micro theory rather than macro) became an excuse for rent seeking. The really big question of the day is not what is wrong with macro, but why has the financial sector grown so rapidly over the last decade or so. Did innovation and deregulation in that sector add to social welfare, or make it easier for that sector to extract surplus from the rest of the economy? And why are there so few economists trying to answer that question?

I have so many posts on the state of modern macro that it's hard to know where to begin, but here's a pretty good summary of my views on this particular topic:

I agree that the current macroeconomic models are unsatisfactory. The question is whether they can be fixed, or if it will be necessary to abandon them altogether. I am okay with seeing if they can be fixed before moving on. It's a step that's necessary in any case. People will resist moving on until they know this framework is a dead end, so the sooner we come to a conclusion about that, the better.
As just one example, modern macroeconomic models do not generally connect the real and the financial sectors. That is, in standard versions of the modern model linkages between the disintegration of financial intermediation and the real economy are missing. Since these linkages provide an important transmission mechanism whereby shocks in the financial sector can affect the real economy, and these are absent from models such as Eggertsson and Woodford, how much credence should I give the results? Even the financial accelerator models (which were largely abandoned because they did not appear to be empirically powerful, and hence were not part of the standard model) do not fully link these sectors in a satisfactory way, yet these connections are crucial in understanding why the crash caused such large economic effects, and how policy can be used to offset them. [e.g. see Woodford's comments, "recent events have made it clear that financial issues need to be integrated much more thoroughly into the basic framework for macroeconomic analysis with which students are provided."]
There are many technical difficulties with connecting the real and the financial sectors. Again, to highlight just one aspect of a much, much larger list of issues that will need to be addressed, modern models assume a representative agent. This assumption overcomes difficult problems associated with aggregating individual agents into macroeconomic aggregates. When this assumption is dropped it becomes very difficult to maintain adequate microeconomic foundations for macroeconomic models (setting aside the debate over the importance of doing this). But representative (single) agent models don't work very well as models of financial markets. Identical agents with identical information and identical outlooks have no motivation to trade financial assets (I sell because I think the price is going down, you buy because you think it's going up; with identical forecasts, the motivation to trade disappears). There needs to be some type of heterogeneity in the model, even if just over information sets, and that causes the technical difficulties associated with aggregation. However, with that said, there have already been important inroads into constructing these models (e.g. see Rajiv Sethi's discussion of John Geanakoplos' Leverage Cycles). So while I'm pessimistic, it's possible this and other problems will be overcome.
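A toy numerical illustration of the no-trade point above (the demand function here is stylized, not from any particular model): when desired positions depend on forecasts relative to the price, identical forecasts put every agent on the same side of the market, so no trades can clear.

```python
def desired_position(forecast, price, risk_aversion=2.0):
    """Stylized asset demand: go long when you expect the price to rise,
    short when you expect it to fall."""
    return (forecast - price) / risk_aversion

price = 100.0
identical = [desired_position(102.0, price) for _ in range(5)]
mixed = [desired_position(f, price) for f in (98.0, 101.0, 104.0)]
print(identical)   # [1.0, 1.0, 1.0, 1.0, 1.0] -- everyone wants to buy
print(mixed)       # [-1.0, 0.5, 2.0] -- disagreement creates a seller
```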
But there's no reason to wait until we know for sure if the current framework can be salvaged before starting the attempt to build a better model within an entirely different framework. Both can go on at the same time. What I hope will happen is that some macroeconomists will show more humility than they've shown to date. That they will finally accept that the present model has large shortcomings that will need to be overcome before it will be as useful as we'd like. I hope that they will admit that it's not at all clear that we can fix the model's problems, and realize that some people have good reason to investigate alternatives to the standard model. The advancement of economics is best served when alternatives are developed and issued as challenges to the dominant theoretical framework, and there's no reason to deride those who choose to do this important work.
So, in answer to those who objected to my defending modern macro, you are partly right. I do think the tools and techniques macroeconomists use have value, and that the standard macro model in use today represents progress. But I also think the standard macro model used for policy analysis, the New Keynesian model, is unsatisfactory in many ways and I'm not sure it can be fixed. Maybe it can, but that's not at all clear to me. In any case, in my opinion the people who have strong, knee-jerk reactions whenever someone challenges the standard model in use today are the ones standing in the way of progress. It's fine to respond academically, a contest between the old and the new is exactly what we need to have, but the debate needs to be over ideas rather than an attack on the people issuing the challenges.

This post of an email from Mark Gertler in July 2009 argues that modern macro has been mis-characterized:

The current crisis has naturally led to scrutiny of the economics profession. The intensity of this scrutiny ratcheted up a notch with the Economist’s interesting cover story this week on the state of academic economics.
I think some of the criticism has been fair. The Great Moderation gave many in the profession the false sense that we had handled the problem of the business cycle as well as we could. Traditional applied macroeconomic research on booms and busts and macroeconomic policy fell into something of a second class status within the field in favor of more exotic topics.
At the same time, from the discussion thus far, I don’t think the public is getting the full picture of what has been going on in the profession. From my vantage, there has been lots of high quality “middle ground” modern macroeconomic research that has been relevant to understanding and addressing the current crisis.
Here I think, though, that both the mainstream media and the blogosphere have been confusing a failure to anticipate the crisis with a failure to have the research available to comprehend it. Predicting the crisis would have required foreseeing the risks posed by the shadow banking system, which were missed not only by academic economists, but by just about everyone else on the planet (including the ratings agencies!).
But once the crisis hit, broadly speaking, policy-makers at the Federal Reserve made use of academic research on financial crises to help diagnose the situation and design the policy response. Research on monetary and fiscal policy when the nominal interest rate is at the zero lower bound has also been relevant. Quantitative macro models that incorporate financial factors, which existed well before the crisis, are rapidly being updated in light of new insights from the unfolding of recent events. Work on fiscal policy, which admittedly had been somewhat dormant, is now proceeding at a rapid pace.
Bottom line: As happened in both the wake of the Great Depression and the Great Stagflation, economic research is responding. In this case, the time lag will be much shorter given the existing base of work to build on. Revealed preference confirms that we still have something useful to offer: Demand for our services by the ultimate consumers of modern applied macro research – policy makers and staff at central banks – seems to be higher than ever.
Mark Gertler,
Henry and Lucy Moses Professor of Economics
New York University
[I ... also posted a link to his Mini-Course, "Incorporating Financial Factors Within Macroeconomic Modelling and Policy Analysis"... This course looks at recent work on integrating financial factors into macro modeling, and is a partial rebuttal to the assertion above that New Keynesian models do not have mechanisms built into them that can explain the financial crisis. ...]

Again, it wasn't the tools and techniques we use; we were asking the wrong questions. As I've argued many times, we were trying to explain normal times, the Great Moderation. Many (e.g. Lucas) thought the problem of depressions due to, say, a breakdown in the financial sector had been solved, so why waste time on those questions? Stabilization policy was passé, and we should focus on growth instead. So, I would agree with Simon Wren-Lewis that "we need to recognise the political environment at the time." But as I argued in The Economist, we also have to think about the sociology within the profession that worked against the pursuit of these ideas.

Perhaps Ricardo Caballero says it better, so let me turn it over to him. From a post in late 2010:

Caballero says "we should be in “broad-exploration” mode." I can hardly disagree since that's what I meant when I said "While I think we should see if the current models and tools can be amended appropriately to capture financial crises such as the one we just had, I am not as sure as [Bernanke] is that this will be successful and I'd like to see [more] openness within the profession to a simultaneous investigation of alternatives."

Here's a bit more from the introduction to the paper:

The recent financial crisis has damaged the reputation of macroeconomics, largely for its inability to predict the impending financial and economic crisis. To be honest, this inability to predict does not concern me much. It is almost tautological that severe crises are essentially unpredictable, for otherwise they would not cause such a high degree of distress... What does concern me of my discipline, however, is that its current core—by which I mainly mean the so-called dynamic stochastic general equilibrium approach—has become so mesmerized with its own internal logic that it has begun to confuse the precision it has achieved about its own world with the precision that it has about the real one. ...
To be fair to our field, an enormous amount of work at the intersection of macroeconomics and corporate finance has been chasing many of the issues that played a central role during the current crisis, including liquidity evaporation, collateral shortages, bubbles, crises, panics, fire sales, risk-shifting, contagion, and the like.1 However, much of this literature belongs to the periphery of macroeconomics rather than to its core. Is the solution then to replace the current core for the periphery? I am tempted—but I think this would address only some of our problems. The dynamic stochastic general equilibrium strategy is so attractive, and even plain addictive, because it allows one to generate impulse responses that can be fully described in terms of seemingly scientific statements. The model is an irresistible snake-charmer. In contrast, the periphery is not nearly as ambitious, and it provides mostly qualitative insights. So we are left with the tension between a type of answer to which we aspire but that has limited connection with reality (the core) and more sensible but incomplete answers (the periphery).
This distinction between core and periphery is not a matter of freshwater versus saltwater economics. Both the real business cycle approach and its New Keynesian counterpart belong to the core. ...
I cannot be sure that shifting resources from the current core to the periphery and focusing on the effects of (very) limited knowledge on our modeling strategy and on the actions of the economic agents we are supposed to model is the best next step. However, I am almost certain that if the goal of macroeconomics is to provide formal frameworks to address real economic problems rather than purely literature-driven ones, we better start trying something new rather soon. The alternative of segmenting, with academic macroeconomics playing its internal games and leaving the real world problems mostly to informal commentators and "policy" discussions, is not very attractive either, for the latter often suffer from an even deeper pretense-of-knowledge syndrome than do academic macroeconomists. ...

My main message is that yes, we need to push the DSGE structure as far as we can and see if it can be satisfactorily amended. Ask the right questions, and use the tools and techniques associated with modern macro to build the right models. But it's not at all clear that the DSGE methodology is up to the task, so let's not close our eyes to -- or, worse, actively block -- the search for alternative theoretical structures.

Tuesday, July 03, 2012

Physicists in Finance Should Pay More Attention to Economists

New column:

Physicists Can Learn from Economists, by Mark Thoma: After attending last year’s Economics Nobel Laureates Meeting in Lindau, Germany, I was very critical of what I heard from the laureates at the meeting. The conference is intended to bring graduate students together with the Nobel Prize winners to learn about fruitful areas for future research. Yet, with all the challenges the Great Recession posed for macroeconomic models, very little of the conference was devoted to anything related to the Great Recession. And when it did come up, the comments were “all over the map.” And some laureates, such as Ed Prescott, were particularly appalling, making very obvious political statements in the guise of economic analysis. I felt bad for the students who had come to the conference hoping to gain insight about where macroeconomics was headed.
I am back at the meetings this year, but the topic is physics, not economics, and it’s pretty clear that most physicists think they have nothing to learn from lowly economists. That’s true even when they are working on problems in economics and finance.
But they do have something to learn. ...

Saturday, June 30, 2012

'Macroeconomics and the Centrist Dodge'

I think I've made this point repeatedly, though I tend to use the term ideological instead of political, but just in case the message hasn't gotten through:

Macroeconomics and the Centrist Dodge, by Paul Krugman: Simon Wren-Lewis says something quite similar to my own view about the trouble with macroeconomics: it’s mostly political. And although Wren-Lewis bends over backwards to avoid saying it too bluntly, most – not all, but most – of the problem comes from the right. ...
By now, the centrist dodge ought to be familiar. A Very Serious, chin-stroking pundit argues that what we really need is a political leader willing to concede that while the economy needs short-run stimulus, we also need to address long-term deficits, and that addressing those long-term deficits will require both spending cuts and revenue increases. And then the pundit asserts that both parties are to blame for the absence of such leaders. What he absolutely won’t do is endanger his centrist credentials by admitting that the position he’s just outlined is exactly, exactly, the position of Barack Obama.
The macroeconomics equivalent looks like this: a concerned writer or speaker on economics bemoans the state of the field and argues that what we really need are macroeconomists who are willing to approach the subject with an open mind and change their views if the evidence doesn’t support their model. He or she concludes by scolding the macroeconomics profession in general, which is a nice safe thing to do – but requires deliberately ignoring the real nature of the problem.
For the fact is that it’s not hard to find open-minded macroeconomists willing to respond to the evidence. These days, they’re called Keynesians and/or saltwater macroeconomists. ...
Would Keynesians have been willing to change their views drastically if the experience of the global financial crisis had warranted such a change? I’d like to think so – but we’ll never know for sure, because the basic Keynesian view has in fact worked very well in the crisis.
But then there’s the other side – freshwater, equilibrium, more or less classical macro.
Recent events have been one empirical debacle after another for that view of the world – on interest rates, on inflation, on the effects of fiscal contraction. But the truth is that freshwater macro has been failing empirical tests for decades. Everywhere you turn there are anomalies that should have had that side of the profession questioning its premises, from the absence of the technology shocks that were supposed to drive business cycles, to the evident effectiveness of monetary policy, to the near-perfect correlation of nominal and real exchange rates.
But rather than questioning its premises, that side of the field essentially turned its back on evidence, calibrating its models rather than testing them, and refusing even to teach alternative views.
So there’s the trouble with macro: it’s basically political, and it’s mainly – not entirely, but mainly – coming from one side. Yet this truth is precisely what the critics won’t acknowledge, because that would endanger their comfortable position of scolding everyone equally. It is, in short, the centrist dodge carried over to conflict within economics.
Do we need better macroeconomics? Indeed we do. But we also need better critics, who are prepared to take the risk of actually taking sides for good economics and against dogmatism.

Before adding a few comments, I want to be careful to distinguish the "Keynesianism" discussed above from the New Keynesian model. I'll end up rejecting the standard NK model, but in doing so I am not rejecting Keynesian concepts. As Krugman summarizes, these are things like "the concept of the liquidity trap..., acceptance ... that wages are downwardly rigid – and hence that the natural rate hypothesis breaks down at low inflation."

Let me start by noting that one of the best examples of a macroeconomic model being rejected that I know of is the New Classical model and its prediction that only unanticipated money matters for real variables such as employment and GDP. At first, Robert Barro and others thought the empirical evidence favored this model, but over time it became clear that both anticipated and unanticipated money matters. That is, the prediction was wrong and the model was rejected (it had other problems as well, e.g. difficulty explaining both the magnitude and duration of business cycles).
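
The flavor of those empirical tests can be seen in a small simulation; the data are entirely made up, and this is not Barro's actual regression. Split money growth into an anticipated component and a surprise, regress output on both, and ask whether the anticipated part gets a nonzero coefficient - the New Classical prediction is that it should not:

```python
import numpy as np

rng = np.random.default_rng(0)
T = 200
past_info = rng.normal(size=T)              # information known in advance
anticipated = 0.8 * past_info               # forecastable money growth
surprise = rng.normal(scale=0.5, size=T)    # unforecastable money growth

# Simulate an economy in which BOTH components move output, contrary to
# the New Classical prediction that only the surprise should matter.
output = 0.6 * anticipated + 0.9 * surprise + rng.normal(scale=0.3, size=T)

X = np.column_stack([np.ones(T), anticipated, surprise])
beta, *_ = np.linalg.lstsq(X, output, rcond=None)
print(beta)   # slope on 'anticipated' is far from zero -> prediction fails
```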

However, the response has been interesting, and it proceeds along the political lines discussed above. Some economists just can't accept that money might matter, and therefore that the government (through the Fed) has an important role to play in managing the economy. And unfortunately, they have acted more like lawyers than scientists in their attempts to discredit New Keynesian and other models that have this implication. After all, markets work, and they work through movements in prices, so a sticky price NK model must be wrong. QED.

Now, it turns out that the New Keynesian model probably is wrong, or at least incomplete, but that's a view based upon evidence rather than ideology. Prior to the crisis, I was a fan of the NK model. Despite what those who couldn't let go of the markets-must-work point of view argued, I believed this model was better than any other model we had at explaining macroeconomic data. But while the NK model did an adequate job of explaining aggregate fluctuations and how monetary policy affects the economy in normal times with mild business cycle fluctuations, i.e. from the mid 1980s until recently, it did a downright lousy job of explaining the Great Recession. When it got pushed into new territory by the Great Recession, the Calvo-type price stickiness driving fluctuations in the NK model had little to say about the problems we were having and how to fix them.

Thus, from my point of view the Great Recession rejected the standard version of the NK model. Perhaps the model can be fixed by tacking on a financial sector and allowing financial intermediation breakdowns to impact the real economy -- there are models along these lines that people are working to improve -- we will have to see about that. A more general NK model that has one type of fluctuation in normal times -- the standard price stickiness effects -- and occasional large fluctuations from endogenous credit market breakdowns might do the trick (there were models of this type prior to the recession, but they weren't the standard in the profession, and they weren't well-integrated into the general NK structure). So we may be able to find a more general version of the model that can capture both normal and abnormal times. But, then again, we may not and, as I've said many times, we need to encourage the exploration of alternative theoretical structures.

But no matter what happens, some economists just won't accept a model that implies the government can do good through either monetary or fiscal policy, and they work very hard to construct alternatives that don't allow for this. There is less resistance to monetary policy: the evidence is hard to deny, so some of these economists will admit that monetary policy can affect the economy positively (so long as the Fed is an independent technocratic body). But fiscal policy is resisted no matter the theoretical and empirical evidence. They have their ideological/political views, and any model inconsistent with them must be wrong.

Update: Noah Smith responds to Paul Krugman here.

Friday, June 29, 2012

'Science' without Falsification

Bryan Caplan is tired of being sneered at by "high-status academic economists":

The Curious Ethos of the Academic/Appointee, by Bryan Caplan: High-status academic economists often look down on economists who engage in blogging and punditry. Their view: If you can't "definitively prove" your claims, you should remain silent. 

At the same time, though, high-status academic economists often receive top political appointments. Part of their job is to stand behind the administration's party line. They don't merely make claims they can't definitively prove; to keep their positions, appointees have to make claims they don't even believe! Yet high-status academic economists are proud to accept these jobs - and their colleagues admire them for doing so. ...

Noah Smith has something to say about "definitive proof":

"Science" without falsification is no science, by Noah Smith: Simon Wren-Lewis notes that although plenty of new macroeconomics has been added in response to the recent crisis/depression, nothing has been thrown out...

Four years after a huge deflationary shock with no apparent shock to technology, asset-pricing papers and labor search papers and international finance papers and even some business-cycle papers continue to use models in which business cycles are driven by technology shocks. No theory seems to have been thrown out. And these are young economists writing these papers, so it's not a generational effect. ...

If smart people don't agree, it may be because they are waiting for new evidence or because they don't understand each other's math. But if enough time passes and people are still having the same arguments they had a hundred years ago - as is exactly the case in macro today - then we have to conclude that very little is being accomplished in the field. The creation of new theories does not represent scientific progress until it is matched by the rejection of failed alternative theories.

The root problem here is that macroeconomics seems to have no commonly agreed-upon criteria for falsification of hypotheses. Time-series data - in other words, watching history go by and trying to pick out recurring patterns - does not seem to be persuasive enough to kill any existing theory. Nobody seems to believe in cross-country regressions. And there are basically no macro experiments. ...

So as things stand, macro is mostly a "science" without falsification. In other words, it is barely a science at all. Microeconomists know this. The educated public knows this. And that is why the prestige of the macro field is falling. The solution is for macroeconomists to A) admit their ignorance more often (see this Mankiw article and this Cochrane article for good examples of how to do this), and B) search for better ways to falsify macro theories in a convincing way.

I have a slightly different take on this. From a column last summer:

What Caused the Financial Crisis? Don’t Ask An Economist, by Mark Thoma: What caused the financial crisis that is still reverberating through the global economy? Last week’s 4th Nobel Laureate Meeting in Lindau, Germany – a meeting that brings Nobel laureates in economics together with several hundred young economists from all over the world – illustrates how little agreement there is on the answer to this important question.
Surprisingly, the financial crisis did not receive much attention at the conference. Many of the sessions on macroeconomics and finance didn’t mention it at all, and when it was finally discussed, the reasons cited for the financial meltdown were all over the map.
It was the banks, the Fed, too much regulation, too little regulation, Fannie and Freddie, moral hazard from too-big-to-fail banks, bad and intentionally misleading accounting, irrational exuberance, faulty models, and the ratings agencies. In addition, factors I view as important contributors to the crisis, such as the conditions that allowed troublesome runs on the shadow banking system after regulators let Lehman fail, were hardly mentioned.
Macroeconomic models have not fared well in recent years – the models didn’t predict the financial crisis and gave little guidance to policymakers, and I was anxious to hear the laureates discuss what macroeconomists need to do to fix them. So I found the lack of consensus on what caused the crisis distressing. If the very best economists in the profession cannot come to anything close to agreement about why the crisis happened almost four years after the recession began, how can we possibly address the problems? ...
How can some of the best economists in the profession come to such different conclusions? A big part of the problem is that macroeconomists have not settled on a single model of the economy, and the various models often deliver very different, contradictory advice on how to solve economic problems. The basic problem is that economics is not an experimental science. We use historical data rather than experimental data, and it’s possible to construct more than one model that explains the historical data equally well. Time and more data may allow us to settle on a particular model someday – as new data arrives it may favor one model over the other – but as long as this problem is present, macroeconomists will continue to hold opposing views and give conflicting advice.
This problem is not just of concern to macroeconomists; it has contributed to the dysfunction we are seeing in Washington as well. When Republicans need to find support for policies such as deregulation, they can enlist prominent economists – Nobel laureates perhaps – to back them up. Similarly, when Democrats need support for proposals to increase regulation, they can also count noted economists in their camp. If economists were largely unified, it would be harder for differences in Congress to persist, but unfortunately such unanimity is not generally present.
This divide in the profession also increases the possibility that the public will be sold false or misleading ideas intended to promote an ideological or political agenda.  If the experts disagree, how is the public supposed to know what to believe? They often don’t have the expertise to analyze policy initiatives on their own, so they rely on experts to help them. But when the experts disagree at such a fundamental level, the public can no longer trust what it hears, and that leaves it vulnerable to people peddling all sorts of crazy ideas.
When the recession began, I had high hopes that it would help us to sort between competing macroeconomic models. As noted above, it's difficult to choose one model over another because the models do equally well at explaining the past. But this recession is so unlike any event for which there is existing data that it pushes the models into new territory that tests their explanatory power (macroeconomic data does not exist prior to 1947 in most cases, so it does not include the Great Depression). But, disappointingly, even though I believe the data point clearly toward models that emphasize the demand side rather than the supply side as the source of our problems, the crisis has not propelled us toward a particular class of models as would be expected in a data-driven, scientific discipline. Instead, the two sides have dug in their heels and the differences – many of which have been aired in public – have become larger and more contentious than ever.

Finally, on the usefulness of microeconomic models for macroeconomists -- what is known as microfoundations -- see here: The Macroeconomic Foundations of Microeconomics.

Update: See here too: Why Economists Can't Agree, another column of mine from the past, and also Simon Wren-Lewis: What microeconomists think about macroeconomics.

Thursday, June 14, 2012

"Inflation Targeting is Dead"

Jeff Frankel takes up the question of inflation targeting versus nominal GDP targeting, and concludes that nominal GDP targeting has many advantages:

Nominal GDP Targeting Could Take the Place of Inflation Targeting, by Jeff Frankel: In my preceding blogpost, I argued that the developments of the last five years have sharply pointed up the limitations of Inflation Targeting (IT)...   But if IT is dead, what is to take its place as an intermediate target that central banks can use to anchor expectations?
The leading candidate to take the position of preferred nominal anchor is probably Nominal GDP Targeting.  It has gained popularity rather suddenly, over the last year.  But the idea is not new.  It had been a candidate to succeed money targeting in the 1980s, because it did not share the latter’s vulnerability to shifts in money demand.  Under certain conditions, it dominates not only a money target (due to velocity shocks) but also an exchange rate target  (if exchange rate shocks are large) and a price level target (if supply shocks are large).   First proposed by James Meade (1978), it attracted the interest in the 1980s of such eminent economists as Jim Tobin (1983), Charlie Bean (1983), Bob Gordon (1985), Ken West (1986), Martin Feldstein & Jim Stock (1994), Bob Hall & Greg Mankiw (1994), Ben McCallum (1987, 1999), and others.
Nominal GDP targeting was not adopted by any country in the 1980s.  Amazingly, the founders of the European Central Bank in the 1990s never even considered it on their list of possible anchors for euro monetary policy.  ...
But now nominal GDP targeting is back, thanks to enthusiastic blogging by Scott Sumner (at Money Illusion), Lars Christensen (at Market Monetarist), David Beckworth (at Macromarket Musings), Marcus Nunes (at Historinhas) and others.  Indeed, the Economist has held up the successful revival of this idea as an example of the benefits to society of the blogosphere.  Economists at Goldman Sachs have also come out in favor. 
Fans of nominal GDP targeting point out that it would not, like Inflation Targeting, have the problem of excessive tightening in response to adverse supply shocks. ...
In the long term, the advantage of a regime that targets nominal GDP is that it is more robust with respect to shocks than the competitors (gold standard, money target, exchange rate target, or CPI target).   But why has it suddenly gained popularity at this point in history...?  Nominal GDP targeting might also have another advantage in the current unfortunate economic situation that afflicts much of the world:  Its proponents see it as a way of achieving a monetary expansion that is much-needed at the current juncture.
Monetary easing in advanced countries since 2008, though strong, has not been strong enough to bring unemployment down rapidly nor to restore output to potential. It is hard to get the real interest rate down when the nominal interest rate is already close to zero. This has led some, such as Olivier Blanchard and Paul Krugman, to recommend that central banks announce a higher inflation target: 4 or 5 per cent. ... But most economists, and an even higher percentage of central bankers, are loath to give up the anchoring of expected inflation at 2 per cent which they fought so long and hard to achieve in the 1980s and 1990s. Of course one could declare that the shift from a 2% target to 4% would be temporary. But it is hard to deny that this would damage the long-run credibility of the sacrosanct 2% number. An attraction of nominal GDP targeting is that one could set a target for nominal GDP that constituted a 4 or 5% increase over the coming year - which for a country teetering on the fence between recovery and recession would in effect supply as much monetary ease as a 4% inflation target - and yet one would not be giving up the hard-won emphasis on 2% inflation as the long-run anchor.
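[To see the arithmetic behind that last claim, recall the identity that nominal GDP growth is approximately real growth plus inflation; the numbers below are illustrative, not Frankel's:

    NGDP growth ≈ real GDP growth + inflation

In normal times, 2% inflation plus 2-3% trend real growth delivers a 4-5% nominal GDP path. In a slump with real growth stuck near zero, the same 4-5% nominal target tolerates roughly 4% inflation temporarily - the monetary ease Frankel describes - without abandoning the 2% long-run anchor.]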
Thus nominal GDP targeting could help address our current problems as well as provide a durable monetary regime for the future.

It's hard to figure out how to fix the world if you don't have a reliable model that can explain what went wrong. The optimal money rule in a model depends upon the way in which changes in monetary policy are transmitted to the real economy. Is it because of price rigidities? Wage rigidities? Information problems? Credit frictions and rationing? The best response to a negative shock to the economy varies depending upon what type of model the investigator is using.

Thus, for the moment we need robust rules. Inflation targeting works well in models with Calvo type price-rigidities, and a Taylor type rule often emerges from models in this general class, but is this the most robust rule in the face of model uncertainty? We don't know the true model of the macroeconomy; that ought to be clear at this point. Does inflation targeting work well when the underlying problem is a breakdown in financial intermediation or other big problems in the financial sector? I'm not at all convinced that it does - some of the best remedies in this case involve abandoning a strict adherence to an inflation target in the short-run.
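For concreteness, the Taylor type rule referred to here takes, in its original 1993 form,

    i_t = r* + π_t + 0.5(π_t − π*) + 0.5(y_t − y*)

where i_t is the policy rate, r* is the equilibrium real rate (Taylor used 2%), π* is the inflation target (also 2%), and y_t − y* is the output gap. Notice that nothing in the rule responds directly to credit spreads or to the health of financial intermediation, which is exactly why its robustness in a financial crisis is in doubt.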

So, in the best of all worlds I'd prefer to have a model of the economy that works, find the optimal policy rule for that model, and then execute it. In the world we live in, I want robust rules -- rules that work well in a variety of models and in the face of a variety of different types of shocks (or at least recognize that the rule has to change when the source of the problem switches from, say, price rigidities to a breakdown in financial intermediation). One message that comes out of the description of NGDP targeting above is that this approach does appear to be more robust than inflation targeting. It's not always better; in some models a standard Taylor type rule is the best that can be done. But it's becoming harder and harder to believe that the Great Recession can be adequately described by models of this type, and hence hard to believe that we are well served by policy rules that assume price rigidities are the main source of economic fluctuations.

Sunday, May 20, 2012

Did Samuelson and Solow Really Claim that the Phillips Curve was a Structural Relationship?

Like Robert Waldmann, I have always taught that the Phillips curve was initially promoted as a permanent tradeoff between inflation and unemployment. It was thought to be a menu of choices that allowed most any unemployment rate to be achieved so long as we were willing to accept the required inflation rate (a look at scatterplots from the UK and the US made it appear that the relationship was stable).

However, the story goes, Milton Friedman argued this was incorrect in his 1968 presidential address to the AEA. Estimates of the Phillips curve that produced stable looking relationships were based upon data from time periods when inflation expectations were stable and unchanging. Friedman warned that if policymakers tried to exploit this relationship and inflation expectations changed, the Phillips curve would shift in a way that would give policymakers the inflation they were after, but the unemployment rate would be unchanged. There would be costs (higher inflation), but not benefits (lower unemployment). When subsequent data appeared to validate Friedman's prediction, the New Classical, rational expectations, microfoundations view of the world began to gain credibility over the old Keynesian model (though the Keynesians eventually emerged with a New Keynesian model that has microfoundations, rational expectations, etc., and overcomes some of the problems with the New Classical model).
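In textbook form, Friedman's argument amounts to augmenting the Phillips curve with expected inflation:

    π_t = π_t^e − a(u_t − u_n),   a > 0

With expectations π_t^e fixed, there is a short-run tradeoff between inflation π_t and unemployment u_t. But once expectations catch up, so that π_t^e = π_t, the relation collapses to u_t = u_n: unemployment returns to its natural rate at whatever inflation rate policy has engineered.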

Robert Waldmann argues that the premise of this story -- that Samuelson and Solow thought the Phillips curve represented a permanent, exploitable tradeoff between inflation and unemployment -- is wrong:

The Short and Long-Run Phillips Curves: Did Samuelson and Solow claim that the Phillips Curve was a structural relationship showing a permanent tradeoff between inflation and unemployment? James Forder says no.

Paul Krugman, John Quiggin and others (including me) have argued that the one success of the critics of old Keynesian economics is the prediction that high inflation would become persistent and lead to stagflation. The old Keynesian error was to assume that the reduced form Phillips curve was a structural equation -- an economic law not a coincidence.

Quiggin and many others including me have noted that Keynes did not make this old Keynesian error... The old Keynesian error, if it occurred, was made later. I have claimed (in a lecture to surprised students) that it was made by Samuelson and Solow. Was it?

This is an important question in the history of economic thought, because the alleged error serves as a demonstration of the necessity of basing macroeconomics on microeconomic foundations. For a decade or two (roughly 1980 through roughly 1990 something) it was widely accepted that, to avoid such errors, macroeconomists had to assume that agents have rational expectations even though we don't.

The pattern of a gross error by two economists with impressive track records and an important success based on an approach which has had difficulty forecasting or even dealing with real events ever since made me suspect that the actual claims of Samuelson and Solow have been distorted by their critics. To be frank, this guess is also based on a strong sense that the approach of Friedman and Lucas to rhetoric and debate is more brilliant than fair.

I am very lazy, so I have been planning to Google some for months. I finally did. ... I googled "samuelson solow phillips curve".

The third hit is the 2010 paper by Forder which discusses Samuelson and Solow (1960) (which I have never read). ... Forder quotes p 189
'What is most interesting is the strong suggestion that the relation, such as it is, has shifted upward slightly but noticeably in the forties and fifties'
So in the paper which allegedly claimed that the Phillips curve is stable, Solow and Samuelson said it had shifted up. Rather sooner than Friedman and Phelps, no?

So how has it become an accepted fact that Samuelson and Solow said the Phillips curve was stable? This fact is held to be vitally, centrally important to the debate about macroeconomic methodology and it is obviously not a fact at all. How can it be that a claim about what was written in one short clear paper is so central to the debate and that no one checks it?

They did caption a figure with a Phillips curve "a menu of policy choices" but (OK this is a paraphrase not a quote)
After this they emphasized – again – that these 'guesses' related only to the 'next few years', and suggested that a low-demand policy might either improve the tradeoff by affecting expectations, or worsen it by generating greater structural unemployment. Then, considering the even longer run, they suggest that a low-demand policy might improve the efficiency of allocation and thereby speed growth, or, rather more graphically, that the result might be that it 'produced class warfare and social conflict and depress the level of research and technical progress' with the result that the rate of growth would fall.
So, finally after months of procrastinating, I spent a few minutes (at home without access to JStor) checking the claim that is central to the debate on macroeconomic methodology and found a very convincing argument that it is nonsense.

If that were possible, this experience would lower my opinion of macroeconomists (as always Robert Waldmann explicitly included).

Sunday, April 29, 2012

More INET Videos: Sessions on Complexity and Fiscal Policy

These videos are from the recent INET conference in Berlin:

Taking Stock of Complexity Economics: Which Problems Does It Illuminate?

Moderator

  • Thomas Homer-Dixon, Director, Waterloo Institute for Complexity and Innovation, University of Waterloo [On Farmer Video]

Does the Effectiveness of Fiscal Stimulus Depend on the Context? Balance Sheet Overhangs, Open Economy Leakages, and Idle Resources

Moderator

  • Robin Wells, former Research Professor of Economics at Princeton University [On Corsetti Video]

Saturday, April 28, 2012

"A Case for Balanced-Budget Stimulus"

People often object to the idea of a multiplier because it comes from the old Keynesian model. Real macroeconomists, we are told, use DSGE models. But switching to a DSGE model doesn't change the answer; the result is essentially the same:

A case for balanced-budget stimulus, by Pontus Rendahl, Vox EU: ...there is little, if any, support in the current macroeconomic literature for the view that expansionary fiscal policy must come at the price of ramping up debt. In fact,... a ‘balanced-budget stimulus’ can set the economy on a steeper recovery path...
[W]hile Ricardian equivalence might have put a nail in the coffin of the Keynesian multiplier, it has certainly not pre-empted the underlying idea: that an increase in government spending may provoke a kickback in output many times the amount initially spent. Indeed, a body of recent research suggests that the fiscal multiplier may be very large, independently of the foresightedness of consumers (Christiano et al 2011, Eggertsson 2010). And in a recent study of mine (Rendahl 2012), I identify three crucial conditions under which the fiscal multiplier can easily exceed 1 irrespective of the mode of financing. These conditions, I argue, are met in the current economic situation.
Condition 1. The economy is in a liquidity trap … When interest rates are near, or at, zero, cash and bonds are considered perfect substitutes. ...
Under these peculiar circumstances the laws of macroeconomics change. A dollar spent by the government is no longer a dollar less spent elsewhere. Instead, it’s a dollar less kept in the mattress. And the logic underpinning Say’s law – the idea that the supply of one commodity must add to the immediate demand for another – is broken. ...
Condition 2. … with high unemployment …
So while a dollar spent by the government is not a dollar less spent elsewhere, it is not immediate, nor obvious, whether this implies that government spending will raise output. The second criterion therefore concerns the degree of slack in the economy.
If unemployment is close to, or at, its natural rate, an increase in spending is unlikely to translate to a substantial rise in output. Labor is costly and firms may find it difficult to recruit the workforce needed to expand production. An increase in public demand may just raise prices and therefore offset any spending plans by the private sector.
But at a high rate of unemployment, the story is likely to be different. The large pool of idle workers facilitates recruitment, and firms may cheaply expand business. An increase in public demand may plausibly give rise to an immediate increase in production, with negligible effects on prices. Crowding-out is, under these circumstances, not an imminent threat.
Combining the ideas emerging from Conditions 1 and 2 implies that the fiscal multiplier – irrespective of the source of financing – may be close to 1 (cf Haavelmo 1945).
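[The Haavelmo reference is to the balanced-budget multiplier theorem. In the simplest Keynesian cross, where consumers spend a fraction c of after-tax income, an increase in government spending matched by an equal tax increase still raises output one-for-one, because everything cancels except the first round of government spending:

    ΔY = ΔG/(1 − c) − cΔT/(1 − c) = (1 − c)ΔG/(1 − c) = ΔG   when ΔT = ΔG

So the tax-financed multiplier is exactly 1 in that stylized setting.]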
Condition 3. … which is persistent
But if unemployment is persistent, these ideas take yet another turn. A tax-financed rise in government spending raises output, and lowers the unemployment rate both in the present and in the future. As a consequence, the increase in public demand steepens the entire path of recovery, and the future appears less disconcerting. With Ricardian or forward-looking consumers, a brighter outlook provokes a rise in contemporaneous private demand, and output takes yet another leap. Thus, with persistent unemployment, a tax-financed increase in government purchases sets off a snowballing motion in which spending begets spending.
Where does this process stop? In a stylised framework in which there are no capacity constraints and unemployment displays (pure) hysteresis, I show that the fiscal multiplier is equal to the inverse of the elasticity of intertemporal substitution, a parameter commonly estimated to be around 0.5 or lower. Under such conditions, the fiscal multiplier is therefore likely to lie around 2 or thereabout.
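[The arithmetic: if the multiplier equals the inverse of the elasticity of intertemporal substitution σ, then with the common estimate σ = 0.5 the multiplier is 1/0.5 = 2. Roughly speaking, a lower σ means a stronger desire to smooth consumption over time, so a brighter future path of income feeds back more powerfully into spending today.]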
Collecting arguments
To provide more solid grounds to these arguments, I construct a simple DSGE model with a frictional labour market. A crisis is triggered by an unanticipated (and pessimistic) news shock regarding future labour productivity. As forward-looking agents desire to smooth consumption over time, such a shock encourages agents to save rather than to spend, and the economy falls into a liquidity trap. In similarity to the aforementioned virtuous cycle, a vicious cycle emerges in which thrift reinforces thrift, and unemployment rates are sent soaring. ...
There are three important messages [from the work]:
  • First, for positive or small negative values of the news shock, the multiplier is zero. The reason is straightforward: With only moderately pessimistic news, the nominal interest rate aptly adjusts to avert a possible liquidity trap, and a dollar spent by the government is simply a dollar less spent by someone else.
  • Second, however, once the news is ominous enough, the economy falls into a liquidity trap. The multiplier takes a discrete jump up, and public spending unambiguously raises output. Yet, in a moderate crisis with an unemployment rate of 7% or less, private consumption is at least partly crowded-out.
  • Lastly, however, in a more severe recession with an unemployment rate of around 8% or more, the multiplier rises to, and plateaus at, around 1.5. Government spending now raises both output and private consumption, and unambiguously improves welfare...

As evidence accumulates that theoretical models -- the DSGE models used in modern macroeconomics -- support fiscal policy, and that the implied multipliers are relatively high in severe recessions, it becomes increasingly clear that much of the opposition to fiscal policy is ideological.

Monday, April 16, 2012

"Forecasting the Great Recession: DSGE vs. Blue Chip"

There was more interest in the post on the forecasting ability of DSGE models than I expected, so let me follow up with a post that comes from the NY Fed's Liberty Street Economics blog (this is relatively technical material). The post from Marco Del Negro, Daniel Herbst, and Frank Schorfheide argues that the forecasting ability of DSGE models depends upon "what information you feed into your model: Feed in the right information, and even a dingy DSGE model may not do so poorly at forecasting the recession." However, I don't think this overturns the claim made by Volker Wieland and Maik Wolters in the Vox EU piece linked above that "Both model forecasts and professional forecasts failed to predict the financial crisis. At the current state of knowledge about macroeconomics and the limitations to use all this knowledge in simplified models, large recessions might just be difficult to forecast," but "from the first quarter of 2009 onwards the model-based forecasts perform quite well in predicting the recovery of the US economy." That is, the models do not do very well at forecasting turning points, but once the turning points are known the models do a bit better:

Forecasting the Great Recession: DSGE vs. Blue Chip, by Marco Del Negro, Daniel Herbst, and Frank Schorfheide, Liberty Street: Dynamic stochastic general equilibrium (DSGE) models have been trashed, bashed, and abused during the Great Recession and after. One of the many reasons for the bashing was the models’ alleged inability to forecast the recession itself. Oddly enough, there’s little evidence on the forecasting performance of DSGE models during this turbulent period. In the paper “DSGE Model-Based Forecasting,” prepared for Elsevier’s Handbook of Economic Forecasting, two of us (Del Negro and Schorfheide), with the help of the third (Herbst), provide some of this evidence. This post shares some of our results.
We find that it really matters what information you feed into your model: Feed in the right information, and even a dingy DSGE model may not do so poorly at forecasting the recession. We also compare how the models perform relative to the “Blue Chip Economic Consensus” forecasts. The answer is: About the same, if not better, in fall 2007, in summer 2008, before the Lehman crisis, and at the beginning of 2009–provided one incorporates up-to-date financial data into the DSGE model. (By the way, if you don’t know what a DSGE model is, check out Wikipedia or this primer.)
The chart below shows DSGE model and Blue Chip forecasts for output growth obtained at three junctures of the crisis (the dates coincide with Blue Chip forecast releases): October 10, 2007, right after the turmoil in the financial markets had begun in August of that year; July 10, 2008, not long before the default of Lehman Brothers; and January 10, 2009, at the apex of the crisis. Specifically, each panel shows the current real GDP growth vintage (solid black line), the DSGE model’s mean forecasts (red line), bands of the forecast distribution (shaded blue areas; these are the 50, 60, 70, 80, and 90 percent bands for the forecast distribution, in decreasing shade), the Blue Chip forecasts (green diamonds), and the actual realizations according to the May 2011 vintage (dashed black line). All the numbers are in percent, quarter-over-quarter.

[Chart: DSGE model and Blue Chip forecasts of real GDP growth at the three forecast dates]

When interpreting these results, bear in mind that the information available to the DSGE econometrician consists only of the data used for estimation that are available at the time of the forecasts (these data are real GDP growth; inflation; the federal funds rate; total hours worked; growth in real wages, investment, and consumption; and long-run inflation expectations–all at a quarterly frequency). This implies two things. First, the estimation is done in a “real-time” context (as opposed to using revised data–yes, the GDP growth and other data are revised all the time, so the numbers you read about in the paper often end up being quite different from the final numbers). Second, the DSGE econometrician has only lagged information on the state of the economy. For instance, on January 10, 2009, she would only have the information contained in 2008:Q3 data. The information set used by Blue Chip forecasters contains the same data, but also includes a plethora of current indicators on the quarter that just ended (namely, 2008:Q4), information from financial markets, and all the qualitative information available from the media, speeches by government officials, etc.
The chart shows the forecasts for three different DSGE specifications. We call the first one SWπ; this is essentially the popular Smets-Wouters model. The second is the Smets-Wouters model with financial frictions, as in Bernanke et al. and Christiano et al., which we call SWπ-FF. The other difference between this model and SWπ is the use of observations on the Baa-ten-year Treasury rate spread, which captures distress in financial markets.
The last specification is still the SWπ-FF, except that we use observations for the federal funds rate and spreads for the quarter that just ended and for which no National Income and Product Accounts data are yet available (for the January 10, 2009, forecasts, these would be the 2008:Q4 average fed funds rate and spreads). Of course, this information was also available to Blue Chip forecasters. We refer to this specification as SWπ-FF-Current.
The October 10, 2007, Blue Chip forecasts for output were relatively upbeat, at or above .5 percent quarter-over-quarter (that is, 2 percent annualized). The SWπ forecasts were more subdued: The model’s mean forecasts are for growth barely above zero in 2008:Q1, with some probability of negative growth in 2008–that is, a recession. The forecasts for the two SWπ-FF specifications were in line with those of the SWπ model, although a bit more subdued. SWπ-FF-Current in particular assigns a likelihood of negative growth that’s above 25 percent. These models capture the slowdown in the economy that occurred in late 2007 and early 2008, but of course miss the post-Lehman episode. The decline in real GDP that occurred in 2008:Q4 is far in the tails of the forecast distribution for the SWπ model, but less so for the two SWπ-FF specifications.
In July 2008, the Blue Chip forecast and the mean forecast for the SWπ model were roughly aligned. Both foresaw a weak economy–but not a recession–in 2008, and a rebound in 2009. The two SWπ-FF specifications were less sanguine: Their forecast for 2008 was only slightly more pessimistic than the Blue Chip’s for 2008; but, unlike the Blue Chip, these models did not foresee a strong rebound in the economy in 2009. Both models failed to grasp what was coming in 2008:Q4, but at least they put enough probability on negative outcomes that the economy’s strikingly negative growth rate in 2008:Q4 is almost within the 90 percent bands of their forecast distributions.
By January 2009, conditions had worsened dramatically: Lehman Brothers had filed for bankruptcy a few months earlier (September 15, 2008), stock prices had fallen, financial markets were in disarray, and various current indicators had provided evidence that real activity was tumbling. None of this information was available to the SWπ model, which on January 10, 2009, would have used data up to 2008:Q3. Not surprisingly, the model is, so to say, in “la-la land” concerning the path of the economy in 2008:Q4 and after. Unlike the SWπ model, the SWπ-FF specification does not forecast a rebound, but at the same time does not foresee the steep decline in growth that occurred in 2008:Q4. The SWπ-FF uses spreads as an observable, but since the Lehman bankruptcy occurred only later in Q3, it had minor effects on the average Baa-ten-year Treasury rate spread for the quarter, and therefore the SWπ-FF has little indirect information on the turmoil in the financial markets.
The forecasts of the SWπ-FF-Current specification, which uses 2008:Q4 observations on spreads and the fed funds rate, are a completely different story. The model produces about the same forecast as the Blue Chip for 2008:Q4. Considering that the agents in the laboratory DSGE economy had not seen Federal Reserve Chairman Bernanke and Treasury Secretary Paulson on TV painting a dramatically bleak picture of the U.S. economy–which the Blue Chip forecasters likely did see–we regard this as a small achievement. But the main lesson may be that structural models can actually produce decent forecasts as long as you’re using appropriate information.

Saturday, April 14, 2012

Is INET Evolving?

During last year's INET conference at Bretton Woods I wrote:

Re-Kindleberger: I've learned that new economic thinking means reading old books.

Okay, that's not quite fair, but one of the themes of the Institute for New Economic Thinking conference I'm at has been to reintroduce economic history into the undergraduate and graduate programs. I think that's a good idea, as I've said many times, and not just a course on the history of economic thought. There's also a lot we can learn from studying the economic history of the US and other countries. ...

Update: Brad DeLong adds:

Actually, it is not "not quite fair," it is fully fair.

This year I'm trying to understand how the attempt to introduce "new economic thinking" into the profession has evolved over the last three years. One new innovation this year is to invite students to the conference, and as explained here the response was much larger than expected.

I think this is a step in the right direction. Change won't come from the older, established economists who comprise most of the audience -- gray hair is in excess supply here -- change will come from the younger generation. One of them will come up with a new idea, a new model -- something that pushes the established lines in a way that creates momentum as others join in to push the model forward. I don't mean that older economists who are here won't try to change the world, or that they can't provide the spark that generates the new idea. It's also entirely possible that an established economist will come up with the foresight needed to push the frontier.

But I believe that change will come from the young, not from the old, who mostly adhere to boundaries defined by the models they already know. You can teach an old dog new tricks, surely, but established economists are mostly set in their ways and will continue to pursue the familiar and the safe. Starting over with a brand new research agenda when you are in your 40s or 50s is possible I suppose, and I'm sure there are examples of this, but for most it would be too hard and too risky.

Similar risks exist for the young. Setting out in a new direction is hazardous, and if it doesn't pan out tenure will not be granted. It is much easier to contribute to existing knowledge than to create brand new knowledge, and it's much easier to publish as well. So even for the young the established path is very attractive.

That's why bringing students here is important. All these names they've heard, some that they are in awe of -- Nobel prize winners and the like -- are here and they are sending the younger economists an important message. They are signaling that there are established economists in the very best departments, economists who play key editorial roles in important journals, who are receptive to good ideas. When Joe Stiglitz, Amartya Sen, James Heckman and many more names like that stand up and endorse the push to think about economics in a new way, it could give a student with a new idea, and the understandable hesitation that comes with it, the confidence to carry it through. If it works out, there's a good chance some very well-known and respected economists will help to push the idea forward, or at the very least be open-minded, and that provides important motivation to those who might discover something new.

But we have to be careful too. If we push students to try new ideas rather than the established path, and the ideas go nowhere in the end that could do harm to the individual's career. So what we also need to do -- and I admit that I'm not quite sure how to do this -- is to teach the students, as best we can anyway, what a good idea looks like. What makes a new idea more likely to be successful? What makes it more likely to be received by important journals? How can a student know whether to push forward or to back off?

I think the answer is mentorship of the type that exists between a Ph.D. candidate and their advisor, at least a good one. Part of that process is to help the students ask the right questions about their research, how to find the potential holes and fill them, and so on. So all of us who are pushing the profession to investigate new ideas and new directions need to be willing to talk to students about their ideas, ask them the questions they ought to ask themselves, read preliminary drafts that come by email out of the blue, and help in other ways as we can. We need to provide guidance and at the same time not inhibit the search for new and better paths forward, a somewhat delicate task.

It would be better, of course, if older, established economists did this. They have tenure, and that protects them if things don't work out in the end. They won't be given a terminal year and shoved out the door. But, again, I just don't think that's where change will come from. Instead, it's up to the young. The best we can do is to provide guidance freely, encourage the good ideas and redirect the lesser ones, provide motivation, and to the extent possible shield them from those who are only out to protect their own traditional research from new ideas that challenge their research programs.

Finally, for the conference in the future, it would help if the students were better integrated into the general conference instead of being housed in a separate location, watching a live feed of the conference, and visiting for 10 or 15 minutes with the well-known economists who are willing to come over and visit. Allowing students to attend was a last minute innovation, and I'm told this was the best they could do under the circumstances, but hopefully this will change in the future.

Tuesday, April 10, 2012

The Forecasting Ability of Modern Macroeconomic Models

I think this mischaracterizes Paul Krugman's view, though he can certainly speak for himself. But the rest is interesting: Volker Wieland and Maik Wolters on the forecasting ability of modern macroeconomic models (this one should have a warning that it is "very wonkish"):

Macroeconomic model comparisons and forecast competitions, by Volker Wieland and Maik Wolters, Vox EU: The failure of economists to predict the Great Recession of 2008–09 has rightly come under attack. The areas receiving most criticism have been economic forecasting and macroeconomic modelling. Distinguished economists – among them Nobel Prize winner Paul Krugman – have blamed developments in macroeconomic modelling over the last 30 years and particularly the use of dynamic stochastic general equilibrium (DSGE) models for this failure.

Key policymakers take a more pragmatic view, namely that there is no alternative to the use of simplified models, but that the development of complementary tools to improve the robustness of policy decisions is required. For example, former ECB President Jean-Claude Trichet said in late 2010:

The key lesson I would draw from our experience is the danger of relying on a single tool, methodology or paradigm. Policymakers need to have input from various theoretical perspectives and from a range of empirical approaches... We do not need to throw out our DSGE and asset-pricing models: rather we need to develop complementary tools to improve the robustness of our overall framework (Trichet 2010).

Against this backdrop, we present a new paper (Wieland et al 2012) in which we propose a comparative approach to macroeconomic policy analysis that is open to competing modelling paradigms. We have developed a database of macroeconomic models that enables a systematic comparative approach to macroeconomic modelling with the objective of identifying policy recommendations that are robust to model uncertainty. This comparative approach enables individual researchers to conduct model comparisons easily, frequently, at low cost, and on a large scale.

The macroeconomic model database is available to download from www.macromodelbase.com and includes over 50 models. We have included models that are used at policy institutions like the IMF, the ECB, the Fed, and in academia. The database includes models of the US economy, the Eurozone, and several multi-country models. Some of the models are fairly small and focus on explaining output, inflation, and interest-rate dynamics. Many others are of medium scale and cover many key macroeconomic aggregates.

This database can be used to compare the implications of specific economic policies across models, but it can also serve as a testing ground for new models. New modelling approaches may offer more sophisticated explanations of the sources of the financial crisis and carry the promise of improved forecasting performance. This promise should be put to a test rather than presumed (see Wieland and Wolters 2011 for details).

In recent years, researchers such as Smets and Wouters (2004), Adolfson et al (2007) and Edge et al (2010) have reported on the strong forecasting performance of DSGE models. However, the existing papers are based on samples with long periods of average volatility and therefore cannot address specifically how well DSGE model-based forecasts perform during recessions and recoveries. With this in mind, we analyse the forecasting performance of models and experts around the five most recent NBER-defined recessions. Turning points pose the greatest challenge for economic forecasters, are of most importance for policymakers, and can help us to understand current limitations of economic forecasting, especially with respect to the recent financial crisis.

We use two small micro-founded New Keynesian models, two medium-size state-of-the-art New Keynesian business-cycle models – often referred to as DSGE models – and for comparison purposes an earlier-generation New Keynesian model (also with rational expectations and nominal rigidities but less strict microeconomic foundations) and a Bayesian VAR model. For each forecast we re-estimate all five models using exactly the data as they were available to professional forecasters when they submitted their forecasts to the SPF. Using these historical data vintages is crucial to ensure comparability to historical forecasts by professionals. We compute successive quarter-by-quarter forecasts up to five quarters ahead for all models.

Predicting the recession of 2008–09

Figure 1 shows forecasts for annualised quarterly real output growth for the recent financial crisis. The black line shows real-time data until the forecast starting point and revised data afterwards. The grey lines show forecasts collected in the SPF and the green line shows their mean. Model forecasts are shown in red. While data for real GDP become available with a lag of one quarter, professional forecasters can use within-quarter information from data series with a higher frequency. In contrast the models can process only quarterly data. To put the models on an equal footing in terms of information with the forecasts of experts, we condition their forecasts on the mean estimate of the current state of the economy from the SPF.

Figure 1. Output growth forecasts starting in 2008:Q3 (top panel) and 2008:Q4 (bottom panel).

Notes: Solid black line shows annualised quarterly output growth (real-time data vintage until forecast starting point and revised data afterwards), grey lines show forecasts from the SPF, green line shows mean forecast from the SPF, red lines show model forecasts conditional on the mean nowcast from the SPF.

The forecasts shown in the left [top] graph start in the third quarter 2008 and have been computed before the collapse of Lehman brothers. It is apparent that all professional forecasters failed to foresee the downturn. The mean SPF forecast indicates a slowdown of growth in the fourth quarter of 2008 followed by a return to higher growth in the first quarter of 2009. The model-based forecasts would not have performed any better and predict even higher growth rates than most professional forecasters. The graph on the right [bottom] shows that in the fourth quarter of 2008, following the Lehman debacle, professional forecasters drastically revised their assessments of the current state of the economy downwards. Still, growth turned out to be even much lower than estimated. Professional forecasters as well as model forecasts wrongly predicted that the trough had already been reached. While the models predict positive growth rates one quarter ahead, some of the professional forecasters were somewhat more pessimistic. The model-based predictions and the professional forecasters are, however, far from predicting an extreme downturn of as much as 6% output growth.

Given this failure to predict the recession and its length and depth, the widespread criticism of the state of economic forecasting before and during the financial crisis applies to business forecasting experts as well as modern and older macroeconomic models. Professional forecasters, who are able to use information from hundreds of data series including information about financial market conditions and all kinds of different forecasting tools and thus have a clear advantage over purely model-based forecasts, were not able to predict the Great Recession either. Thus, there is no reason to single out DSGE models, and favour more traditional Keynesian-style models that may still be more popular among business experts. In particular, Paul Krugman’s proposal to rely on such models for policy analysis in the financial crisis and disregard three decades of economic research is misplaced.

Is there any hope left for economic forecasting and the use of modern structural models in this endeavour?

Figure 2 shows professional and model-based forecasts starting in the first and the second quarter of 2009. Professional forecasters continued to revise their estimated nowcast downwards for the first quarter of 2009 and predict an increase of growth rates afterwards. Interestingly, from the first quarter of 2009 onwards the model-based forecasts perform quite well in predicting the recovery of the US economy. Three-quarters-ahead model-based forecasts dominate expert forecasts in several cases.

Figure 2. Output growth forecasts starting in 2009:Q1 (top panel) and 2009:Q2 (bottom panel); same conventions as Figure 1.

Comparing the forecasting accuracy of professional and model-based forecasts

The model forecasts are on average less accurate than the mean SPF forecasts (see Wieland and Wolters 2011 for detailed results). Of course, taking the mean of all forecasts collected in the SPF can increase the forecasting accuracy compared to individual forecasts. Looking at individual forecasts from the SPF we observe that the precision of the different model forecasts is well in line with the precision range of forecasts from professionals.

Computing the mean forecast of all models we obtain a robust forecast that is close to the accuracy of the forecast from the best model. Conditioning the model forecasts on the nowcast of professional forecasters (reported in the paper) can further increase the accuracy of model-based forecasts. Overall, model-based forecasts still exhibit somewhat greater errors than expert forecasts, but this difference is surprisingly small considering that the models only take into account few economic variables and incorporate theoretical restrictions that are essential for evaluations of the impact of alternative policies but often considered a hindrance for effective forecasting.

Conclusion

Both model forecasts and professional forecasts failed to predict the financial crisis. At the current state of knowledge about macroeconomics and the limitations to use all this knowledge in simplified models, large recessions might just be difficult to forecast.

By comparing the forecasts from different models we can hedge against outliers and find predictions that are robust across several models. Our macroeconomic model database provides a testing ground for macroeconomists to compare new models to a large range of existing benchmarks. We thus provide the tools for a comparison with established benchmarks and current forecasting practice as documented in the SPF. It is important to base discussions about competing modelling approaches on a solid basis. In our research we show how such a comparison of different models can be pursued.


Wednesday, April 04, 2012

New Classical, New Keynesian, and Real Business Cycle Models

[This is an edited version of something I've posted here in the past. I'm hoping others will be motivated to add to (or correct) this history.]

The term "New Classical economics" is often used as though it is one of the dominant models in macroeconomics, but the term has a very specific meaning and refers to a class of models that is no longer popular.

The New Classical model has four important elements: the assumption of rational expectations, the assumption of the natural rate hypothesis, the assumption of continuous market clearing, and an assumption that agents have imperfect information (imperfect information drives cycles in these models). The imperfect information assumption was quite clever in that it allowed proponents of this model to explain correlations between money and income without acknowledging that systematic, predictable monetary or fiscal policy would have any effect at all on real output and employment (put another way, only unexpected changes in monetary policy matter; expected changes are fully neutralized by private sector responses to the policy).
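In equation form, the heart of the New Classical model is a Lucas-type supply curve,

    y_t = y* + b(p_t − E_{t−1}p_t),   b > 0

where output y_t deviates from its natural level y* only when the price level p_t differs from what agents expected it to be, E_{t−1}p_t. Under rational expectations, any announced or predictable policy change is already built into E_{t−1}p_t, so it moves prices one-for-one and output not at all; only surprises have real effects.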

The New Classical model is important for the foundation it provided for later models, the movement in macroeconomics toward microeconomic foundations and the use of rational agents within macro models in particular, but the model itself could not simultaneously explain both the duration and magnitude of actual cycles. It also had difficulty explaining some key correlations among macroeconomic variables, and it was difficult to understand why a market for the absent information did not develop if the consequences of imperfect information were as large as the New Classical model implied. If you are at Chicago, where these models were popularized, markets pop up as needed, and the fact that there was no market to help agents avoid the confusion that drives the New Classical model was a strike against them. In addition, one of the model's key results, that only unexpected changes in money can affect real variables, did not hold up when taken to the data (though there are still a few die-hards on this). So the profession moved on.

The New Classical model had replaced the old Keynesian model after the old Keynesian models' shortcomings were blamed, at least in part, for the problems we had in the 1970s. The model was also abandoned for theoretical reasons that will be described in a moment.

But while the New Classical economists were having their day in the sun, the Keynesians were quietly working behind the scenes to fix the problems that caused the old Keynesian model to go out of favor (or not so quietly in a few cases). The old Keynesian model had a poor model of expectations. If expectations were considered at all, they were usually modeled as a naive adaptive process. In addition, it was not clear that the assumptions and relationships embedded within the old Keynesian model were consistent with optimizing behavior on behalf of households and firms. The New Keynesian model solved this by deriving macroeconomic relationships from microeconomic optimizing behavior, and by adopting the rational expectations framework. And the New Keynesians made one other important change. In order for systematic monetary policy (e.g. following a Taylor rule) to affect real variables such as output and employment, there must be some type of friction that prevents the economy from immediately moving to its long-run equilibrium value. The friction in the New Classical model is informational: agents optimize given the information that they have, but because the information is imperfect the decisions they make take the economy away from its optimal long-run path. In the New Keynesian model the friction that gives monetary policy its power is sluggish movement of prices and wages (generally modeled through something called the Calvo pricing rule). This friction is somewhat controversial and the precise degree of price rigidity in the economy is the subject of intense research.
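The Calvo friction shows up most compactly in the New Keynesian Phillips curve,

    π_t = βE_t[π_{t+1}] + κx_t

where π_t is inflation, x_t is the output gap, β is the household discount factor, and κ is a composite parameter that shrinks as prices become stickier (that is, as the Calvo probability of being able to reset a price falls). It is the sluggish adjustment embodied in κ that gives systematic monetary policy traction over real variables in these models.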

Many people who use the term New Classical -- a natural counterpart to the term New Keynesian -- seem to have in mind some version of a Real Business Cycle model where prices are, in fact, assumed to be fully flexible, agents are rational, all markets clear, policy is neutral, etc. In these models, actual output is always equal to potential (so there's no need for policy to do anything but maximize the growth of potential output, hence the supply-side orientation of advocates of this approach). Potential output moves over time in response to productivity and taste shocks, i.e. supply shocks, and that is the source of business cycles in this class of models. Demand shocks, which drive business cycles in New Keynesian models (as well as New Classical and Old Keynesian models), have little or no effect on real output and employment.

I was recently labeled as a "neoclassical" economist, so let me end by making it clear that not all of us believe that assuming fully flexible prices and continuous market clearing is the proper way to model the economy. Prior to the crisis I was an advocate of sticky price/sticky wage New Keynesian models, and quite resistant to pure Real Business Cycle approaches. But I am less of a fan of the New Keynesian model than I once was. I still think it's a good model to explain mild fluctuations of the type we had during the Great Moderation, and I still think the tools and techniques macroeconomists use, what is collectively labeled DSGE models, are the right way to go (though I would still like to see competing models challenge this view). But to be useful in a crisis like we just had, the models have to be amended to better connect the real and financial sectors -- the connection between breakdowns in financial intermediation and the real economy needs to be improved -- and people are working hard to solve these problems. Will they succeed? I certainly hope so.

Tuesday, April 03, 2012

Classes of Macro Models: What Counts as a DSGE Model?

In a tweet yesterday, one that Steve Keen posts here, I say (after noting that New Keynesian models are DSGE):

...RBC models can surely be categorized as DSGE

Keen then says:

Thoma believe[s] that DSGE models exclusively refer to NK models?

Uh, no. Just read what I said. I can't even call this a nice try. He also says:

I won’t accept Thoma’s excuse for your [Krugman's] behaviour—and nor do some of his own followers

First, it wasn't about Krugman at all. It was about Keen saying something wrong. Second, what do my followers really say (only one actually replied)? That Keen is wrong:

I agree with you. Two very different things

And:

Hence your point with Keen was a good one. I am constantly telling my students to be precise in the use of their "language"

If the only way to win an argument is to misrepresent people to this extent, it isn't a win. To use Keen's term, it's a gigantic "FAIL".

Now let me back up. My point was fairly simple, I said that I didn't consider New Classical models (i.e. information confusion models) DSGE models. Keen claimed they were in trying to defend himself against a charge made by Krugman, and I disagreed. To me, DSGE refers to something different (mainly, but not exclusively, the classes of NK and RBC models). But I also said that I could see how someone could make this argument, especially since NC models provided the intellectual foundation for modern macro models (RE, microfoundations, and equilibrium analysis mainly). Nevertheless, I don't think the term DSGE applies to NC models.

More to the point, however, Keen showed a lack of familiarity with modern models, and I am still not sure that he knows the difference between NK, NC, RBC, and NM models, NC and RBC in particular (New Keynesian, New Classical, Real Business Cycle, and New Monetarist). The discussion by Keen just before his figure 3 -- the part that quotes Wikipedia for the authoritative answer as to whether NC models are DSGE -- doesn't even mention NC models. And Keen seems to imply that RBC=NC. That is, he thinks the statement in the Wikipedia quote that "Real business cycle (RBC) theory builds on the neoclassical growth model" means that the RBC model is the same as the NC model. That's wrong (the NC model generates non-neutralities and business cycles through information confusion; the RBC model is driven by productivity and preference shocks, and information confusion is not present in these models -- also, the NC model has faded since its heyday a couple of decades or so ago, and it's not a model that macroeconomists generally use today).
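The distinction is easy to state formally (these are standard textbook renderings of each class, not Keen's notation). The NC mechanism is a Lucas-style surprise supply curve,

$$y_t = \bar{y}_t + b\,(p_t - E_{t-1} p_t), \qquad b > 0,$$

in which only unanticipated movements in the price level move output, so fully anticipated money is neutral. The RBC mechanism needs no surprises at all: output is $y_t = z_t F(k_t, n_t)$, with the technology shock following a persistent process such as $\ln z_t = \rho \ln z_{t-1} + \varepsilon_t$, and fluctuations are efficient responses to $z_t$. Conflating the two misses the central point of each model.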

As I said in a series of tweets that set this off, I think it's possible to debate this. I don't think DSGE applies to NC, but there's at least an argument to be made. But a rational, reasoned argument is not the response I got (except for quotes from Wikipedia that don't actually support the argument). Instead, Keen misrepresented what I said to try to win the argument. I doubt it was intentional, so I'll give him the benefit of the doubt -- it's more likely that the finer points were not understood.

Mostly I just want to correct the record, not start a big fight. I don't like having people read that I think DSGE only applies to NK models when that is clearly not what I said. But let me at least try to end on a constructive note by posing a question: What defines a DSGE model? Is the NC model a DSGE model?

Update: Menzie Chinn emails what I think is the correct distinction. DSGE is a technique that can be applied to models of various types:

Hi Mark,
I have been out on the road and not following closely debates, but I am a bit mystified by the argument over DSGE definitions.
I think of New Keynesian, New Classical, and Real Business Cycle models as approaches. I think of DSGE primarily as a numerical methodology, often (but not always) involving calibration, that solves out a model implementing one of those approaches.
One can have a New Keynesian model that is just for explication purposes – I think of Blanchard-Kiyotaki. Or one can write out a dynamic stochastic general equilibrium model that incorporates New Keynesian attributes (monopolistic pricing, sticky prices, intertemporal optimization) or RBC attributes (flex price, big technology shocks with AR(1)s in the error terms of the shock processes). Wouldn’t that be a clearer way of differentiating?

Yes, but again, it would be very unusual to bring these techniques to the old-fashioned New Classical models. Thus, calling the NC model a DSGE model is quite a stretch.
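To make the "DSGE as technique" point concrete, here is a minimal sketch of the kind of exercise Menzie describes: a three-equation New Keynesian model (forward-looking IS curve, Calvo-based Phillips curve, Taylor rule) solved by the method of undetermined coefficients. The parameter values are illustrative guesses of mine, not a calibration from any paper:

import numpy as np

# A minimal New Keynesian DSGE sketch (illustrative parameters, my choices):
#   x_t  = E_t x_{t+1} - (1/sigma)*(i_t - E_t pi_{t+1})   (forward-looking IS)
#   pi_t = beta*E_t pi_{t+1} + kappa*x_t + u_t            (Calvo Phillips curve)
#   i_t  = phi_pi*pi_t + phi_x*x_t                        (Taylor rule)
#   u_t  = rho*u_{t-1} + eps_t                            (cost-push shock)
beta, sigma, kappa = 0.99, 1.0, 0.1
phi_pi, phi_x, rho = 1.5, 0.5, 0.8   # phi_pi > 1: Taylor principle, unique stable solution

# Guess pi_t = a*u_t and x_t = b*u_t (method of undetermined coefficients),
# substitute into the IS and Phillips curves, and match coefficients on u_t:
#   ((phi_pi - rho)/sigma)*a + (1 - rho + phi_x/sigma)*b = 0
#   (1 - beta*rho)*a - kappa*b = 1
A = np.array([[(phi_pi - rho) / sigma, 1 - rho + phi_x / sigma],
              [1 - beta * rho, -kappa]])
a, b = np.linalg.solve(A, np.array([0.0, 1.0]))

# Impulse responses to a one-time unit cost-push shock
for t in range(8):
    u = rho ** t
    print(f"t={t}: inflation={a*u: .3f}, output gap={b*u: .3f}")

Swap the Phillips curve for flexible prices and a persistent technology shock and the same machinery delivers an RBC-flavored model; the solution technique, not the economics, is what makes the exercise "DSGE."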

Update: See Describing DSGEs too.

Sunday, April 01, 2012

Real-Time Economic Analysis

When Narayana Kocherlakota gave this speech based on this paper, a paper that uses a very simple model that is essentially an IS curve analysis, the economists who believe strongly in the science of monetary policy were appalled. How could Narayana have crossed over to the dark side?

I defended him, and it leads me into a broader discussion of the problems of doing what I've called "real-time economic analysis." Let me start with something I wrote about this a while back:

Economic research is largely backward looking. After the fact – when all of the data has been collected and the revisions to the data are complete – economists examine data on, say, a financial crisis, and then figure out what caused the economy to become so sick. Once the cause has been determined, which may involve the construction of new theoretical frameworks, they tell us how to avoid it happening again, i.e. the particular set of policies that would have prevented or attenuated the damage.
But the internet and blogs are changing what we do, and to some extent we now act like emergency room physicians rather than pathologists who have the time to carefully examine data from tests, etc., determine what went wrong, and then recommend how to avoid problems in the future. When the financial crisis hit so unexpectedly, it was like a patient showed up at the emergency room very sick and in need of immediate diagnosis and care. We had to reach into our bag of macroeconomic models, choose the one that was correct for this question, and then use it to both diagnose the problems and prescribe policies to fix them. There was no time for a careful retrospective analysis that patiently determined the cause and then went to work on the potential policy responses.
That turned out to be much harder than expected. Our models and cures are not designed for that type of use. What data should we look at to make an immediate diagnosis? What tests should we conduct to give us data on what is wrong with the economy? If we aren’t sure what the cause is but immediate action is needed to save the economy from getting very sick, what is the equivalent of using broad spectrum antibiotics and other drugs to attack unknown problems? The development of blogs puts economists in real-time contact with the public, press, and policymakers, and when a crisis hits, traffic spikes as people come looking for answers.
Blogs are a start to solving the problem of real-time analysis, but we need to do a much better job than we are doing now at providing immediate answers when they are needed. If Lehman is failing and the financial sector is going down with it, or if Europe is in trouble, we need to know what to do right now. It won’t help to figure that out months from now and then publish the findings in a journal article. That means the discipline has to adjust from being backward looking pathologists with plenty of time to determine causes and cures to an emergency room mode where we can offer immediate advice. Blogs are an integral part of that process.

Policymakers at the Federal Reserve face this problem continuously. They must confront changes in the data that aren't always well understood in near real time, and make policy decisions every few weeks. If a pre-existing model applies to the problem at hand, great, it can be used to guide policy decisions. But what should policymakers do when they are faced with an important decision about how to react to a large shock, and they reach into their black bag of models and none of them seem to fit?

One approach is what Paul Krugman does so well, something Narayana Kocherlakota seems to also be doing. Reach for simple models that get to the heart of the problem and hence offer guidance about what to do next. These models are not intended to explain the world generally, they are not "science" in that respect, they are intended to shine a light and provide guidance on a very narrow issue. It takes considerable skill to do this since, as I argued yesterday, it requires the practitioner to thoroughly understand the pitfalls of the simple approach, the ways in which it could go wrong.

So I think Narayana and others are correct to reach for simple models for guidance when they are faced with a decision that existing models do not address very well and there's no time to build a full-blown model of the problem.

My call to those who object that this approach is not "science," those who look down their noses at people like Krugman and Kocherlakota when they adopt this approach, is this: What is the scientific way to diagnose the economy in real time and confront unknown or uncertain pathologies? As I noted in another essay that discusses this problem, doctors have tests that can be done very quickly to provide a diagnosis, and they can then use broad-spectrum drugs and other approaches to try to heal the patient when the tests point to unknown causes.

What tests should we do that are quick and informative? There are lots of data, but what should we be examining to try to diagnose problems effectively before they get really bad? If we detect a problem, and don't fully understand it, what's the most robust way to attack it? What policies tend to work on a broad variety of underlying causes? Are there tests that can guide us to the correct robust policy?

My reaction when the crisis hit, and ever since, was to recommend a "portfolio of policies." People who say only monetary policy will work, or only fiscal policy will work, blah, blah, blah are talking with more confidence than is justified by the models they are using. I decided early on that I really didn't know for sure which macroeconomic model was best. I had my preferences, strong preferences, but I couldn't say for sure that the model I preferred was correct. And it didn't really apply very well to the financial crisis in any case.

So, I thought, why not do what a doctor would do and give a broad-spectrum drug that tends to work no matter the cause? There is the danger of side effects. If we aren't sure about which policy will work and we give full doses of both monetary and fiscal policy, only to have them both work, the side effect of inflation could occur as the economy heals. But to me the side effect was far less worrisome than the disease itself, and in any case the side effect could be controlled by backing off the dosage once the patient was up and about once again. But what are the optimal weights for monetary and fiscal policy in such a situation? What else ought to be in the portfolio of policies (e.g. policies that can help even if the problem is structural rather than cyclical)? What guidance can we give policymakers?

Those who believe in the science of monetary policy can sneer at the Krugman/Kocherlakota approach all they want, but there's a real (time) problem to be solved here, and we could use their help. As I said above, this is an area where the Fed has considerable experience; real-time analysis is a large part of what they do, and my push for Federal Reserve banks to interact more through blogs is partly for this reason. Hearing how Federal Reserve policymakers approach these problems would be useful.

But it would also be useful if the profession more generally would get on board and help us understand how to better solve the difficult questions that arise when decisions must be made based upon only a partial understanding of the problem that is affecting the economy. In the long run it's still important to build new, full-blown models that can explain the problem and provide guidance. Macroeconomists are certainly doing that presently, for example as they try to provide better models than we had before of how a breakdown in financial intermediation can affect the real economy. But work on how to better conduct real-time analysis is not getting as much attention, and that's something that needs to change.

Saturday, March 31, 2012

knzn is a Keynesian

knzn explains why he is a Keynesian:

Bullish It, by knzn: ...Smith’s blog leads me to think about the issue of macroeconomics as a field. It seems (especially from the comment thread) that the Old Keynesians and the New Monetarists are at each other’s throats (but, interestingly, the newly christened Market Monetarists – who have some claim to being the legitimate intellectual heirs of the Old Monetarists – basically seem to be on the same side as the Old Keynesians on the major issues here; and the New Keynesians can break for either side depending on whether they’re more Keynesian or more New). Obviously I’m more sympathetic to the Old Keynesians than the New Monetarists, otherwise maybe my pseudonym would be “dsge” instead of “knzn.”

Here’s my take: to begin with, economics is basically bulls**t. I mean, it’s necessary bulls**t, sometimes even useful bulls**t, but I’m extremely skeptical of people who think economics is a science or that it could be a science. We have to make policy decisions (and investment decisions and personal consumption decisions etc.), and we have to have some basis for making them. We could just use intuition, and we often do, but it’s helpful to use logical thought and empirical data also, and systematic study using fields like economics can help us to clarify our intuition, our logical arguments, and our interpretation of the empirical data. The same way that bulls**t discussions that don’t make any pretense at being science can help.

Economics is bulls**t because it relies on the premise that human beings behave in a systematic way, and they don’t. Once you have done enough research to convince yourself that they behave in a certain way, they will change and start behaving in another way. Particularly if they read your research and realize that you’re trying to manipulate them by expecting them to continue behaving the way they have. But even if they don’t read your research, they may change the way they behave just because the zeitgeist changes – cultural sunspots, if you will.

The last paragraph may vaguely remind you of the Lucas critique. Lucas basically said that macroeconomics (as it was being practiced at the time) was bulls**t, but he held out the hope that it could receive micro-foundations that wouldn’t be bulls**t. The problem with Lucas’ argument, though, is that microeconomics is also bulls**t. And Noah Smith, writing some 36 years after the Lucas critique and observing its unwholesome results, takes it one step further by saying, if I may paraphrase, “Yes, the microeconomics upon which modern macro has now been founded is indeed bulls**t, but if we do the micro right, then we can come up with non-bulls**t macro.”

Yeah, I doubt it. Maybe we can come up with slightly better macro than what we’ve got now, but the underlying micro is never going to be right. Experimental results involving human subjects are inevitably subject to the micro version of the Lucas critique: once the results become well-known, they become part of a new environment that determines a new set of behavior. And the zeitgeist will screw with them also. And so on. And in any case, even if the results were robust, I’m skeptical that we can really build them into a macro model or that it would be worth the trouble even if we could. Economics will always be bulls**t.

Now there’s a case for doing rigorous bulls**t, at least as a potentially useful exercise. That’s what I think DSGE modeling is: it’s a potentially useful exercise in rigorous bulls**t. And I don’t begrudge the work of people like Steve Williamson: I think there's some rigorous bulls**t there that may be worth talking about. But in general, when it comes to bulls**t, there is not a monotonic relationship between rigor and usefulness. And to put all your eggs in the rigorous bulls**t basket – not only that, but in one particular type of rigorous bulls**t basket, because rigor does not live by rational equilibrium alone – is something that not even Pudd’nhead Wilson could advocate.

So I’m going to stick with sloppy Old Keynesian models as my main mode of macroeconomic analysis. They’re bulls**t. They’re not rigorous bulls**t. But as bulls**t goes, they’re pretty useful. A lot more useful than unaided intuition. And they’re easy enough to understand that we can have a reasonable idea of where their unrealistic assumptions are likely to lead us astray. Of course all economic models have unrealistic assumptions, but hopefully our intuition allows us to correct for that condition when applying the models to the real world. If the model is too complicated for the typical economist to understand how the assumptions generate the conclusions, then the unrealism becomes a real problem.

When you need an answer fast to a question that the newer models don't address sufficiently, and there are many important questions that fall into this category, and when you don't have time to build a new model before needing to answer -- a situation policymakers face constantly -- then the Old Keynesian IS-LM/MP model can fill the void. It is very easy to use for most questions, in part because it has been explored so thoroughly over the decades. I suspect knzn faces this situation often in his job in finance, i.e. he needs an answer today, wants a model for guidance, and doesn't have time to build a full-blown DSGE model, simulate it, etc., so the IS-LM/MP model meets the need.

But if this approach is adopted, I think it's important not to forget the lessons of the more modern models. For example, the old and new IS curves differ in how they handle expectations of the future. The new model accounts for these expectations; the old models don't. If changes in expectations about the future are arguably unimportant, and other important differences between the models are similarly unimportant, then the old IS-LM/MP model can provide a good approximation. But when these expectations are important, using the old models can cause you to miss important feedback effects from the expected future to the actual present.
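In symbols, the contrast looks like this (a standard textbook comparison, with the notation stripped down): the old IS curve is static,

$$y_t = -\alpha r_t + \varepsilon_t,$$

while the New Keynesian IS curve is an expectational difference equation,

$$y_t = E_t y_{t+1} - \frac{1}{\sigma}\,(i_t - E_t \pi_{t+1}) + \varepsilon_t.$$

Iterating the second equation forward shows that output today depends on the whole expected path of future short-term real rates, so credible announcements about future policy move demand now. That forward-looking channel is precisely what the old curve cannot capture, and it is the thing to keep in mind when using IS-LM/MP as a shortcut.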

The best of both worlds is, I think, better than either alone. The art is knowing what is "best" in each of the two models.

Thursday, March 29, 2012

"Macroeconomics with Heterogeneity: A Practical Guide"

This is a bit on the wonkish side, but since I've talked a lot about the difficulties that heterogeneous agents pose in macroeconomics, particularly for aggregation, I thought I should note this review of models with heterogeneous agents:

Macroeconomics with Heterogeneity: A Practical Guide, by Fatih Guvenen, Economic Quarterly, FRB Richmond: This article reviews macroeconomic models with heterogeneous households. A key question for the relevance of these models concerns the degree to which markets are complete. This is because the existence of complete markets imposes restrictions on (i) how much heterogeneity matters for aggregate phenomena and (ii) the types of cross-sectional distributions that can be obtained. The degree of market incompleteness, in turn, depends on two factors: (i) the richness of insurance opportunities provided by the economic environment and (ii) the nature and magnitude of idiosyncratic risks to be insured. First, I review a broad collection of empirical evidence—from econometric tests of "full insurance," to quantitative and empirical analyses of the permanent income ("self-insurance") model that examine how it fits the facts about life-cycle allocations, to studies that try to directly measure where economies place between these two benchmarks ("partial insurance"). The empirical evidence I survey reveals significant uncertainty in the profession regarding the magnitudes of idiosyncratic risks, as well as whether or not these risks have increased since the 1970s. An important difficulty stems from the fact that inequality often arises from a mixture of idiosyncratic risk and fixed (or predictable) heterogeneity, making the two challenging to disentangle. Second, I discuss applications of incomplete markets models to trends in wealth, consumption, and earnings inequality both over the life cycle and over time, where this challenge is evident. Third, I discuss "approximate" aggregation—the finding that some incomplete markets models generate aggregate implications very similar to representative-agent models. What approximate aggregation does and does not imply is illustrated through several examples. Finally, I discuss some computational issues relevant for solving and calibrating such models and I provide a simple yet fully parallelizable global optimization algorithm that can be used to calibrate heterogeneous agent models. View Full Article.

Friday, March 16, 2012

FRBSF: Structural and Cyclical Elements in Macroeconomics

I am here today:

Structural and Cyclical Elements in Macroeconomics
Federal Reserve Bank of San Francisco
Janet Yellen Conference Center, First Floor
March 16, 2012

AGENDA

Morning Session Chair: John Fernald, Federal Reserve Bank of San Francisco
8:10 A.M. Continental Breakfast
8:50 A.M. Welcoming Remarks: John Williams, Federal Reserve Bank of San Francisco
9:00 A.M. Jinzhu Chen, International Monetary Fund, Prakash Kannan, International Monetary Fund, Prakash Loungani, International Monetary Fund, Bharat Trehan, Federal Reserve Bank of San Francisco, New Evidence on Cyclical and Structural Sources of Unemployment (PDF - 462KB)
Discussants: Steven Davis, University of Chicago Booth School of Business, Valerie Ramey, University of California, San Diego
10:20 A.M. Break
10:40 A.M. Robert Hall, Stanford University, Quantifying the Forces Leading to the Collapse of GDP after the Financial Crisis (PDF - 826KB), Discussants: Antonella Trigari, Università Bocconi, Roger Farmer, University of California, Los Angeles
12:00 P.M. Lunch – Market Street Dining Room, Fourth Floor  
Afternoon Session Chair: Eric Swanson, Federal Reserve Bank of San Francisco
1:15 P.M. Charles Fleischman, Federal Reserve Board, John Roberts, Federal Reserve Board, From Many Series, One Cycle: Improved Estimates of the Business Cycle from a Multivariate Unobserved Components Model (PDF - 302KB)  
Discussants: Carlos Carvalho, Pontificia Universidade Católica, Rio de Janeiro, Ricardo Reis, Columbia University
2:35 P.M. Break
2:50 P.M. Christopher Carroll, Johns Hopkins University, Jiri Slacalek, European Central Bank, Martin Sommer, International Monetary Fund, Dissecting Saving Dynamics: Measuring Credit, Wealth, and Precautionary Effects (PDF - 1.18MB)
Discussants: Karen Dynan, Brookings Institution, Gauti Eggertsson, Federal Reserve Bank of New York
4:10 P.M. Break
4:25 P.M. Andreas Fuster, Harvard University, Benjamin Hebert, Harvard University, David Laibson, Harvard University, Natural Expectations, Macroeconomic Dynamics, and Asset Pricing (PDF - 663KB)
Discussants: Yuriy Gorodnichenko, University of California, Berkeley, Stefan Nagel, Stanford Graduate School of Business
5:45 P.M. Reception – West Market Street Lounge, Fourth Floor
6:30 P.M. Dinner – Market Street Dining Room, Fourth Floor, Introduction: John Williams, Federal Reserve Bank of San Francisco, Speaker: Peter Diamond, Massachusetts Institute of Technology Unemployment and Debt

Wednesday, February 15, 2012

Fed Watch: Again With Potential Output

Tim Duy:

Again With Potential Output, by Tim Duy: St. Louis Federal Reserve President James Bullard graciously responded to my last post regarding his much considered speech. I actually do not enjoy drawing Bullard's attention, in that it makes me fear that one day I will find that my access to FRED has been disabled.

What Bullard and I agree on is this: There are different estimates of potential GDP. I discussed this point last year:

Now, before you roll your eyes, as I am inclined to do, note the CBO estimate of potential output is not the only estimate. Menzie Chinn reminds us of the variety of estimates of potential output, some of which suggest that, at the moment, the output gap is actually positive.

In that post I discussed some possible reasons we might consider a downward shock to potential GDP. Near the end, I concluded with this:

While not discounting the probability that some structural factors are at play, the primary challenge facing the US economy is insufficient demand. Optimally, I think the best solution to this challenge is that demand emerges from the external sector – and here I mean NET exports, export and import competing industries. This source of demand would support needed structural change, ultimately for the good of the US and global economies. This adjustment requires a relatively complicated expenditure-switching story on a global basis. I don’t know how to avoid such a story. Barring this outcome, one falls back on fiscal policy, which can surely do the job, but risks maintaining the current pattern of global imbalances. And perhaps such concerns are overblown; after all, so far the fears of a Dollar/current account crisis have not emerged.

Bullard takes a different approach. First, he rejects the CBO estimate out of hand because it is not the outcome of a "full DSGE model" and "there is nothing about the CBO potential calculation that allows "bubble" levels of output." Before we reject the CBO model outright, it is worth considering its basic effectiveness as a guide:

[Chart: actual real GDP plotted against the CBO estimate of potential output]

I see two recent episodes of output in excess of CBO potential, both of which were associated with what I believe were asset-price bubbles and which also induced monetary tightening to stem inflationary pressures (which seems to contradict Bullard's assertion that the CBO estimate leaves no room for bubbles). If this was a significant overestimate of potential output during the housing bubble, I would have expected more severe inflationary pressures.

Of course, even if the CBO estimates were roughly correct during the bubble, perhaps there has been a significant downward shift in potential since then. And here again I think Bullard and I can find common ground: potential output is not a measured variable, it is estimated. We really shouldn't blindly follow such estimates, but instead look for corroboration in other data. I tend to fall back on unit labor costs for a signal that wage pressures are outstripping productivity growth and threatening to sustain an inflationary dynamic:

[Chart: unit labor costs]

I don't see a reason for concern at this point. But put aside the CBO estimate for a moment, and move on to the crux of Bullard's argument:

If households and businesses had ignored the house price developments as a sort of amusing side show, it would not have been so important. But our rhetoric about the decade suggests otherwise. Households consumed more through cash-out refinancing, developers built more, borrowing increased, Wall Street produced new financially-engineered products to feed the boom, and ancillary industries like transportation thrived. Output went up, and labor supply was higher than it otherwise would have been.

There are two parts to this theory. One is a demand side story: the debt-fueled housing bubble supported consumption and investment, supporting actual GDP growth. I don't think anyone disagrees with that view. The second part of the story is supply side: the extra activity induced additional labor supply. With the housing bubble now popped, all of the related output and labor supply melt away:

So, what Irwin's picture is doing is taking all of the upside of the bubble and saying, in effect, "this is where the economy should be." But that peak was based on the widespread belief that "house prices never fall." We will not return to that situation unless the widespread belief returns. I am saying that the belief is not likely to return--house prices have fallen dramatically and people have been badly burned by the crash. So I am interpreting your admonitions on policy as saying, in effect, please reinflate the bubble. First, I am not sure it is possible, and second, that sounds like an awfully volatile future for the U.S., as future bubbles will burst once again.

Now, I agree that the bubble cannot be reflated, nor should it be. But this leads into what I don't like about Bullard's housing bubble story. From my post last July:

Also arguing for a largely demand side explanation to the current weak employment numbers is what looks like a pretty obvious link between asset bubbles and full employment over the last decade. As long as households had a mechanism to support demand, achieving full employment was not a problem. If not households, then why can’t another form of demand fill the gap?

In Bullard's model, the housing bubble popped, and millions of people who were employed are no longer employed, nor should we expect them to be employed (or to reenter the labor force), as there is no way to do so absent another bubble. This seems to me an obvious place for fiscal policy and monetary policy to step into the breach and compensate for the lost demand. That the labor and output of millions of people should be lost simply because they no longer believe that housing prices always rise is a gross waste of resources.

You can tell a story in which that bubble-driven demand was necessary to compensate for negative equilibrium interest rates on risk-free assets (driven by excessive saving by Asian central banks and aging demographics in the developed world). Rather than wait for another asset bubble to come along and lift demand, or twiddle your thumbs hoping another recession doesn't hit while you are at the zero bound, you could pull out the old Bernanke playbook and implement an even more aggressive mix of fiscal and monetary policy to compensate for the lost demand and flood the world with risk-free assets.

Now, as to Bullard's appeal instead to a New Keynesian framework, I am more sympathetic. Basu and Fernald opine:

...the major effects of the adverse shocks on potential output seem likely to be ahead of us. For example, the widespread seize-up of financial markets has been especially pronounced only in the second half of 2008. We expect that as the effects of the collapse in financial intermediation, the surge in uncertainty, and the resulting declines in factor reallocation play out over the next several years, short-run potential output growth will be constrained relative to where it otherwise would have been.

This is similar to my thoughts that somewhere in the background there is a need for some structural change, toward export and import-competing industries. That said, I still find it hard to believe that this is the primary story given that the downturn negatively affected employment across almost all industries. If structural adjustment were the primary issue, I would have anticipated a narrower range of affected industries.

Bottom Line: Bullard and I agree that there are different estimates of potential output. I think that if he wants to throw out the CBO estimate, he needs to provide another estimate to serve as a policy guide. And I would agree that any estimate, CBO included, needs to be continuously monitored in light of actual incoming data. I still disagree with his asset-bubble model of potential GDP shifts. At its core it is a demand story with maybe a second-order labor supply aspect, and it does not explain why no other source of demand can compensate for the lost housing bubble and induce higher labor supply. In the past I have considered reallocation stories similar to what can be derived from a time-varying NK measure of potential output, but again I question whether this is the primary concern at the moment.

And if you just can't get enough of this debate, Barkley Rosser suggests there are arguments in favor of Bullard's position.

Let me add one note of my own. Bullard argues that the difference between the flexible and sluggish price outcomes in a New Keynesian model, measured by the difference between the "sticky price and flexible price level of output," is superior to the standard output gap measure. I have no argument with that in the context of a standard NK model. However, this measure is based upon the assumption that Calvo-type price rigidity (or something similar) is driving economic fluctuations. If this is not the way in which shocks are being transmitted to the real economy in this crisis, then this measure may not be the right index for setting monetary policy. I think stickiness in housing prices is part of the story, and perhaps wage rigidity as well -- so price stickiness is part of the slow recovery (though it's not clear that housing really fits the Calvo framework) -- but I'm not convinced this fully captures the breakdown in financial intermediation, balance sheet losses, and solvency/liquidity issues (for banks, businesses, and individuals) that characterized the recession, and that are still holding back the recovery. If we haven't captured the important ways in which shocks are affecting the real economy in our models, then the models won't serve as effective guides to policy.
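For readers keeping score, the two gap concepts can each be written in a line (standard definitions): Bullard's preferred NK gap is $x_t = y_t - y_t^f$, where $y_t^f$ is the output the same model would generate if prices were fully flexible, so it moves with every shock hitting the model; the CBO-style gap is $y_t - y_t^{pot}$, where $y_t^{pot}$ is a slow-moving statistical estimate of trend capacity. The two measures can differ in size and even in sign, which is exactly why the choice matters for policy, and why my reservation about the Calvo assumption carries over to the NK gap measure.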

Friday, February 10, 2012

Old versus New Keynesian Models Revisited

In light of comments such as this from Robert Waldmann (who doesn't get shrill with just anyone, so I'm honored to make his list), I think I should elaborate a bit more on my view of Old versus New Keynesian models:

Mark Thoma explains the very basics of New Keynesian economics and I am very rude. I came here clicking a link in a post where you indignantly deny that you are an old Keynesian. I ask what has been added by the very simple inter-temporal optimization?
It seems to me that New Keynesian macro consists (as here) in writing models with optimizing agents which behave the way old Keynesian models behave. I ask why not cut out the middle man?
The model as written has implications other than that there is something like an IS curve. As you note, the true expected value (that is rationally expected) of future GDP affects current GDP. Also the real interest rate affects the rate of growth of consumption.
The problem is that, to the extent new Keynesian models differ from old Keynesian models, the data are not kind to the new models. ...
The newness is all about intertemporal optimization without liquidity constraints. It clearly gives false implications. So the model is modified so that it acts just like an old Keynesian model. How is this a worthwhile activity ?
Notably, the micro foundations are not justified on the assumption that people really intertemporally optimize with rational expectations. ... There is no reason to believe that the definitely false assumptions that new Keynesians like to make are better than any other definitely false assumptions. ...
What, of any value, have macroeconomists added to Keynes? ...

As Robert notes, one big difference between the Old and New Keynesian models is the way in which expectations are treated. The older models do not incorporate expectations, e.g. expected future monetary and fiscal policy, in an acceptable way. When expectations of current and future events are unimportant, the distinction between the Old and New models is not that large. But when expectations do matter -- as I believe they often do -- then the older models can miss important feedback effects.

For example (which some will recognize as a version of the Lucas critique, something Robert refers to indirectly in a part of his post that I omitted), suppose that you take a survey with a hidden camera and discover that there are 10 cars per hour speeding on a given section of road. Thus, you figure, at $200 per ticket the city will make $2,000 per hour (gross) by stationing traffic police along that part of the road.

However, after stationing traffic police at the location, the actual number of tickets that are written is far short of projections. Why is that? It's because people will call/tweet/text their friends and family and warn them -- don't speed today, they are giving tickets. People who travel the route many times a day will notice the officers with radar guns and, if they avoid detection the first time -- or even, I suppose, if they don't -- they will be careful on subsequent passes. Even if people are mostly surprised the first day and revenues are near projections, on day 2 people will expect officers at those locations and be more careful. And if not on day 2, then by day 3 they will surely have learned.

And that is the key. The policy works only so long as it is a surprise. If people know about the policy -- if they expect it in advance -- then they will be careful to avoid getting a ticket. If every single person expects the policy, if they know the officers will be there, then there shouldn't be any tickets at all if people behave rationally and there is no "stickiness" in the model that prevents them from taking evasive action.
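Here is a toy simulation of the speed-trap story (all numbers invented for illustration; this is a sketch of the expectations mechanism, not an estimate of anything):

import numpy as np

# Drivers adaptively learn the probability that police are watching.
# As beliefs catch up with the policy, ticket revenue collapses.
rng = np.random.default_rng(0)
fine = 200.0                          # dollars per ticket
catch_prob = 0.9                      # chance a speeder is ticketed when police are present
benefits = rng.uniform(10, 150, 10)   # each driver's value of speeding, per pass
gamma = 0.25                          # adaptive-learning weight on new information

p_hat = 0.0                           # perceived probability that police are present
for day in range(1, 8):
    # A driver speeds only if the benefit exceeds the expected fine.
    speeding = benefits > p_hat * catch_prob * fine
    revenue = speeding.sum() * catch_prob * fine
    print(f"day {day}: believed enforcement={p_hat:.2f}, "
          f"speeders={speeding.sum()}, revenue=${revenue:,.0f}/hr")
    p_hat += gamma * (1.0 - p_hat)    # police really are there every day

Day one comes in near the naive projection; within a week the revenue has largely evaporated, not because the police changed anything, but because expectations did.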

It's no different with macroeconomic policy. If the government puts a policy in place, e.g. new taxes, then people will do their best to avoid having it harm them. If they can take cost effective actions to avoid the new taxes, then they will. A failure to account for this can lead to big mistakes in forecasts of the impact of new taxes on tax revenue in the same way that a failure to account for the fact that when people know that officers are watching they change their behavior causes revenue projections to be wrong.

Old Keynesian models do not account for these expectation effects satisfactorily, and that's one of the reasons I think the newer models provide a stronger framework to evaluate the effects of policy.

[This is evident in this post where the NK IS curve (which is really an Euler equation -- i.e. in essence a first order condition in a maximization problem) contains the term EtYt+1 while the standard IS curve does not.]
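To spell the bracketed point out (this is a standard derivation, nothing novel): the household's first-order condition for the consumption/saving choice is the Euler equation

$$u'(C_t) = \beta (1 + r_t)\, E_t\, u'(C_{t+1}),$$

and with CRRA utility, log-linearizing gives

$$c_t = E_t c_{t+1} - \frac{1}{\sigma}\,(i_t - E_t \pi_{t+1} - \rho),$$

where $\sigma$ is the inverse intertemporal elasticity of substitution and $\rho$ here is the household's discount rate. In the simplest NK model consumption equals output, so this is the forward-looking IS curve, with the expectation term $E_t Y_{t+1}$ built in from the start.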

But let me be clear about what I mean by a New Keynesian model. For me it is nothing more than a dynamic, stochastic general equilibrium model with some type of friction tacked onto it (or arising endogenously, which is more desirable though harder theoretically), and some sort of expectations/learning mechanism embedded within it.

I am not at all convinced, however, that the type of friction that is used in garden variety versions of the NK model -- the Calvo price stickiness mechanism -- is the correct type of friction to characterize the recession we have just experienced (nor do I have much faith in simulations of, say, monetary or fiscal policy that rely upon this assumption to generate policy effects). The type of transmission mechanism that is operable in this model, price stickiness that causes relative prices to go awry, which in turn causes resources to be misdirected, does not seem to me to capture the essence of the breakdown in financial intermediation at the heart of the financial collapse (though stickiness in housing prices and perhaps wages may help to explain the pace of the recovery, even then it is nowhere near the full story). I think a friction/information problem/market failure of some sort is involved, but the connections between the real and financial sectors that are needed to explain what we have just experienced are not present in these models. There are versions of these models that take steps in this direction, and work is going on right now to try to solve this problem and connect the real economy to the financial sector in a meaningful way -- some of which is making inroads -- but I am just not convinced that the models we have right now properly capture the transmission mechanism for shocks of the type we have just experienced. So there is definitely more work to be done. But I don't think we solve the problem by going back to older constructions.

(However, as I've argued before, in the meantime there are times when the old-fashioned IS-LM model delivers better answers than modern models, or at least serves as a better rough guide. This is, in part, because the older models were built to answer the kinds of questions we confront today, questions that arose out of the Great Depression, while the NK model was built to explain milder fluctuations, the type we saw in the Great Moderation. These milder fluctuations may very well be driven by price stickiness of the type characterized by Calvo-type models, but Calvo models don't do so well when confronted with a financial meltdown. But if you are going to take guidance from the older models, it is essential that you understand the limitations of the model -- this should not be done without a thorough knowledge of the pitfalls involved and where they can and cannot be avoided -- the kind of knowledge someone like Paul Krugman surely has at hand.)

I believe, then, that the use of dynamic, stochastic, general equilibrium structures that incorporate expectations in a defensible way is the way to go. Thus, there is no need to discard the set of tools that we have. However, though we have the tools, for the most part anyway, we failed to ask the right questions. This is partly a technical issue. We use representative agent models to set aside the difficult problems involved with aggregation across diverse agents, but models where there are connections between the real and financial sectors often require the representative agent structure to be set aside. Setting it aside means confronting the difficult aggregation problems directly, and this is an area where we are frantically developing tools right now so that we can overcome this limitation. But in the past it was easier to simply assume a single, representative agent and not deal with the issue at all. Handling it this way wasn't thought to be a big problem since our overconfidence in our abilities led us to believe that we had solved the problem of large, long-lived depressions driven by financial meltdowns. Why go to all the trouble of developing the technical apparatus to ask what-if questions about financial meltdowns if there was little chance of this happening? It just wasn't worth the trouble -- then anyway -- now we (hopefully) know better.

Let me also address, briefly, Robert's concern about the rationality assumption. As someone in a department that specializes in learning models (and we are in the process of hiring a great senior faculty member to augment that expertise), I am not going to defend the rationality assumption as it is usually made in DSGE models. But that doesn't mean that plugging a better expectation formation mechanism into these models, along with the proper frictions and expectational feedback mechanisms, can't produce reasonable results.

There is much more to be said about all of this, but this is running fairly long already, so let me just close by noting that when I say I support the newer models over the older, I don't mean to say that the new models have gotten us anywhere near where we need to go. I think they are pointing in the right direction -- though I wouldn't mind if some theorists backed up and then rebuilt things along a different path, so that we'd have more alternatives to test against each other and more chances to stumble toward a better understanding of how the macroeconomy works -- but there is no doubt that there is considerable work yet to be done.

Monday, February 06, 2012

Old versus New Keynesian Models

In response to Tyler Cowen, if the alternative hypothesis to his null that Old Keynesian models have failed is New Keynesian models, and he has rejected the Old in favor of the New, then I don't have many problems with his overall conclusion (which is not to say I agree with every detail of his argument). I thought his alternative hypothesis was broader than just the New Keynesian model, i.e. that he was arguing against Keynesian models of all varieties, but he says "I very much prefer New Keynesianism over Old." So if he is really saying the data support the New Keynesian model, I don't have much to disagree with. (See here for a post highlighting the difference between Old and New Keynesian IS-LM models. I posted this when people tried to claim I support Old Keynesian models as a way of discrediting what I have to say, and I've posted lots of New Keynesian work on fiscal multipliers as well. But people like Williamson, who Tyler points to authoritatively for reasons that escape me, still make the false charge that I promote old-fashioned Keynesian ideas.)

A few notes:

People seem to forget that the federal fiscal policy efforts were almost entirely offset by declines in spending and/or tax increases at the state and local level. Given that, it's not clear why we should expect to see a big effect in the data on output and employment. Fiscal policy at the federal level simply stopped things from getting even worse than they already were -- the bottom would have been much worse without it. Thus, when natural recovery finally began to take hold, it did so from a higher base than without the federal effort (and perhaps started sooner). Think of it this way -- fiscal stimulus allowed us to hold ground we would have lost otherwise -- again, things would have been much worse without it -- until the natural recovery process was ready to begin.

I don't see how the fact that the economy is presently recovering at a rate where we will get to full employment by 2019 (or a few years earlier with very optimistic projections) says much about the effectiveness of fiscal policy. It kept things from getting worse, then it ran out, and now we are still looking at a relatively slow recovery by historical standards. What we want to know, but won't find out due to opposition in Congress, is whether the recovery would be even faster from this point forward with additional fiscal policy efforts. Nobody ever said the economy wouldn't recover without stimulus; it's the rate of recovery that is at issue. Past efforts have kept GDP and employment from declining even more and made it easier for the natural recovery process to take hold, and additional fiscal policy timed correctly could have helped even more.

On the "timed correctly" point, people also seem to forget about policy lags. The same people who were arguing that infrastructure spending would take too long, that by the time it took hold the economy would already be recovering and it wouldn't be needed, now criticize policy as though it happens instantaneously. It doesn't. How much of the recovery is being driven by the lagged effects of our fiscal policy efforts? That will need to be teased out of the data -- a difficult task since monetary policy easing was going on at the same time and those effects have to be separated from fiscal policy and other factors that affect output and employment. For example, a firm that sees extra spending as a result of tax cuts may do a bit better than otherwise, and then decide to invest in an expansion of the business. It takes time to realize things are a bit better, plan the expansion, and then build it. The expansion is properly attributed to fiscal policy efforts, but this is very hard to see in the data (note that many of the tax measures are still in place, and that spending can also generate these types of effects). Tyler says "Frankly, it is a bit of an embarrassment for many commentators that the (admittedly weak) recovery is coming right after the end of the fiscal stimulus," but I don't see why that necessarily proves the case. Again, past policy efforts allowed us to take off from a higher base, and likely sooner than otherwise, and policy lags (plus the continuation of many tax breaks) imply that fiscal policy could still be active. I think the main effect of fiscal policy was to stop things from getting worse, I am not saying that fiscal policy is still necessarily present to a significant extent, only that we can't rule out that it is still helping without doing the economtric analysis.

Finally, in passing, liquidity traps exist, at least in theory, in both Old and New versions of the model (e.g. see the discussion in Carl Walsh's text on monetary economics). I disagree with Tyler's point on the liquidity trap -- I think the evidence suggests we did enter a liquidity trap and that it is still a problem -- but in any case the failure to find a liquidity trap does not distinguish one model from the other (though to be fair, it's possible to construct versions of both models where a liquidity trap does not exist, but this is easier in the New Keynesian model than in the Old).

Update: Paul Krugman (this is part of a longer post):

...Tyler Cowen now says that he was making the case for New Keynesianism in a recent post that actually said,

The big winners, apart from the American public?: real business cycle theory.

Oh well. I guess we’ve always been at war with Eastasia.

Friday, January 13, 2012

"Ideology and Demand Denial"

Simon Wren-Lewis on ideology and demand denial (I agree with his comments on the "asymmetry" in the views of Keynesian and non-Keynesian economists, and that in many cases ideology is the likely explanation for the difference):

Ideology and Demand Denial, mainly macro: ...What the debate over fiscal policy has revealed is an underlying generic antagonism towards Keynesian analysis.

There is an asymmetry here. Keynesian economists do not deny that productivity or other supply side shocks can often be important. On the other side there appears to be, among many at least, a belief that Keynesian economics is never relevant. What this amounts to is what Krugman and others call demand denial. Yet the basis in economic theory for demand denial appears very unclear. Say’s Law, or maybe some kind of quantity theory with fixed velocity, would do it – but these were really bad ideas that the profession dismissed many decades ago. ...

Keynes had a number of thoughts on this, as the following from the General Theory shows (‘it’ in the first sentence is a theory that involves demand denial).

That it reached conclusions quite different from what the ordinary uninstructed person would expect, added, I suppose, to its intellectual prestige. That its teaching, translated into practice, was austere and often unpalatable, lent it virtue. That it was adapted to carry a vast and consistent logical superstructure, gave it beauty. That it could explain much social injustice and apparent cruelty as an inevitable incident in the scheme of progress, and the attempt to change such things as likely on the whole to do more harm than good, commended it to authority. That it afforded a measure of justification to the free activities of the individual capitalist, attracted to it the support of the dominant social force behind authority.

Now beautiful though this passage is, a good deal has changed since 1936. New Keynesian theory is a ‘consistent logical superstructure’, so there is no intellectual prestige involved in denying its relevance (except, perhaps, to fellow believers). Yet two sentences still ring true. The first is the idea that austerity is virtuous. Some of the popular discourse around fiscal policy has moral overtones, perhaps stemming from the idea that governments, like individuals, have to practice self control. Now while I think seeing economics as a morality play is generally unhelpful, in the case of fiscal policy there is a problem of deficit bias: governments over the last few decades have tended, on average, to spend too much or tax too little. (Some particular evidence, and a fairly comprehensive discussion of reasons for deficit bias, can be found here. ...) However deficit bias is a long term problem and a recession is not the time to start dealing with it.

The final sentence from Keynes also still rings true. One explanation for demand denial is that it has ideological roots. In the real world we have the problem of ensuring aggregate demand matches supply, and this requires state intervention – normally monetary policy. For those who want to argue that state intervention in the economy is generally a bad thing, it is embarrassing to acknowledge that there is one area where it is essential. But I get no joy in seeing ideology mess with economics, and so I would be more than happy to be convinced that there was another explanation for demand denial.

Tuesday, January 10, 2012

Simon Wren-Lewis: Mistakes and Ideology in Macroeconomics

Via Chris Dillow, this is Simon Wren-Lewis of Oxford University:

Mistakes and Ideology in Macroeconomics, by Simon Wren-Lewis: Imagine a Nobel Prize winner in physics, who in public debate makes elementary errors that would embarrass a good undergraduate. Now imagine other academic colleagues, from one of the best faculties in the world, making the same errors. It could not happen. However that is exactly what has happened in macro over the last few years.

Where is my evidence for such an outlandish claim? Well, here is Nobel Prize winner Robert Lucas:

But, if we do build the bridge by taking tax money away from somebody else, and using that to pay the bridge builder -- the guys who work on the bridge -- then it's just a wash.  It has no first-starter effect.  There's no reason to expect any stimulation.  And, in some sense, there's nothing to apply a multiplier to.  (Laughs.)  You apply a multiplier to the bridge builders, then you've got to apply the same multiplier with a minus sign to the people you taxed to build the bridge. 

And here is John Cochrane, also a professor at Chicago, and someone who has made important academic contributions to macroeconomic thinking:

Before we spend a trillion dollars or so, it’s important to understand how it’s supposed to work.  Spending supported by taxes pretty obviously won’t work:  If the government taxes A by $1 and gives the money to B, B can spend $1 more. But A spends $1 less and we are not collectively any better off.

Both make the same simple error. If you spend X at time t to build a bridge, aggregate demand increases by X at time t. If you raise taxes by X at time t, consumers will smooth this effect over time, so their spending at time t will fall by much less than X. Put the two together and aggregate demand rises.
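[To attach rough numbers to Wren-Lewis's smoothing step, an illustration of mine rather than his: a one-off tax of $X$ on a permanent-income consumer facing real interest rate $r$ lowers lifetime resources by $X$, so consumption falls each period by roughly the annuity value

$$\Delta C \approx \frac{r}{1+r}\,X.$$

With $r = 0.05$, spending at time $t$ falls by about $0.05X$, while the bridge adds the full $X$, leaving aggregate demand up by roughly $0.95X$ in the impact period.]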

But surely very clever people cannot make simple errors of this kind? Perhaps there is some way to re-interpret such statements so that they make sense... Brad DeLong tries very hard along these lines (see here for example), but just throws up inconsistencies.

I prefer to just note that if any undergraduate or graduate student in the UK wrote this in an exam, they would lose marks. The more interesting question for me is why the errors were made. ...

I want to suggest two answers. The first is familiarity with models. I cannot imagine anyone who teaches New Keynesian economics, or who talked to people who teach New Keynesian economics, making this mistake. This is because, in these models, we do have to worry about aggregate demand. We focus on consumption smoothing, and Ricardian Equivalence... I often tell my first year undergraduate students that if they write anything like ‘Ricardian Equivalence says fiscal stimulus will never work’, they are in danger of failing. ...

Lack of familiarity with New Keynesian economics may be partly explained by the history of macroeconomic thought that I briefly noted in an earlier post. As New Keynesian theory is an ‘add-on’ to the basic Ramsey/RBC model, it is possible to teach macro without getting round to teaching New Keynesian theory. However, what many people find difficult to understand is how monetary policy (or at least monetary policy as seen by pretty much every central bank) could be regarded as an optional add-on in macroeconomics.

The second difference between physics and macro that could lead to more mistakes in the latter is ideology. When you are arguing out of ideological conviction, there is a danger that rhetoric will trump rigour. In the next paragraph Cochrane writes

These ideas changed because Keynesian economics was a failure in practice, and not just in theory. Keynes left Britain 30 years of miserable growth. Richard Nixon said, “We are all Keynesians now,” just as Keynesian policy led to the inflation and economic dislocation of the 1970s--unexpected by Keynesians but dramatically foretold by Milton Friedman’s 1968 AEA address. Keynes disdained investment, where we now all realize that saving and investment are vital to long-run growth. Keynes did not think at all about the incentives effects of taxes. He favored planning, and wrote before Hayek reminded us how modern economies cannot function without price signals.   Fiscal stimulus advocates are hanging on to a last little timber from a sunken boat of ideas, ideas that everyone including they abandoned, and from hard experience. ...

Let’s not worry about where the idea that Keynes disdained investment comes from, or any of the other questionable statements here. This is just polemic: Keynes=fiscal expansion=planning=macroeconomic failure.  It is guilt by association. What on earth does fiscal expansion have to do with planning? Well, they are both undertaken by the state.

I have argued elsewhere that the problem too many macroeconomists have with fiscal stimulus lies not in opposing schools of thought, or the validity of particular theories, or the size of particular parameters, but instead with the fact that it represents intervention by the state designed to improve the working of the market economy. They have an ideological problem with countercyclical fiscal policy. But the central bank is part of the state, and it intervenes to improve how the economy works, so this ideological view would also mean that you played down the role of monetary policy in macroeconomics. So ideology may also help explain a lack of familiarity with the models central banks use to think about monetary policy. In short, an ideological view that distorts economic thinking can lead to mistakes. 

"Mechanisms vs Models"

I need to think about this more before signing on to or rejecting this argument, but here is Chris Dillow's response to the post above this one (and it provides a nice complement to the post that follows by Dan Little):

Mechanisms vs models, by Chris Dillow: Simon Wren-Lewis asks a good question about Robert Lucas’s and John Cochrane’s apparent misunderstanding of the balanced budget multiplier: how can very clever people make silly errors?

He suggests two good answers. I’d like to suggest a third. There are two different ways of thinking about economics - the model paradigm and the mechanism paradigm, and the former has crowded out the latter.

Simon says:

If you spend X at time t to build a bridge, aggregate demand increases by X at time t. If you raise taxes by X at time t, consumers will smooth this effect over time, so their spending at time t will fall by much less than X. Put the two together and aggregate demand rises.

This is clear and true. And it would be obvious to anyone using the mechanism paradigm. If you ask “What is the mechanism whereby higher taxes reduce consumer spending?” you pretty much walk into the notion of consumption smoothing. ...

But lots of brilliant economists don’t think merely in terms of mechanisms but rather build impressive models. And like photographers, they tend to fall in love with their models, which distracts them both from others’ models and from mechanisms.

This matters, because the importance of particular mechanisms varies from time to time. The social sciences, wrote Jon Elster:

can isolate tendencies, propensities and mechanisms and show that they have implications for behaviour that are often surprising and counter-intuitive. What they are more rarely able to do is state necessary and sufficient conditions under which the various mechanisms are switched on. This is [a] reason for emphasizing mechanisms rather than laws. Laws by their nature are general…Mechanisms by contrast make no claim to generality. (Nuts and bolts for the social sciences, p 9-10)

A good example of this lies in the idea of expansionary fiscal contraction. The virtue of this idea is that it draws our attention to mechanisms (a falling exchange rate, better corporate animal spirits, whatever) whereby fiscal contraction might boost the economy. The drawback is that these mechanisms are just unlikely to operate here and now. Yes, there’s a model that tells us that expansionary fiscal contraction can work. And there are models that say it can’t. But arguing about competing models misses the practical point.

Now, there is an obvious reply to all this. Models have the virtue of ensuring internal consistency, and thus avoiding potentially misleading partial analysis. However, I’m not sure whether this is an argument against mechanisms so much as against poor thinking about them.

When I was a student (back in the 80s!) I learned lots of models (OK, a few), but when I became a practising economist, I found them to be less useful in thinking about the economy than Elsterian thinking about mechanisms.

This is not to dismiss models entirely. I’m just saying that, insofar as they have uses other than as mental gymnastics for torturing students, it is because of the mechanisms contained within them. The parts might be more useful than the sum.
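To make the consumption-smoothing mechanism in Simon's bridge example concrete, here is a minimal sketch (my own illustration, not from either post; the 30-period horizon and zero interest rate are arbitrary simplifications):

```python
# Minimal sketch: a balanced-budget stimulus with consumption smoothing.
# Assumptions (illustrative): spend X on a bridge at time t, raise taxes
# by X at time t, and let a permanent-income consumer spread the tax hit
# evenly over a horizon of T periods (zero interest rate for simplicity).

def net_demand_effect_at_t(x, horizon):
    """Change in period-t aggregate demand from spending x and taxing x."""
    government_spending = x        # the bridge adds x to demand at time t
    consumption_cut = x / horizon  # smoothing spreads the tax over T periods
    return government_spending - consumption_cut

X, T = 100.0, 30
print(net_demand_effect_at_t(X, T))  # 96.67: demand rises by nearly the full X
```

Exactly as the quote says: spending rises by X, consumption falls by much less than X, so aggregate demand rises.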

Monday, January 09, 2012

DeLong: A Note On The Ricardian Equivalence Argument Against Stimulus

Brad DeLong:

A Note on Determinants of Aggregate Demand..., by Brad DeLong: Paul Krugman wrote:

A Note On The Ricardian Equivalence Argument Against Stimulus: I’ve tried to explain why Lucas and those with similar views are all wrong several times…. But it just occurred to me that there may be an even more intuitive way to see just how wrong this is: think about what happens when a family buys a house with a 30-year mortgage. Suppose that the family takes out a $100,000 home loan…. If the house is newly built, that’s $100,000 of spending that takes place in the economy. But the family has also taken on debt, and will presumably spend less because it knows that it has to pay off that debt.
But the debt won’t be paid off all at once — and there’s no reason to expect the family to cut its spending right now by $100,000. Its annual mortgage payment will be something like $6,000, so maybe you would expect a fall in spending by $6000; that offsets only a small fraction of the debt-financed purchase.
Now notice that this family is very much like the representative household in a Ricardian equivalence economy, reacting to a deficit financed infrastructure project like Lucas’s bridge; in this case the household really does know that today’s spending will reduce its future disposable income. And even so, its reaction involves very little offset to the initial spending.
How could anyone who thought about this for even a minute — let alone someone with an economics training — get this wrong? And yet as far as I can tell almost everyone on the freshwater side of this divide did get it wrong, and has yet to acknowledge the error.

Let me make two points:

First, in their defense, I would note that if the government buys the same goods as the private sector would have bought anyway and hands them out, then the family would cut its spending right now by $100,000, because then it is both the case that (a) you are poorer because of the future tax liability, and (b) your marginal utility of consumption right now is low because the government is giving you all of this stuff. But since what the government buys (roads, bridges, weather stations, human capital for twelve-year-olds, etc.) tends to be quite different from what the private sector buys, this defense is extremely shaky and limited.

Second, I remember--long ago--Bob Barro telling a bunch of us that "RE is just a Modigliani-Miller result for the government's balance sheet". And he was right. However, there is nobody who says: "corporate capital structure is irrelevant". Instead, people say: "corporate capital structure is relevant because it can help the corporation (a) create assets that those with preferred habitats are willing to pay healthy premiums to hold, and (b) minimize the appropriate combination of future monitoring, agency, and reorganization costs." Nobody takes MM to be the end of analysis: it is the start.

Yet a lot of people--for reasons I have never understood--take RE to be the end of the analysis...
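To put numbers on Krugman's mortgage analogy, here is a minimal sketch (the 4.25 percent rate is my assumption, chosen only because it reproduces the roughly $6,000 annual payment in his example):

```python
# Minimal sketch of Krugman's mortgage arithmetic (illustrative numbers).
# A $100,000 loan repaid over 30 years as a standard level annuity.

def annual_payment(principal, rate, years):
    """Level annuity payment on a fixed-rate loan."""
    return principal * rate / (1 - (1 + rate) ** -years)

loan, rate, years = 100_000, 0.0425, 30
payment = annual_payment(loan, rate, years)
print(f"annual payment: ${payment:,.0f}")          # about $5,960
print(f"first-year offset: {payment / loan:.1%}")  # about 6% of the purchase
```

Even a household that fully internalizes the debt offsets only about six percent of the debt-financed purchase in the first year, which is exactly Krugman's point.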

Friday, December 30, 2011

Paul Krugman: Keynes Was Right

There are quite a few people in denial about one lesson from the crisis -- the value of the Keynesian perspective:

Keynes Was Right, by Paul Krugman, Commentary, NY Times: “The boom, not the slump, is the right time for austerity at the Treasury.” So declared John Maynard Keynes in 1937, even as FDR was about to prove him right by trying to balance the budget too soon, sending the United States economy — which had been steadily recovering up to that point — into a severe recession. Slashing government spending in a depressed economy depresses the economy further; austerity should wait until a strong recovery is well under way.
Unfortunately, in late 2010 and early 2011, politicians and policy makers in much of the Western world believed that they knew better, that we should focus on deficits, not jobs, even though our economies had barely begun to recover... And by acting on that anti-Keynesian belief, they ended up proving Keynes right all over again.
In declaring Keynesian economics vindicated ... the real test ... hasn’t come from the half-hearted efforts of the U.S. federal government to boost the economy, which were largely offset by cuts at the state and local levels. It has, instead, come from European nations like Greece and Ireland that had to impose savage fiscal austerity as a condition for receiving emergency loans — and have suffered Depression-level economic slumps, with real GDP in both countries down by double digits.
This wasn’t supposed to happen, according to ... the Republican staff of Congress’s Joint Economic Committee ... report titled “Spend Less, Owe Less, Grow the Economy.” It ridiculed concerns that cutting spending in a slump would worsen that slump, arguing that spending cuts would improve consumer and business confidence, and that this might well lead to faster, not slower, growth.
They should have known better...
Now, you could argue that Greece and Ireland had no choice about imposing austerity ... other than defaulting on their debts and leaving the euro. But another lesson of 2011 was that America did and does have a choice; Washington may be obsessed with the deficit, but financial markets are, if anything, signaling that we should borrow more. ...
The bottom line is that 2011 was a year in which our political elite obsessed over short-term deficits that aren’t actually a problem and, in the process, made the real problem — a depressed economy and mass unemployment — worse.
The good news, such as it is, is that President Obama has finally gone back to fighting against premature austerity — and he seems to be winning the political battle. And one of these years we might actually end up taking Keynes’s advice, which is every bit as valid now as it was 75 years ago.

Monday, November 14, 2011

Economists as Imagineers?

The editors at Reuters asked if I'd like to respond to an essay by Roger Martin on "The limits of the scientific method in economics and the world." I said I would:

Should economists be “imagineers” of our future?

[My response will make more sense (I hope) if you read Roger Martin's essay first.]

Monday, October 17, 2011

Chow: Usefulness of Adaptive and Rational Expectations in Economics

Gregory Chow of Princeton on rational versus adaptive expectations:

Usefulness of Adaptive and Rational Expectations in Economics, by Gregory C. Chow: ...1. Evidence and statistical reason for supporting the adaptive expectations hypothesis ... Adaptive expectations and rational expectations are hypotheses concerning the formation of expectations which economists can adopt in the study of economic behavior. Since a substantial portion of the economic profession seems to have rejected the adaptive expectations hypothesis without sufficient reason I will provide strong econometric evidence and a statistical reason for its usefulness...
2. Insufficient evidence supporting the rational expectations hypothesis when it prevailed The popularity of the rational expectations hypothesis began with the critique of Lucas (1976) which claimed that existing macro econometric models of the time could not be used to evaluate effects of economic policy because the parameters of these econometric models would change when the government decision rule changed. A government decision rule is a part of the environment facing economic agents. When the rule changes, the environment changes and the behavior of economic agents who respond to the environment changes. Economists may disagree on the empirical relevance of this claim, e.g., by how much the parameters will change and to what extent government policies can be assumed to be decision rules rather than exogenous changes of a policy variable. The latter is illustrated by studies of the effects of monetary shocks on aggregate output and the price level using a VAR. Such qualifications aside, I accept the Lucas proposition for the purpose of the present discussion.
Then came the resolution of the Lucas critique. Assuming the Lucas critique to be valid, economists can build structural econometric models with structural parameters unchanged when a policy rule changes. Such a solution can be achieved by assuming rational expectations, together with some other modeling assumptions. I also accept this solution of the Lucas critique.
In the history of economic thought during the late 1970s, the economics profession (1) accepted the Lucas critique, (2) accepted the solution to the Lucas critique in which rational expectations is used and (3) rejected the adaptive expectations hypothesis possibly because the solution in (2) required the acceptance of the rational expectations hypothesis. Accepting (1) the Lucas critique and (2) a possible response to the Lucas critique by using rational expectations does not imply (3) that rational expectations is a good empirical economic hypothesis. There was insufficient evidence supporting the hypothesis of rational expectations when it was embraced by the economic profession in the late 1970s. This is not to say that the rational expectations hypothesis is empirically incorrect, as it has been shown to be a good hypothesis in many applications. The point is that the economic profession accepted this hypothesis for general application in the late 1970s without sufficient evidence.
3. Conclusions This paper has presented a statistical reason for the economic behavior as stated in the adaptive expectations hypothesis and strong econometric evidence supporting the adaptive expectations hypothesis. ... Secondly, this paper has pointed out that there was insufficient empirical evidence supporting the rational expectations hypothesis when the economics profession embraced it in the late 1970s. The profession accepted the Lucas (1976) critique and its possible resolution by estimating structural models under the assumption of rational expectations. But this does not justify the acceptance of rational expectations in place of adaptive expectations as better proxies for the psychological expectations that one wishes to model in the study of economic behavior. ...
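For readers who want the mechanics, here is a minimal sketch of the adaptive expectations hypothesis Chow is defending (the gain of 0.3 is an arbitrary illustration, not one of his estimates). Each period the expectation moves a fraction of the way toward the latest observation, which makes it an exponentially weighted average of past data:

```python
# Minimal sketch: adaptive expectations as error-correction updating,
#   x_e[t+1] = x_e[t] + lam * (x[t] - x_e[t]),  0 < lam <= 1.
# lam = 0.3 is an arbitrary illustrative gain, not an estimated value.

def adaptive_expectations(observations, lam=0.3, initial=0.0):
    """Return the expectation held at the start of each period."""
    expectation, path = initial, []
    for x in observations:
        path.append(expectation)
        expectation += lam * (x - expectation)  # partial adjustment toward x
    return path

inflation = [2.0, 2.0, 5.0, 5.0, 5.0, 5.0]  # a one-time jump in inflation
print([round(e, 2) for e in adaptive_expectations(inflation)])
# [0.0, 0.6, 1.02, 2.21, 3.05, 3.63] -- expectations catch up only gradually
```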

Tuesday, October 11, 2011

The New Keynesian IS-LM and IS-MP Models

David Romer's name has come up several times in recent discussions of the IS-LM and IS-MP models. This is how Romer's new edition of his graduate level macroeconomics book derives the IS-LM and IS-MP curves:

Assume that firms produce output using labor as the only input, i.e. Y = F(L), F'>0, F''≤0, and that government, international trade, and capital are left out of the model for convenience (so that Y=C+I+G+NX becomes Y=C).

Also assume that "There is a fixed number of infinitely lived households that obtain utility from consumption and from holding real money balances, and disutility from working. For simplicity, we ignore population growth and normalize the number of households to 1. The representative household's objective function is":

$$\sum_{t=0}^{\infty} \beta^t \left[ U(C_t) + \Gamma\!\left(\frac{M_t}{P_t}\right) - V(L_t) \right], \qquad 0 < \beta < 1$$

There is diminishing marginal utility (or increasing marginal disutility), as usual. (Note that assuming money is in the utility function is a standard short-cut. See Walsh for a more extensive discussion of this.)

Next, let the utility functions for consumption and real money balances take their usual constant relative risk aversion forms:

$$U(C_t) = \frac{C_t^{1-\theta}}{1-\theta}, \qquad \Gamma\!\left(\frac{M_t}{P_t}\right) = \frac{(M_t/P_t)^{1-\nu}}{1-\nu}$$

There are two assets in the model, money and bonds. Money pays no interest, while bonds earn an interest rate of $i_t$. Wealth evolves according to:

$$A_{t+1} = M_t + (1+i_t)\left(A_t + W_t L_t - P_t C_t - M_t\right)$$

where $A_t$ is household wealth at the start of period t, $W_t L_t$ is nominal income, $P_t C_t$ is nominal consumption, and $M_t$ is nominal money holdings. This equation says that wealth in period t+1 is equal to the amount of money held at the end of time t plus $(1+i_t)$ times the bonds held from t to t+1 (the term in parentheses is bonds).

Households take the paths of P, W, and i as given, and they choose the paths of C and M to maximize the present discounted value of utility subject to the flow budget constraint and a no-Ponzi-game condition (for simplicity, the choice of L is set aside for the moment). Finally, the path of M is chosen by the monetary authority (later, when the MP curve is derived, this assumption will be changed).

The optimization condition (Euler equation) for the intertemporal consumption tradeoff is:

$$\frac{C_{t+1}}{C_t} = \left[\beta\,(1+r_t)\right]^{1/\theta}$$

We now, in essence, have the New Keynesian IS curve. To see this, take logs of both sides:

$$\ln C_t = \ln C_{t+1} - \frac{1}{\theta}\ln(1+r_t) - \frac{1}{\theta}\ln\beta$$

And using the fact that Y=C, approximating ln(1+r) as r (which holds fairly well when r is small), and dropping the constant for convenience gives:

$$\ln Y_t = \ln Y_{t+1} - \frac{1}{\theta}\, r_t$$

This is the New Keynesian IS curve. It's just like the ordinary IS curve, except for the $\ln Y_{t+1}$ term on the right-hand side (in models with stochastic shocks, this becomes $E_t \ln Y_{t+1}$, the expected value of $\ln Y_{t+1}$ given the information available at time t -- often the information set contains only lagged values of variables in the model).

Thus, the big difference between the old IS curve and the microfounded New Keynesian IS curve is the $E_t \ln Y_{t+1}$ term on the right-hand side. (This also means it's relatively easy to amend the traditional IS curve to incorporate the expectation term.)

It can also be shown (e.g. through a variations argument) that the first order condition for money holding is:

$$\frac{\Gamma'(M_t/P_t)}{U'(C_t)} = \frac{i_t}{1+i_t}, \qquad \text{i.e.} \qquad \left(\frac{M_t}{P_t}\right)^{-\nu} = \frac{i_t}{1+i_t}\, C_t^{-\theta}$$

This implies that:

$$\frac{M_t}{P_t} = C_t^{\theta/\nu} \left(\frac{1+i_t}{i_t}\right)^{1/\nu}$$

Money demand is increasing in output and decreasing in the nominal interest rate. If this is set equal to (exogenous) money supply, then we have an LM curve. And if we graph the LM curve along with the New Keynesian IS curve, it looks just like the traditional formulation of the model (with the main difference being the expectation of future output term discussed above).

[Figure: the IS-LM diagram]

Finally, as Romer notes:

The ideas captured by the new Keynesian IS curve are appealing and useful... The LM curve, in contrast, is quite problematic in practical applications. One difficulty is that the model becomes much more complicated once we relax Section 6.1's assumption that prices are permanently fixed... A second difficulty is that modern central banks do not focus on the money supply.

The first problem is that the LM curve shifts when P changes, so if there is inflation it will be in constant motion, making it hard to use as an analytical tool. That can be overcome, but the second objection is harder to dismiss. It is, however, easy to address: simply assume that the central bank follows a rule for the interest rate such as:

$$r_t = r(\ln Y_t, \pi_t), \qquad \frac{\partial r}{\partial \ln Y_t} > 0, \quad \frac{\partial r}{\partial \pi_t} > 0$$

If the central bank adjusts M to ensure this holds, then the money supply is essentially endogenous (and the interest rate is set by the rule). This is an upward-sloping curve in $r$-$\ln Y$ space, and it is called the MP curve (for monetary policy). It replaces the LM curve in the IS-LM diagram, giving us the IS-MP model.

[Figure: the IS-MP diagram]

However, it would still be possible to do the analysis with the IS-LM diagram: just put a horizontal line at the fixed interest rate and find the money supply that makes this an equilibrium. But, as noted above, in the presence of inflation the LM curve shifts out continuously, making the model hard to use. Thus, in the presence of inflation and an interest rate rule, the IS-MP formulation is much simpler to use. But for other questions, e.g. quantitative easing at the lower bound or pedagogically examining a money rule, the IS-LM model is often more intuitive.

But the main point is that if you start from (very simple) microfoundations, the resulting model looks a lot like the old IS-LM model. It still needs to be able to handle price changes, so it's necessary to add a model of supply to the model of demand provided by the IS-MP or IS-LM diagrams, and the expectation term on the right-hand side of the IS curve is an important difference from the older modeling scheme, but the two models have a lot in common.
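As a numerical check on the derivation, here is a minimal sketch of the IS-MP intersection (all parameter values are arbitrary illustrations, not calibrated). It solves the IS curve $\ln Y_t = \ln Y_{t+1} - r_t/\theta$ together with a linear policy rule, then backs out the real balances consistent with the rule:

```python
import math

# Minimal sketch of the IS-MP equilibrium (illustrative parameters only).
# IS:  lnY_t = lnY_{t+1} - r_t / theta
# MP:  r_t   = r_bar + b * lnY_t            (upward sloping in r-lnY space)
# Money demand (with Y = C and i roughly equal to r when inflation is zero):
#   M/P = Y**(theta/nu) * ((1 + i) / i)**(1 / nu)

theta, nu = 2.0, 3.0   # curvature parameters of utility (assumed values)
r_bar, b = 0.02, 0.5   # policy-rule intercept and slope (assumed values)
lnY_next = 0.0         # expected future log output, normalized to zero

# Substituting MP into IS gives a linear equation in lnY_t:
#   lnY = lnY_next - (r_bar + b * lnY) / theta
lnY = (lnY_next - r_bar / theta) / (1 + b / theta)
r = r_bar + b * lnY

# Back out the real balances the bank must supply for the rule to hold.
m = math.exp(lnY) ** (theta / nu) * ((1 + r) / r) ** (1 / nu)

print(f"lnY = {lnY:.4f}, r = {r:.4f}, required M/P = {m:.2f}")
```

The last line makes the point in the text: under an interest rate rule the money supply is endogenous; it is whatever quantity makes the rule hold.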

Friday, October 07, 2011

Mankiw: The IS-LM model

I haven't bothered with the LM versus MP curve debate because it's all been done before, and because there's really nothing to debate. But I forget that many of you weren't around in 2006 (when the post below was written). For me the bottom line is easy: some questions are easier to answer using the IS-LM model (e.g. see here), some with the IS-MP model (in both cases, coupled with a model of AS), but in general, as noted below, "There is no truly substantive debate here. These two models are alternative presentations of the same set of ideas":

The IS-LM Model, by Greg Mankiw: A reader emails me the following question:

Dear professor Mankiw:

I like your blog a lot. I daily go to it in order to read good economics. Keep up the excellent work!

May I ask you why economists authors of textbooks on intermediate macroeconomics like you keep using the IS-LM model even though we already know that the Central Bank does not set the monetary supply. Instead, it does set the interest rate. Shouldn't you do like Wendy Carlin and David Soskice in their recent and fantastic book "Macroeconomics: Imperfections, Institutions and Policies" where they replace the LM curve by a monetary rule (for example, a Taylor rule). Wouldn't that be more representative of what occurs in reality rather than supposing that the institution gets the control of the quantity of money?

Thanks for your attention in advance.

Best,
[name withheld]

... My email correspondent wonders whether it would be better just to jettison the traditional IS-LM model in favor of an alternative framework that ignores the money supply altogether and simply takes an interest-rate rule as given. This approach has been advocated by my old friend David Romer. (Economics trivia fact: I was the best man at David Romer's wedding, and he at mine.) You can find David's approach here (figures here). David calls his alternative presentation the IS-MP model, because it combines an IS curve with a monetary policy reaction function.

The first thing to understand about the choice between IS-LM and IS-MP is that it is not about determining which is the better model of short-run fluctuations. There is no truly substantive debate here. These two models are alternative presentations of the same set of ideas. The key issue in deciding which approach to prefer is not theoretical or empirical but pedagogical.

The IS-LM approach has a long history behind it. That is one reason to stick with it, but it is not dispositive. If I were convinced that the IS-MP model was a clear and substantial step forward, I would switch. So far, however, I am not convinced that the new approach is easier to teach or more intuitive for students.

The key difference between the two approaches is what you hold constant when considering various hypothetical policy experiments. The IS-LM model takes the money supply as the exogenous variable, while the IS-MP model takes the monetary policy reaction function as exogenous. In practice, both the money supply and the monetary policy reaction function can and do change in response to events. Exogeneity here is meant to be more of a thought experiment than it is a claim about the world. The two approaches focus the student's attention on different sets of thought experiments.

I like the IS-LM model because it keeps the student focused on the important connections between the money supply, interest rates, and economic activity, whereas the IS-MP model leaves some of that in the background. The IS-MP model also has some quirky features: In this model, for instance, an increase in government purchases causes a permanent increase in the inflation rate. No one really believes that result as an empirical prediction, for the simple reason that the monetary policy reaction function would change if the natural interest rate (that is, the real interest rate consistent with full employment) changed. This observation highlights that neither model's exogeneity assumption should be taken too seriously.

In the end, I remain open-minded, but at this point I prefer the IS-LM model when teaching (at the intermediate level) about the short-run effects of monetary and fiscal policy. If one were to teach IS-MP to undergrads, I would prefer to do it as a supplement to, rather than a substitute for, IS-LM.

Related link: Here (and here in published form) is Paul Krugman's cogent defense of teaching the IS-LM model. The article was written quite a while ago, before IS-MP hit the scene, so I don't know what he would say about this alternative framework. But the Krugman piece is interesting, if only vaguely on point, so I wanted to give it some free advertising.

"Notes on a Worldly Philosopher"

Rajiv Sethi:

Notes on a Worldly Philosopher, by Rajiv Sethi: The very first book on economics that I remember reading was Robert Heilbroner's magisterial history of thought, The Worldly Philosophers. I'm sure that I'm not the only person who was drawn to the study of economics by that wonderfully lucid work. Heilbroner managed to convey the complexity of the subject matter, the depth of the great ideas, and the enormous social value that the discipline at its best is capable of generating.

I was reminded of Heilbroner's book by Robert Solow's review of Sylvia Nasar's Grand Pursuit: The Story of Economic Genius. Solow begins by arguing that the book does not quite deliver on the promise of its subtitle, and then goes on to fill the gap by providing his own encapsulated history of ideas. Like Heilbroner before him, he manages to convey with great lucidity the essence of some pathbreaking contributions. I was especially struck by the following passages on Keynes:

He was not without antecedents, of course, but he provided the first workable intellectual apparatus for thinking about what determines the level of “output as a whole.” A generation of economists found his ideas the only available handle with which to grasp the events of the Great Depression of the time... Back then, serious thinking about the general state of the economy was dominated by the notion that prices moved, market by market, to make supply equal to demand. Every act of production, anywhere, generates income and potential demand somewhere, and the price system would sort it all out so that supply and demand for every good would balance. Make no mistake: this is a very deep and valuable idea. Many excellent minds have worked to refine it. Much of the time it gives a good account of economic life. But Keynes saw that there would be occasions, in a complicated industrial capitalist economy, when this account of how things work would break down.

The breakdown might come merely because prices in some important markets are too inflexible to do their job adequately; that thought had already occurred to others. It seemed a little implausible that the Great Depression of the 1930s should be explicable along those lines. Or the reason might be more fundamental, and apparently less fixable. To take the most important example: we all know that families (and other institutions) set aside part of their incomes as saving. They do not buy any currently produced goods or services with that part. Something, then, has to replace that missing demand. There is in fact a natural counterpart: saving today presumably implies some intention to spend in the future, so the “missing” demand should come from real capital investment, the building of new productive capacity to satisfy that future spending. But Keynes pointed out that there is no market or other mechanism to express when that future spending will come or what form it will take... The prospect of uncertain demand at some unknown time may not be an adequately powerful incentive for businesses to make risky investments today. It is asking too much of the skittery capital market. Keynes was quite aware that occasionally a wave of unbridled optimism might actually be too powerful an incentive, but anyone in 1936 would take the opposite case to be more likely.

So a modern economy can find itself in a situation in which it is held back from full employment and prosperity not by its limited capacity to produce, but by a lack of willing buyers for what it could in fact produce. The result is unemployment and idle factories. Falling prices may not help, because falling prices mean falling incomes and still weaker demand, which is not an atmosphere likely to revive private investment. There are some forces tending to push the economy back to full utilization, but they may sometimes be too weak to do the job in a tolerable interval of time. But if the shortfall of aggregate private demand persists, the government can replace it through direct public spending, or can try to stimulate additional private spending through tax reduction or lower interest rates. (The recipe can be reversed if private demand is excessive, as in wartime.) This was Keynes’s case for conscious corrective fiscal and monetary policy. Its relevance for today should be obvious. It is a vulgar error to characterize Keynes as an advocate of “big government” and a chronic budget deficit. His goal was to stabilize the private economy at a generally prosperous level of activity.

This is as clear and concise a description of the fundamental contribution of the General Theory as I have ever read. And it reveals just how far from the original vision of Keynes the so-called Keynesian economics of our textbooks has come. The downward inflexibility of wages and prices is viewed in many quarters today as the hallmark of the Keynesian theory, and yet the opposite is closer to the truth. The key problem for Keynes is the mutual inconsistency of individual plans: the inability of those who defer consumption to communicate their demand for future goods and services to those who would invest in the means to produce them.

The place where this idea gets buried in modern models is in the hypothesis of "rational expectations." A generation of graduate students has come to equate this hypothesis with the much more innocent claim that individual behavior is "forward looking." But the rational expectations hypothesis is considerably more stringent than that: it requires that the subjective probability distributions on the basis of which individual decisions are made correspond to the objective distributions that these decisions then give rise to. It is an equilibrium hypothesis, and not a behavioral one. And it amounts to assuming that the plans made by millions of individuals in a decentralized economy are mutually consistent. As Duncan Foley recognized a long time ago, this is nothing more than "a disguised form of the assumption of the existence of complete futures and contingencies markets."

It is gratifying, therefore, to see increasing attention being focused on developing models that take expectation revision and calculation seriously. A conference at Columbia earlier this year was devoted entirely to such lines of work. And here is Mike Woodford on the INET blog, making a case for this research agenda:

This postulate of “rational expectations,” as it is commonly though rather misleadingly known... is often presented as if it were a simple consequence of an aspiration to internal consistency in one’s model and/or explanation of people’s choices in terms of individual rationality, but in fact it is not a necessary implication of these methodological commitments. It does not follow from the fact that one believes in the validity of one’s own model and that one believes that people can be assumed to make rational choices that they must be assumed to make the choices that would be seen to be correct by someone who (like the economist) believes in the validity of the predictions of that model. Still less would it follow, if the economist herself accepts the necessity of entertaining the possibility of a variety of possible models, that the only models that she should consider are ones in each of which everyone in the economy is assumed to understand the correctness of that particular model, rather than entertaining beliefs that might (for example) be consistent with one of the other models in the set that she herself regards as possibly correct...

The macroeconomics of the future, I believe, will still make use of general-equilibrium models in which the behavior of households and firms is derived from considerations of intertemporal optimality, but in which the optimization is relative to the evolving beliefs of those actors about the future, which need not perfectly coincide with the predictions of the economist’s model. It will therefore build upon the modeling advances of the past several decades, rather than declaring them to have been a mistaken detour. But it will have to go beyond conventional late-twentieth-century methodology as well, by making the formation and revision of expectations an object of analysis in its own right, rather than treating this as something that should already be uniquely determined once the other elements of an economic model (specifications of preferences, technology, market structure, and government policies) have been settled.

I think that the vigorous pursuit of this research agenda could lead to a revival of interest in theories of economic fluctuations that have long been neglected because they could not be reformulated in ways that were methodologically acceptable to the professional mainstream. I am thinking, in particular, of nonlinear models of business cycles such as those of Kaldor, Goodwin, Tobin and Foley, which do not depend on exogenous shocks to account for departures from steady growth. This would be an interesting, ironic, and welcome twist in the tangled history of the worldly philosophy.
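Sethi's distinction between a behavioral and an equilibrium hypothesis can be made concrete with a toy example (mine, not his). Suppose the price satisfies p = a - b·E[p]; the rational expectation is the self-fulfilling belief E[p] = a/(1+b), while adaptive learners revise toward realized outcomes and may or may not find that fixed point:

```python
# Toy model: p = a - b * belief (shocks suppressed for clarity).
# Rational expectations is the fixed point where belief equals outcome.
# All parameter values are arbitrary illustrations.

a, b, lam = 3.0, 0.5, 0.5

p_rational = a / (1 + b)  # the self-fulfilling belief
print(f"rational-expectations price: {p_rational:.3f}")  # 2.000

belief = 0.0              # start adaptive learners from an arbitrary belief
for t in range(6):
    outcome = a - b * belief            # price induced by the current belief
    belief += lam * (outcome - belief)  # adaptive revision toward the outcome
    print(f"t={t}: belief {belief:.3f}, outcome {outcome:.3f}")
# Beliefs converge to 2.0 here, but that is a property of this toy model
# (it requires lam * (1 + b) < 2), not a general guarantee -- which is
# why treating RE as a behavioral assumption begs the question.
```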

Tuesday, September 27, 2011

"The Importance of Economic History"

Kevin O'Rourke:

The importance of economic history, by Kevin O’Rourke: Paul Krugman is upset about some pretty fanciful accounts of what supposedly happened during the Great Depression, and I don’t blame him. He also wonders whether economics is a progressive science (I am using the word ‘science’ in its German sense). Well, one of the things that philosophers of science have argued about in the past is whether, when you have a paradigm shift, you end up losing knowledge, and it’s pretty clear what has happened in this instance. ... [F]or example, I have been reliably informed that a well-known department stopped teaching its undergraduates IS-LM just before the crisis hit in 2008. And the result is that you had people seriously peddling the line that austerity would be expansionary in the wake of the biggest downturn since the 1930s — and these claims were influential in Europe, it seems clear, in the fateful spring and summer of 2010.

One lesson is that it is one thing to play counter-intuitive intellectual parlour games in order to get tenure at a fancy university, but another thing entirely to say something about the real world. For that you need a little common sense.

Another lesson is that economists need at least some training in economic history. No-one with the slightest feeling for historical reality could believe that the Great Depression was due to supply side forces, for example. I observe that Krugman, along with such luminaries as Maurice Obstfeld and Ken Rogoff, did his graduate work in MIT, and I surmise (without having any inside knowledge on the matter) that all three were exposed to Charlie Kindleberger and Peter Temin. They are all distinguished theorists, but also have a historical sensitivity, and this makes them better economists — if your definition of a good economist includes the ability to say sensible things about our very messy real world.

One of the most important things that a bit of history gives you is a sense of the importance of context. A model will work very well in some technological or institutional contexts, but not in others. For example, the Reverend Malthus devised a model that did a pretty decent job of describing the world up to the point that he started writing, but which soon became essentially irrelevant in the century that followed, at least in the richer countries of the world. (He had an economist’s sense of timing.) Sometimes the world is well-described by Keynesian models, and sometimes it is not. And so on.

If the only thing that economic history did was protect us from one-size-fits-all merchants, it would still be worth the price of admission.

[I'd have to agree with his points about the use of models, and about the value of economic history.]

Monday, September 12, 2011

New Old Keynesians?

Tyler Cowen uses the term "New Old Keynesian" to describe "Paul Krugman, Brad DeLong, Justin Wolfers and others." I don't know if I am part of the "and others" or not, but in any case I resist being assigned a particular label.

Why? Because I believe the model we use depends upon the questions we ask (this is a point emphasized by Peter Diamond at the recent Nobel Meetings in Lindau, Germany, and echoed by other speakers who followed him). If I want to know how monetary authorities should respond to relatively mild shocks in the presence of price rigidities, the standard New Keynesian model is a good choice. But if I want to understand the implications of a breakdown in financial intermediation and the possible policy responses to it, those models aren't very informative. They weren't built to answer this question (some variations do get at this, but not in a fully satisfactory way).

Here's a discussion of this point from a post written two years ago:

There is no grand, unifying theoretical structure in economics. We do not have one model that rules them all. Instead, what we have are models that are good at answering some questions - the ones they were built to answer - and not so good at answering others.

If I want to think about inflation in the very long run, the classical model and the quantity theory is a very good guide. But the model is not very good at looking at the short-run. For questions about how output and other variables move over the business cycle and for advice on what to do about it, I find the Keynesian model in its modern form (i.e. the New Keynesian model) to be much more informative than other models that are presently available.

But the New Keynesian model has its limits. It was built to capture "ordinary" business cycles driven by price sluggishness of the sort that can be captured by the Calvo model of price rigidity (see the sketch below). The standard versions of this model do not explain how financial collapses of the type we just witnessed come about, hence they have little to say about what to do about them (which makes me suspicious of the results touted by people using multipliers derived from DSGE models based upon ordinary price rigidities). For these types of disturbances, we need some other type of model, but it is not clear what model is needed. There is no generally accepted model of financial catastrophe that captures the variety of financial market failures we have seen in the past.

But what model do we use? Do we go back to old Keynes, or to the 1978 model that Robert Gordon likes? Do we take some of the variations of the New Keynesian model that include effects such as financial accelerators and try to enhance those? Is that the right direction to proceed? Are the Austrians right? Do we focus on Minsky? Or do we need a model that we haven't discovered yet?

We don't know, and until we do, I will continue to use the model I think gives the best answer to the question being asked. The reason that many of us looked backward for a model to help us understand the present crisis is that none of the current models were capable of explaining what we were going through. The models were largely constructed to analyze policy in the context of a Great Moderation, i.e. within a fairly stable environment. They had little to say about financial meltdown. My first reaction was to ask if the New Keynesian model had any derivative forms that would allow us to gain insight into the crisis and what to do about it and, while there were some attempts in that direction, the work was somewhat isolated and had not gone through the type of thorough analysis needed to develop robust policy prescriptions. There was something to learn from these models, but they really weren't up to the task of delivering specific answers. That may come, but we aren't there yet.

So, if nothing in the present is adequate, you begin to look to the past. The Keynesian model was constructed to look at exactly the kinds of questions we needed to answer, and as long as you are aware of the limitations of this framework - the ones that modern theory has discovered - it does provide you with a means of thinking about how economies operate when they are running at less than full employment. This model had already worried about fiscal policy at the zero interest rate bound, it had already thought about Say's law, the paradox of thrift, monetary versus fiscal policy, changing interest and investment elasticities in a crisis, etc., etc., etc. We were in the middle of a crisis and didn't have time to wait for new theory to be developed; we needed answers, answers that the elegant models constructed over the last few decades simply could not provide. The Keynesian model did provide answers. We knew the answers had limitations - we were aware of the theoretical developments in modern macro and what they implied about the old Keynesian model - but it also provided guidance at a time when guidance was needed, and it did so within a theoretical structure that was built to be useful at times like we were facing. I wish we had better answers, but we didn't, so we did the best we could. And the best we could involved at least asking what the Keynesian model would tell us, and then asking if that advice has any relevance today. Sometimes it didn't, but that was no reason to ignore the answers when it did.
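Since the Calvo device mentioned above does so much work in these models, here is a minimal sketch of it (illustrative parameters; a real New Keynesian model embeds this in general equilibrium with optimizing price setters):

```python
import random

# Minimal sketch of Calvo price rigidity (partial equilibrium, illustrative).
# Each period a firm may reset to the new optimal price with probability
# 1 - omega; with probability omega it is stuck at its old price.

random.seed(0)
omega = 0.75              # per-period probability of being stuck (assumed)
n_firms = 10_000
prices = [1.0] * n_firms  # all firms start at the old optimal price
p_star = 1.10             # a permanent 10% rise in the optimal price

for t in range(1, 7):
    prices = [p if random.random() < omega else p_star for p in prices]
    average = sum(prices) / n_firms
    print(f"period {t}: average price {average:.4f}")
# The average price creeps toward 1.10: after t periods roughly a share
# omega**t of firms still charge the old price, so the aggregate price
# level is sticky even though every individual reset is fully rational.
```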

[So, depending on the question being asked, I am a New Keynesian, an Old Keynesian, a Classicist, etc.]

Thursday, September 08, 2011

Krugman: The Profession and the Crisis

Via Tim Taylor, here's Paul Krugman's Presidential Address to the Eastern Economic Association:

The Profession and the Crisis, by Paul Krugman, Eastern Economic Journal: So we’re having an economic crisis. I say “having,” not “had,” because we have by no means recovered. Financial panic may have subsided, stocks may be up, but employment remains far below pre-crisis levels, and unemployment — especially long-term unemployment — remains disastrously high. And while you can make the case that the economy is slowly on the mend, slowly is the operative word. We have already been through two years of economic purgatory, and there's no end in sight.
There is a real sense in which times like these are what economists are for, just as wars are what career military officers are for. OK, maybe I can let microeconomists off the hook. But macroeconomics is, above all, about understanding and preventing or at least mitigating economic downturns. This crisis was the time for the economics profession to justify its existence, for us academic scribblers to show what all our models and analysis are good for.
We have not, to put it mildly, delivered.
What do I mean by that? As I see it, there are three main complaints one can make about economists and their role in the current crisis. First is the complaint that economists fell down on the job by not seeing the crisis coming. Second is the complaint that economists failed even to see the possibility of this kind of crisis — and that by pointing out the possibility, they could have helped head the crisis off. Third is the complaint that they have either failed to offer useful advice on what to do after the crisis struck, or that they have offered such a cacophony of voices as to provide no useful guidance for policy.
As I see it, the first complaint is mostly — though not entirely — unfair. The second is much more substantial: anyone with some knowledge of history should have realized that the age of financial crises was far from over. But the most damning failure of economists, I’d argue, was their acquired ignorance of what I’ve called depression economics — the principles that should govern policy after a financial crisis has left conventional open-market operations impotent.
So let me walk through these issues one at a time. ...[continue reading]...

Tuesday, August 30, 2011

Are Macroeconomists Making Progress?

Some thoughts after attending the 4th Meeting of the Nobel Laureates in Economics:

Are Macroeconomists Making Progress?

Update: I should note that I originally ended the column on a slightly more positive note, but then cut this part to make the word limit (the conference is intended to bring young economists together with the Nobel laureates so that the young economists can benefit from their wisdom):

But I do have hope. The young economists I talked to are eager to move things forward, and refreshingly free of the theoretical and ideological divides that exist in the older generation of economists. I have little faith that the older generation will ever acknowledge the models they spent their lives building are fundamentally flawed. But the disappointment I felt listening to the older and supposedly wiser economists at the conference give conflicting advice based upon failed models was absent in these conversations with the next generation. Some day they too will dig in their heels and defend their lives’ work against challenges, but for now it's up to them to forge a new way forward.

Sunday, August 28, 2011

Video: Ed Prescott on "The Current State of Aggregate Economics"

Stephen Williamson says, "If you have never seen an Ed Prescott talk, here is your chance. Don't pay attention to how he's saying it, just listen closely. This is interesting, just to hear how he thinks about what he does."

Let's just say I was far less impressed, but here's the video so you can make up your own mind:


Monday, July 11, 2011

Fall Into the Gap Forever?

Here are three graphs showing the gaps in output, consumption, and employment that have opened up since the recession:

[Figure: Real GDP]

[Figure: Real Consumption]

[Figure: Employment]

In all three cases, we appear to be growing along a lower growth path than before. The question is whether we are stuck on these lower growth paths forever. Will we ever recover the old growth path, in full or in part? That is, how much of the change in the GDP growth path is permanent, and how much is temporary?

This is important because the level of employment is a function of the level of output. If we stay on the lower growth path, then we will have a permanent gap in employment -- most of the 14+ million unemployed will have little hope of finding work. We can share the jobs that exist, or something like that, but we won't ever recover the jobs that were lost.

However, graphs like the next one point to temporary changes as the dominant feature of output fluctuations. Sometimes the deviations from trend are highly persistent, as in the Great Depression, but eventually we recover. The trend has not varied much for over 100 years. It does vary slightly over time, but the variance in the red line is small relative to the overall variance in output:

[Figure: U.S. real GDP and its long-run trend (red line)]

But as Brad DeLong notes:

I ... find the picture impressive. But the U.S. is exceptional. Other countries do not show the same pattern: for example, the United Kingdom never recovered to trend after its post-World War I recession.
And past performance is no guarantee of future results...

So it's not 100% certain that we will bounce back to the long-run trend. This time could be different (Brad shows that Britain has had permanent shocks).

The argument that variation in GDP, consumption, employment, and other macroeconomic variables is due to permanent shifts in the trend rather than cyclical variation around the trend is precisely the argument used by Real Business Cycle theorists and adherents to the classical school more generally to undermine the case for countercyclical monetary and fiscal policy. The more of the variation in output that can be explained by variation in trend (i.e. by supply-shocks), the less that is left over to be explained by aggregate demand shocks. With less variation caused by demand, there is less need for demand stabilization policies.

(This is also what is at issue in the structural versus cyclical unemployment debate. The more that the variation in unemployment is attributed to structural change, and hence to variation in trend, the less that is left over to be attributed to cyclical factors. With less cyclical variation, the case for policy is weakened.)

Now, none of the above implies that the argument that some of the variation in output is due to variation in the trend is wrong (see here for a summary of the debate). I believe that some of the variation in output is, in fact, due to permanent changes in the trend rate of output. The trend is not a perfectly straight line. The question is the size of the variation in trend relative to the size of the variation due to demand shocks (a question that has not been very decisively answered in the empirical literature, e.g. see the early work on this by Stock and Watson, and Blanchard and Quah). My read of the evidence is that the amount of variation in trend relative to cycle is nowhere near large enough to undermine the case that demand shocks are important components of aggregate fluctuations.
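The distinction at stake can be sketched in code (a toy illustration of the trend-versus-cycle question, not the Blanchard-Quah method): a trend-stationary series eventually forgets a demand shock, while a unit-root series treats every shock as permanent:

```python
# Toy comparison: temporary vs permanent shocks (illustrative only).
# Trend-stationary cycle: deviations decay.   y_t = rho * y_{t-1} + e_t
# Unit root (permanent):  shocks never decay. y_t = y_{t-1} + e_t

rho, T = 0.8, 40
shocks = [-5.0] + [0.0] * (T - 1)  # one big negative shock, then quiet

stationary, unit_root = 0.0, 0.0
for e in shocks:
    stationary = rho * stationary + e  # the gap closes on its own
    unit_root = unit_root + e          # the gap never closes

print(f"gap after {T} periods, trend-stationary: {stationary:.3f}")  # ~0.00
print(f"gap after {T} periods, unit root:        {unit_root:.3f}")   # -5.00
```

If the gaps in the graphs above behave like the first series, there is a cyclical shortfall for demand policy to fight; if they behave like the second, the "new normal" view wins more or less by assumption.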

The point I want to make is that the "it's all explained by a new normal" story adopts the conservative point of view that variation in output is mostly due to supply shocks rather than fluctuation in demand. In this case, there's little room for monetary or fiscal policy to help. However, there are good theoretical, empirical, and -- if you are into such things -- ideological reasons to be wary of making the "it's the new normal" or, equivalently, the "shocks are mostly permanent" argument. The persistence of the trend in output is evident in the graph above, and while this time may, in fact, be different, those making the argument -- some of whom are on the left -- should be fully aware of the conservative viewpoint this argument embraces. The argument that we are on a permanently lower growth path is an argument that there's nothing we can do, nothing we need do, and nothing we should do (except, perhaps, measures such as sharing the jobs we have more broadly). This is the new normal and you may as well get used to it.

My view is different. I believe we will eventually recover to a new growth path that is near, but a bit lower than, the old one. The recovery will be slow, but we will get there eventually. How long it takes depends, in part, upon how aggressively we attack the problem with monetary and fiscal policy measures (or how much we make things worse with mistakes in either area, such as premature deficit reduction or interest rate hikes).

There is plenty of evidence in the historical record to suggest it's possible to largely recover from the recession, and I am not ready to accept the conclusion that we must resign ourselves to a growth path so far below the historical norm. If it eventually turns out that we are on such a disheartening long-run path and there's no way to change it, so be it, but I'm not ready to give up just yet.