Category Archive for: Macroeconomics

Thursday, November 03, 2016

Mainstream Economics

Simon Wren-Lewis:

Ann Pettifor on mainstream economics: Ann has an article that talks about the underlying factor behind the Brexit vote. Her thesis, that it represents the discontent of those left behind by globalization, has been put forward by others. Unlike Brad DeLong, I have few problems with seeing this as a contributing factor to Brexit, because it is backed up by evidence, but like Brad DeLong I doubt it generalizes to other countries. Unfortunately her piece is spoilt by a final section that is a tirade against mainstream economists which goes way over the top. ...

Most economists have certainly encouraged the idea that globalization would increase overall prosperity, and they have been proved right. It is also true that many of these economists did not admit or stress enough that there would be losers as a result of this process who needed compensating from the increase in aggregate prosperity. But once again I doubt very much that anything would have changed if they had. And if they didn’t think enough about it in the past, they are now: see Paul De Grauwe here for example.

There is a regrettable (but understandable) tendency by heterodox economists on the left to try and pretend that economics and neoliberalism are somehow inextricably entwined. The reality is that neoliberal advocates do use some economic ideas as justification, but they ignore others which go in the opposite direction. As I often point out, many more academic economists spend their time analyzing market imperfections than trying to show markets always work on their own. They get Nobel prizes for this work. I find attempts to suggest that economics somehow helped create austerity particularly annoying, as I (and many others) have spent many blog posts showing that economic theory and evidence demonstrates that austerity was a huge mistake.

Wednesday, October 26, 2016

Being Honest about Ideological Influence in Economics

Simon Wren-Lewis:

Being honest about ideological influence in economics: Noah Smith has an article that talks about Paul Romer’s recent critique of macroeconomics. ... He says the fundamental problem with macroeconomics is lack of data, which is why disputes seem to take so long to resolve. That is not in my view the whole story.
If we look at the rise of Real Business Cycle (RBC) research a few decades ago, that was only made possible because economists chose to ignore evidence about the nature of unemployment in recessions. There is overwhelming evidence that in a recession employment declines because workers are fired rather than choosing not to work, and that the resulting increase in unemployment is involuntary (those fired would have rather retained their job at their previous wage). Both facts are incompatible with the RBC model.
In the RBC model there is no problem with recessions, and no role for policy to attempt to prevent them or bring them to an end. The business cycle fluctuations in employment they generate are entirely voluntary. RBC researchers wanted to build models of business cycles that had nothing to do with sticky prices. Yet here again the evidence was quite clear...
Why would researchers try to build models of business cycles where these cycles required no policy intervention, and ignore key evidence in doing so? The obvious explanation is ideological. I cannot prove it was ideological, but it is difficult to understand why - in an area which as Noah says suffers from a lack of data - you would choose to develop theories that ignore some of the evidence you have. The fact that, as I argue here, this bias may have expressed itself in the insistence on following a particular methodology at the expense of others does not negate the importance of that bias. ...
I suspect there is a reluctance among the majority of economists to admit that some among them may not be following the scientific method but may instead be making choices on ideological grounds. This is the essence of Romer’s critique, first in his own area of growth economics and then for business cycle analysis. Denying or marginalizing the problem simply invites critics to apply to the whole profession a criticism that only applies to a minority.

Tuesday, October 18, 2016

Yellen Poses Important Post-Great Recession Macroeconomic Questions

Nick Bunker:

Yellen poses important post-Great Recession macroeconomic questions: Last week at a Federal Reserve Bank of Boston conference, Federal Reserve Chair Janet Yellen gave a speech on macroeconomics research in the wake of the Great Recession. She ... lists four areas for research, but let’s look more closely at the first two groups of questions that she elevates.
The first is the influence of aggregate demand on aggregate supply. As Yellen notes, the traditional way of thinking about this relationship would be that demand, a short-run phenomenon, has no significant effect on aggregate supply, which determines long-run economic growth. The potential growth rate of an economy is determined by aggregate supply...
Yellen points to research that increasingly finds so-called hysteresis effects in the macroeconomy. Hysteresis, a term borrowed from physics, is the idea that short-run shocks to the economy can alter its long-term trend. One example of hysteresis is workers who lose jobs in recessions and then aren’t drawn back into the labor market but rather are permanently locked out... Interesting new research argues that hysteresis may affect not just the labor supply but also the rate of productivity growth.
If hysteresis is prevalent in the economy, then U.S. policymakers need to rethink their fiscal and monetary policy priorities. The effects of hysteresis may mean that economic recoveries need to run longer and hotter than previously thought in order to get workers back into the labor market or allow other resources to get back into full use.
The other set of open research questions that Yellen raises is the influence of “heterogeneity” on aggregate demand. In many models of the macroeconomy, households are characterized by a representative agent... In short, they are assumed to be homogeneous. As Yellen notes in her speech, overall home equity remained positive after the bursting of the housing bubble, so a representative agent would have maintained positive equity in their home.
Yet a wealth of research in the wake of the Great Recession finds that millions of households had mortgages that were “underwater” and so didn’t have positive equity—a big reason for the severity of the downturn. Ignoring this heterogeneity in the housing market and its effects on economic inequality seems like something modern macroeconomics needs to resolve. Economists are increasingly moving in this direction, but even more movement would be very helpful.
Yellen raises other areas of inquiry in her speech, including better understanding how the financial system is linked to the real economy and how the dynamics of inflation are determined. ... As Paul Krugman has noted several times over the past several years, the Great Recession doesn’t seem to have provoked the same rethink of macroeconomics compared to the Great Depression, which ushered in Keynesianism, and the stagflation of the 1970s, which led to the ascendance of new classical economics. The U.S. economy is similarly dealing with a “new normal.” Macroeconomics needs to respond to this reality.

Tuesday, October 11, 2016

Ricardian Equivalence, Benchmark Models, and Academics' Response to the Financial Crisis

Simon Wren-Lewis:

Ricardian Equivalence, benchmark models, and academics' response to the financial crisis: In his further thoughts on DSGE models (or perhaps his response to those who took up his first thoughts), Olivier Blanchard says the following:

“For conditional forecasting, i.e. to look for example at the effects of changes in policy, more structural models are needed, but they must fit the data closely and do not need to be religious about micro foundations.”

He suggests that there is wide agreement about the above. I certainly agree, but I’m not sure most academic macroeconomists do. I think they might say that policy analysis done by academics should involve microfounded models. Microfounded models are, by definition, religious about microfoundations and do not fit the data closely. Academics are taught in grad school that all other models are flawed because of the Lucas critique, an argument which assumes that your microfounded model is correctly specified. ...

Let me be more specific. The core macromodel that many academics would write down involves two key behavioural relationships: a Phillips curve and an IS curve. The IS curve is purely forward looking: consumption depends on expected future consumption. It is derived from an infinitely lived representative consumer, which means Ricardian Equivalence holds in this benchmark model. [1]

Ricardian Equivalence means that a bond financed tax cut (which will be followed by tax increases) has no impact on consumption or output. One stylised empirical fact that has been confirmed by study after study is that consumers do spend quite a large proportion of any tax cut. That they should do so is no deep mystery: many consumers are credit constrained, contrary to the model's assumption that the intertemporal consumer never is. In that particular sense academics’ core model does not fit Blanchard’s prescription that it should “fit the data closely”.
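The contrast can be made concrete with a minimal two-period sketch (the function name, numbers, and 5% interest rate are illustrative assumptions, not from the post): a fully Ricardian consumer sees that the tax cut is repaid with interest next period, so lifetime wealth and hence consumption are unchanged, while a credit-constrained consumer simply spends the cash in hand.

```python
# Two-period sketch of Ricardian Equivalence vs. credit constraints.
# A bond-financed tax cut today is clawed back with interest tomorrow.
# All parameter values are illustrative assumptions.

def consumption_response(tax_cut, r=0.05, credit_constrained=False):
    """Change in period-1 consumption from a tax cut repaid in period 2."""
    if credit_constrained:
        # A constrained consumer spends the extra cash in hand.
        return tax_cut
    # A Ricardian consumer discounts the period-2 repayment of
    # tax_cut * (1 + r) back at rate r: lifetime wealth is unchanged.
    repayment_pv = tax_cut * (1 + r) / (1 + r)
    return tax_cut - repayment_pv  # effectively zero

print(consumption_response(1000.0))                           # ~0.0
print(consumption_response(1000.0, credit_constrained=True))  # 1000.0
```

In the benchmark model every consumer is the first case, which is why the model predicts no response to a tax cut; the empirical studies Wren-Lewis cites look much more like the second.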

Does this core model influence the way some academics think about policy? I have written about how, before the financial crisis, mainstream macroeconomics neglected the importance of shifting credit conditions for consumption, and speculated that this neglect owed something to the insistence on microfoundations. That links the methodology macroeconomists use, or more accurately their belief that other methodologies are unworthy, to policy failures (or at least inadequacy) associated with that crisis and its aftermath.

I wonder if the benchmark model also contributed to a resistance among many (not a majority, but a significant minority) to using fiscal stimulus when interest rates hit their lower bound. In the benchmark model increases in public spending still raise output, but some economists do worry about wasteful expenditures. For these economists tax cuts, particularly if aimed at those who are non-Ricardian, should be an attractive alternative means of stimulus, but if your benchmark model says they will have no effect, I wonder whether this (consciously or unconsciously) biases you against such measures.

In my view, the benchmark models that academic macroeconomists carry round in their head should be exactly the kind Blanchard describes: aggregate equations which are consistent with the data, and which may or may not be consistent with current microfoundations. They are the ‘useful models’ that Blanchard talked about... These core models should be under constant challenge from partial equilibrium analysis, estimation in all its forms, and analysis using microfoundations. But when push comes to shove, policy analysis should be done with models that are the best we have at meeting all those challenges, and not models with consistent microfoundations.

Sunday, October 02, 2016

How Seriously Should We Take the New Keynesian Model?

Brad DeLong:

How Seriously Should We Take the New Keynesian Model?: Nick Rowe continues his long twilight struggle to try to take the New Keynesian DSGE model seriously, to understand what the model says, and to explain to the world what is really going on in the New Keynesian DSGE model. I said that I think this is a Sisyphean task. Let me expand on that here...

In the basic New Keynesian model, you see, the central bank “sets the nominal interest rate” and that, combined with the inflation rate, produces the real interest rate that people face when they use their Euler equation to decide how much less (or more) than their income they should spend. When the interest rate is high, saving to spend later is cheap and so people do more of it and spend less now. When the interest rate is low, saving to spend later is expensive and so people do less of it and spend more now.
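The Euler equation behind this spend-now-versus-later choice, in its standard CRRA textbook form (the notation is conventional, not taken from DeLong's post): a higher real rate r_t raises the right-hand side, which forces current consumption C_t down.

```latex
% Standard consumption Euler equation with CRRA utility (curvature gamma):
C_t^{-\gamma} \;=\; \beta\,(1 + r_t)\,E_t\!\left[C_{t+1}^{-\gamma}\right]
```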

But how does the central bank “set the nominal interest rate” in practice? What does it physically (or, rather, financially) do?

¯\_(ツ)_/¯

In a normal IS-LM model, there are three commodities:

  1. currently-produced goods and services,
  2. bonds, and
  3. money.

In a normal IS-LM model, the central bank raises the interest rate by selling some of the bonds it has in its portfolio for cash and burns the cash it thus collects (for cash is, remember, nothing but a nominal liability of the central bank). It thus creates an excess supply (at the previous interest rate) for bonds and an excess demand (at the previous interest rate) for cash. Those wanting to hold more cash slow down their purchases of currently-produced goods and services (thus creating an excess supply of currently produced goods and services) and sell some of their bonds (thus decreasing the excess supply of bonds). Those wanting to hold fewer bonds sell bonds for cash. Thus the interest rate rises, the flow quantity of currently-produced goods and services falls, and the sticky price of currently-produced goods and services stays where it is. Adjustment continues until supply equals demand for both money and bonds at the new equilibrium interest rate and at a new flow quantity of currently produced goods and services.
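The adjustment DeLong walks through is a move between equilibria of the textbook IS-LM system (standard notation, not from the post): the open-market sale shrinks M, creating an excess demand for money at the old rate, so i rises and Y falls until both markets clear again, with the bond market then clearing by Walras' law.

```latex
% Textbook IS-LM equilibrium: goods-market and money-market clearing.
\begin{aligned}
\text{IS:}\quad & Y = C(Y - T) + I(i) + G \\
\text{LM:}\quad & \frac{M}{P} = L(i, Y)
\end{aligned}
```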

In the New Keynesian model?…

Nick Rowe: Cheshire Cats and New Keynesian Central Banks:

How can money disappear from a New Keynesian model, but the Central Bank still set a nominal rate of interest and create a recession by setting it too high?…

Ignore what New Keynesians say about their own New Keynesian models and listen to me instead. I will tell you how it is possible…. The Cheshire Cat has disappeared, but its smile remains. And its smile (or frown) has real effects. The New Keynesian model is a model of a monetary exchange economy, not a barter economy. The rate of interest is the rate of interest paid on central bank money, not on bonds. Raising the interest rate paid on money creates an excess demand for money which creates a recession. Or it makes no sense at all.

I will take “it makes no sense at all” for $2000, Alex…

Either there is a normal money-supply money-demand sector behind the model, which is brought out whenever it is wanted but suppressed whenever it raises issues that the model builders want ignored, or it makes no sense at all…

Wednesday, September 21, 2016

Trouble with Macroeconomics, Update

Paul Romer:

Trouble with Macroeconomics, Update: My new working paper, The Trouble with Macroeconomics, has generated some interesting reactions. Here are a few responses...

Monday, September 19, 2016

How to Build a Better Macroeconomics

Narayana Kocherlakota:

How to Build a Better Macroeconomics: Methodology Specifics I want to follow up on my comments about Paul Romer’s interesting recent piece by being more precise about how I believe macroeconomic research could be improved.

Macro papers typically proceed as follows:

  1. Question stated.
  2. Some reduced form analysis to "motivate" the question/answer.
  3. Question inputted into model. Model is a close variant of prior models grounded in four or five 1980s frameworks. The variant is generally based on introspection combined with some calibration of relevant parameters.
  4. Answer reported.

The problem is that the prior models have a host of key behavioral assumptions that have little or no empirical grounding. In this pdf, I describe one such behavioral assumption in some detail: the response of current consumption to persistent interest rate changes.

But there are many other such assumptions embedded in our models. For example, most macroeconomists study questions that depend crucially on how agents form expectations about the future. However, relatively few papers use evidence of any kind to inform their modeling of expectations formation. (And no, it’s not enough to say that Tom Sargent studied the consequences of one particular kind of learning in the late 1980s!) The point is that if your paper poses a question that depends on how agents form expectations, you should provide evidence from experimental or micro-econometric sources to justify your approach to expectation formation in your particular context.

So, I suggest the following would be a better approach:

  1. Question
  2. Thorough theoretical analysis of key mechanisms/responses that are likely to inform answer to question (perhaps via "toy" models?)
  3. Find evidence for ranges of magnitudes of relevant mechanisms/responses.
  4. Build and evaluate a range of models informed by this evidence. (Identification limitations are likely to mean that, given available data, there is a range of models that will be relevant in addressing most questions.)
  5. Range of answers to (1), given (4).

Should all this be done in one paper? Probably not. I suspect that we need a more collaborative approach to our questions - a team works on (2), another team works on (3), a third team works on (4) and we arrive at (5). I could readily see each step as being viewed as valuable contributions to economic science.

In terms of (3) - evidence - our micro colleagues can be a great source on this dimension. In my view, the most useful labor supply paper for macroeconomists in the past thirty years is this one - and it’s not written by a macroeconomist.

(If people know of existing papers that follow this approach, feel free to email me a reference at nkocherl@ur.rochester.edu.)

None of these ideas are original to me. They were actually exposited nearly forty years ago.

The central idea is that individual responses can be documented relatively cheaply, occasionally by direct experimentation, but more commonly by means of the vast number of well-documented instances of individual reactions to well-specified environmental changes made available "naturally" via censuses, panels, other surveys, and the (inappropriately maligned as "casual empiricism") method of keeping one's eyes open.

I’m not totally on board with the author in what he says here. I'm a lot less enthralled by the value of “casual empiricism” in a world in which most macroeconomists mainly spend their time with other economists, but otherwise agree wholeheartedly with these words. And I probably see more of a role for direct experimentation than the author does. But those are both quibbles.

And these words from the same article seem even more apt:

Researchers … will appreciate the extent to which … [this agenda] describes hopes for the future, not past accomplishments. These hopes might, without strain, be described as hopes for a kind of unification, not dissimilar in spirit from the hope for unification which informed the neoclassical synthesis. What I have tried to do above is to stress the empirical (as opposed to the aesthetic) character of these hopes, to try to understand how such quantitative evidence about behavior as we may reasonably expect to obtain in society as it now exists might conceivably be transformed into quantitative information about the behavior of imagined societies, different in important ways from any which have ever existed. This may seem an intimidatingly ambitious way to state the goal of an applied subfield of a marginally respectable science, but is there a less ambitious way of describing the goal of business cycle theory?

Somehow, macroeconomists have gotten derailed from this vision of a micro-founded unification and retreated into a hermetically sealed world, where past papers rely on prior papers' flawed foundations. We need to get back to the ambitious agenda that Robert Lucas put before us so many years ago.

(I admit that I'm cherry-picking like crazy from Lucas' 1980 classic JMCB piece. For example, one of Lucas' main points in that article was that he distrusted disequilibrium modeling approaches because they gave rise to too many free parameters. I don't find that argument all that relevant in 2016 - I think that we know more now than in 1980 about how micro-data can be fruitfully used to discipline our modeling of firm behavior. And I would suspect that Lucas would be less than fully supportive of what I write about expectations - but I still think I'm right!)

Sunday, September 04, 2016

Telling Macro Stories with Micro

Claudia Sahm:

Telling macro stories with micro: Don't let the equations, data, or jargon fool you, economists are avid storytellers. Our "stories" may not fit neatly in the seven universal plots but after a while it's easy to spot some patterns. A good story paper in economics, according to David Romer, has three characteristics: a viewpoint, a lever, and a result.
Most blog or media coverage of an economics paper focuses on the result. Makes sense given the audience but buyer beware. Economists dissecting a paper spend more time on the lever, the how-did-they-get-the-result part. And coming up with new levers is a big chunk of research. The viewpoint--the underlying assumptions, the what's-central-to-the-story--tends to get short shrift. Of course, the viewpoint matters (often that's what defines a story as economics), but it usually holds across many papers. Best to focus on the new stuff.
Except when the viewpoint comes under scrutiny, then the stories can really change. ...

How much does micro matter for macro?

One long-standing viewpoint in economics is that changes in the macro-economy can largely be understood by studying changes in macro aggregates. Ironically, this viewpoint even survived macro's push to micro foundations with a "representative agent" stepping in as the missing link between aggregate data and micro theory. As a macro forecaster, I understand the value of the aggregates-only simplification. As an applied micro researcher, I am pretty sure it fails us from time to time. Thankfully, an ever-growing body of research and commentary is helping to identify times when differences at the micro level are relevant for macro outcomes. This is not new--issues of aggregation in macro go waaay back--but our levers, with rich, timely micro data and high-powered computation, are improving rapidly.

I focus in this post on differences in household behavior, particularly related to consumer spending, since that's the area I know best. And I want to discuss results from an ambitious new paper: "Macroeconomics and Household Heterogeneity" by Krueger, Mitman, and Perri. tldr: I am skeptical of their results, above all the empirics, but I really like what they are trying to do, to shift the macro viewpoint. More on this paper below, but I also want to set it in the context of macro storytelling. ...

There's quite a bit more.

Monday, August 15, 2016

What's Useful about DSGE Models?

George Evans responds to the recent discussion on the usefulness of DSGE models:

Here is what I like and have found most useful about Dynamic Stochastic General Equilibrium (DSGE) models, also known as New Keynesian (NK), models. The original NK models were low dimensional – the simplest version reduces to a 3-equation model, while DSGE models are now typically much more elaborate. What I find attractive about these models can be stated in terms of the basic NK/DSGE model.
First, because it is a carefully developed, micro-founded model incorporating price frictions, the NK model makes it possible to incorporate in a disciplined way the various additional sectors, distortions, adjustment costs, and parametric detail found in many NK/DSGE models. Theoretically this is much more attractive than starting with a reduced form IS-LM model and adding features in an ad hoc way. (At the same time I still find ad hoc models useful, especially for teaching and informal policy analysis, and the IS-LM model is part of the macroeconomics canon).
Second, and this is particularly important for my own research, the NK model makes explicit and gives a central role to expectations about future economic variables. The standard linearized three-equation NK model in output, inflation and interest rates has current output and inflation depending in a specified way on expected future output and inflation. The dependence of output on expected future output and future inflation comes through the household dynamic optimization condition, and the dependence of inflation on expected future inflation arises from the firm’s optimal pricing equation. The NK model thus places expectations of future economic variables front and center, and does so in a disciplined way.
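The linearized three-equation system Evans describes is usually written as follows (notation varies across textbooks; x_t is the output gap, π_t inflation, i_t the nominal rate). The first two equations display exactly the dependence on expected future output and inflation that he highlights:

```latex
% Standard linearized three-equation New Keynesian model.
\begin{aligned}
x_t   &= E_t x_{t+1} - \tfrac{1}{\sigma}\bigl(i_t - E_t\pi_{t+1} - r_t^n\bigr)
        && \text{(IS curve)} \\
\pi_t &= \beta\,E_t\pi_{t+1} + \kappa\,x_t
        && \text{(Phillips curve)} \\
i_t   &= \phi_\pi \pi_t + \phi_x x_t
        && \text{(policy rule)}
\end{aligned}
```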
Third, while the NK model is typically solved under rational expectations (RE), it can also be viewed as providing the temporary equilibrium framework for studying the system under relaxations of the RE hypothesis. I particularly favor replacing RE with boundedly rational adaptive learning and decision-making (AL). Incorporating AL is especially fruitful in cases where there are multiple RE solutions, and AL brings out many Keynesian features of the NK model that extend IS-LM. In general I have found micro-founded macro models of all types to be ideal for incorporating bounded rationality, which is most naturally formulated at the agent level.
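The simplest version of the adaptive learning Evans favors is constant-gain learning, where agents nudge their forecast toward each new observation rather than knowing the rational-expectations solution outright. A minimal sketch (the gain of 0.05 and the fixed 2% target are illustrative assumptions):

```python
# Constant-gain adaptive learning: agents revise their inflation
# forecast by a fixed fraction ("gain") of the latest forecast error.
# Gain and target values are illustrative assumptions.

def update_forecast(forecast, observed, gain=0.05):
    """One step of constant-gain adaptive learning."""
    return forecast + gain * (observed - forecast)

forecast = 0.0
for _ in range(200):            # repeated observations of 2% inflation
    forecast = update_forecast(forecast, observed=2.0)

print(round(forecast, 2))  # → 2.0 (the forecast converges to the target)
```

Under rational expectations the forecast would equal 2.0 immediately; under learning it converges gradually, and it is this transition dynamic that can select among multiple RE solutions.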
Fourth, while the profession as a whole seemed to many of us slow to appreciate the implications of the NK model for policy during and following the financial crisis, this was not because the NK model was intrinsically defective (the neglect of financial frictions by most, though not all, DSGE modelers was also a deficiency in most textbook IS-LM models). This was really, I think, because many macroeconomists using NK models in 2007-8 did not fully appreciate the Keynesian mechanisms present in the model.
However, many of us were alert to the NK model fiscal policy implications during the crisis. For example, in Evans, Guse and Honkapohja (“Liquidity traps, learning and stagnation,” 2008, European Economic Review), using an NK model with multiple RE solutions because of the liquidity trap, we showed, using the AL approach to expectations, that when there is a very large negative expectations shock, fiscal as well as monetary stimulus may be needed, and indeed a temporary fiscal stimulus that is large enough and early enough may be critical for avoiding a severe recession or depression. Of course such an argument could have been made using extensions of the ad hoc IS-LM model, but my point is that this policy implication was ready to be found in the NK model, and the key results center on the primacy of expectations.
Finally, it should go without saying that NK/DSGE modeling should not be the one and only style. Most graduate-level core macro courses teach a wide range of macro models, and I see a diversity of innovations at the research frontier that will continue to keep macroeconomics vibrant and relevant.

Tuesday, August 09, 2016

Murky Macroeconomics

Paul Krugman:

Murky Macroeconomics: ...I realized something not too flattering about myself: I’m feeling nostalgic for 2011 or so.
Why? It was, of course, a terrible time for much of the world, and especially for anyone without a job. But for ... an economist ... it was a time of wonderful intellectual clarity. Liquidity-trap macroeconomics ... had become the story of the day. And the basic message of the models — that everything changes when you hit the zero lower bound — was being overwhelmingly confirmed by experience.
The thing is, it was all beautifully hard-edged: a crisp boundary at zero, a sharp change in the impact of monetary and fiscal policy when you hit that boundary. And the predictions we made came out consistently right.
But now things have gotten a bit, well, murky. The zero lower bound is not, it turns out, quite as hard a boundary as we thought. ...I’d be surprised if any central bank is willing to go much if at all below minus one percent — but it turns out to be a sort of a fuzzy no-man’s-land rather than a line that cannot be crossed.
More important, probably, is the fact that two of the major advanced economies — the US and, believe it or not, Japan — are arguably quite close to full employment. We don’t know how close... But you can no longer argue that supply limits are no longer relevant.
Correspondingly, you can also no longer argue with confidence that there can be no crowding out, because the Fed won’t raise rates. You can argue that it shouldn’t — and I would — but we are maybe, possibly, on our way out of the liquidity trap.
So we’re not in the simple, depressed-economy world of 2011 anymore. But here’s the thing: we’re not in what we used to call a normal macroeconomic situation either. Maybe we’re close to full employment, but maybe not, and that’s with near-zero interest rates; also, it’s all too easy to imagine adverse shocks in the near future, and not at all clear how the Fed could or would respond. We are, if you like, half-out of the liquidity trap, with one foot on dry land — but the other foot is still hanging over the edge, and it wouldn’t take much to topple us right back in.
What I would argue is that in this murky, fragile situation we should be conducting policy largely as if we were still in the trap — because we badly need to get both feet firmly on dry land with some distance between us and the quicksand. ... But it’s not the crystalline case we used to be able to make.
Still, we need to deal with this murky situation right, which means embracing the uncertainty as part of the argument. Make murkiness great again!

Thursday, June 30, 2016

The Continued Rigidity of Wages in the United States

Nick Bunker:

The continued rigidity of wages in the United States: “Wage rigidity” is an important feature of many models of the macroeconomy...
Some research on wage rigidity challenges this assumption. Pointing to data on individual wage growth, some economists argue that the wages of new hires are more important. If wages are really rigid, then the inflation-adjusted wages of new hires won’t vary as recessions come and go. Yet these researchers can point to data showing the wages of new hires moving up during economic expansions and down during recessions. So perhaps wages are more flexible than some think.
Now comes a new paper that shows how the cyclical nature of the wages of new hires isn’t really evidence against wage rigidity. The working paper, by economists Mark Gertler of New York University, Christopher Huckfeldt of Cornell University, and Antonella Trigari of Bocconi University, was released earlier this month. The three economists’ major point is to show that looking at the wages of all new hires in the United States is lumping together two groups of workers with different experiences. There are new hires who were previously unemployed and then there are new hires who were previously employed. ...
What they find is that the trends in wages for these two different groups of new hires are clearly different. The wages for new hires from the unemployment line don’t vary much more over time than the wages of already employed workers. But the wages of new hires from the ranks of the already employed do vary. This phenomenon, however, is less about flexible wages and more about workers moving up the job ladder, which mostly only happens during economic expansions, and is the reason why wages for these new hires move with economic cycles.
Outside of the implications for macroeconomic models of the labor market during recessions, the results from Gertler, Huckfeldt, and Trigari are also a reminder of the effects an economic downturn can have on workers’ career earnings. Recessions hinder the hiring of already employed workers, which hurts their chances of climbing the job ladder and future wage gains. Downturns don’t just harm the workers who lose jobs, but also the ones who keep their jobs.

Tuesday, June 28, 2016

Why the Public Has Stopped Paying Attention to Economists

I have a new column:

Why the Public Has Stopped Paying Attention to Economists: The predictions from economists about the consequences of Brexit were widely ignored. That shouldn’t be surprising. In recent years the public has lost faith in the economics profession.
One reason for the lack of faith is the failure to predict the Great Recession, but the public’s dismissal of macroeconomists is based upon more than the failure to foresee the dangers the housing bubble posed for the economy. It is also due to false promises about the benefits to the working class from globalization, tax cuts for the wealthy, and trade agreements – promises that were often used to support ideological and political goals or to serve special interests. ...

Sunday, June 19, 2016

If the Modifications Needed to Accommodate New Observations Become Too Baroque ...

The New Keynesian model is fairly pliable, and adding bells and whistles can help it to explain most of the data we see, at least after the fact. Does that mean we should be more confident in its ability to "embody any useful principle," or less?:

... A famous example of different pictures of reality is the model introduced around AD 150 by Ptolemy (ca. 85—ca. 165) to describe the motion of the celestial bodies. ... In Ptolemy’s model the earth stood still at the center and the planets and the stars moved around it in complicated orbits involving epicycles, like wheels on wheels. ...

It was not until 1543 that an alternative model was put forward by Copernicus... Copernicus, like Aristarchus some seventeen centuries earlier, described a world in which the sun was at rest and the planets revolved around it in circular orbits. ...

So which is real, the Ptolemaic or Copernican system? Although it is not uncommon for people to say that Copernicus proved Ptolemy wrong, that is not true..., our observations of the heavens can be explained by assuming either the earth or the sun to be at rest. Despite its role in philosophical debates over the nature of our universe, the real advantage of the Copernican system is simply that the equations of motion are much simpler in the frame of reference in which the sun is at rest.

... Elegance ... is not something easily measured, but it is highly prized among scientists because laws of nature are meant to economically compress a number of particular cases into one simple formula. Elegance refers to the form of a theory, but it is closely related to a lack of adjustable elements, since a theory jammed with fudge factors is not very elegant. To paraphrase Einstein, a theory should be as simple as possible, but not simpler. Ptolemy added epicycles to the circular orbits of the heavenly bodies in order that his model might accurately describe their motion. The model could have been made more accurate by adding epicycles to the epicycles, or even epicycles to those. Though added complexity could make the model more accurate, scientists view a model that is contorted to match a specific set of observations as unsatisfying, more of a catalog of data than a theory likely to embody any useful principle. ...
[S]cientists are always impressed when new and stunning predictions prove correct. On the other hand, when a model is found lacking, a common reaction is to say the experiment was wrong. If that doesn’t prove to be the case, people still often don’t abandon the model but instead attempt to save it through modifications. Although physicists are indeed tenacious in their attempts to rescue theories they admire, the tendency to modify a theory fades to the degree that the alterations become artificial or cumbersome, and therefore “inelegant.” If the modifications needed to accommodate new observations become too baroque, it signals the need for a new model. ...
[Hawking, Stephen; Mlodinow, Leonard (2010-09-07). The Grand Design. Random House, Inc.. Kindle Edition.]

Saturday, June 11, 2016

Macroeconomics, Fantasy, Reality, and Intellectual Utility…

Brad DeLong:

Macroeconomics, Fantasy, Reality, and Intellectual Utility…: A very nice overview piece this morning from smart young whippersnapper Noah Smith:
Noah Smith: Economics Struggles to Cope With Reality: "Four different activities... go by the name of macroeconomics... But they actually have relatively little to do with each other.... Coffee-house macro.... Finance macro.... Academic macro.... Fed macro....
However, I think he has picked the wrong four. ...

Thursday, June 02, 2016

How to Teach Intermediate Macroeconomics after the Crisis?

I am going to have to redo the videos and other materials for my online macroeconomics course that uses this text:

How to Teach Intermediate Macroeconomics after the Crisis?, by Olivier Blanchard: Having just concluded a seven-year run as chief economist of the International Monetary Fund, and having to rewrite the seventh edition of my undergraduate macroeconomics book, I had to confront the issue: How should we teach macroeconomics to undergraduates after the crisis? Here are some of my conclusions (I shall focus here on the short and medium runs; it will take another blog to discuss how we should teach growth theory).
The Investment-Saving (IS) Relation The IS relation remains the key to understanding short-run movements in output. In the short run, the demand for goods determines the level of output. A desire by people to save more leads to a decrease in demand and, in turn, a decrease in output. Except in exceptional circumstances, the same is true of fiscal consolidation.
I was struck by how many times during the crisis I had to explain the “paradox of saving” and fight the Hoover-German line, “Reduce your budget deficit, keep your house in order, and don’t worry, the economy will be in good shape.” Anybody who argues along these lines must explain how it is consistent with the IS relation.
The demand for goods, in turn, depends on the rate at which people and firms can borrow (not the policy rate set by the central bank, more on this below) and on expectations of the future. John Maynard Keynes rightly insisted on the role of animal spirits. Uncertainty, pessimism, justified or not, decrease demand and can be largely self-fulfilling. Worries about future prospects feed back to decisions today. Such worries are probably the source of our slow recovery.
The Liquidity Preference/Money Supply (LM) Relation The LM relation, in its traditional formulation, is the relic of a time when central banks focused on the money supply rather than the interest rate. ... The reality is now different. Central banks think of the policy rate as their main instrument and adjust the money supply to achieve it. Thus, the LM equation must be replaced, quite simply, by the choice of the policy rate by the central bank, subject to the zero lower bound. ... This change had already taken place in the new Keynesian models; it should make its way into undergraduate texts.
Integrating the Financial System into Macro Models If anything, the crisis has shown the importance of the financial system for macroeconomics. Traditionally, the financial system was given short shrift in undergraduate macro texts. The same interest rate appeared in the IS and LM equations; in other words, people and firms were assumed to borrow at the policy rate set by the central bank. We have learned, dearly, that this is not the case and that things go very wrong.
The teaching solution, in my view, is to introduce two interest rates, the policy rate set by the central bank in the LM equation and the rate at which people and firms can borrow, which enters the IS equation, and then to discuss how the financial system determines the spread between the two. I see this as the required extension of the traditional IS-LM model. A simple model of banks showing the role of capital, on the one hand, and the role of liquidity, on the other, can do the trick. Many of the issues that dominated the crisis, from losses and low capital to liquidity runs can be discussed and integrated into the IS-LM model. With this extension, one can show both the effects of shocks on the financial system and the way in which the financial system modifies the effects of other shocks on the economy.
(Getting Rid of) Aggregate Demand–Aggregate Supply Turning to the supply side, the contraption known as the aggregate demand–aggregate supply model should be eliminated. It is clunky and, for good reasons, undergraduates find it difficult to understand. Its main point is to show how output naturally returns to potential with no change in policy, through a mechanism that appears marginally relevant in practice.
These difficulties are avoided if one simply uses a Phillips Curve (PC) relation to characterize the supply side. ...
Again, this way of discussing the supply side is already standard in more advanced presentations and the new Keynesian model (although the Calvo specification used in that model, as elegant as it is, is arbitrarily constraining and does not do justice to the facts). It is time to integrate it into the undergraduate model.
The IS-LM-Phillips Curve Model Put together, these modified IS, LM, and PC relations can do a good job of explaining recent and current events. For example, financial dislocations lead to a large spread between the borrowing and policy rates. The zero lower bound (or as we have learned, the slightly negative lower bound) prevents the central bank from decreasing the policy rate by enough to maintain demand. Output falls. Inflation decreases, potentially to the point where it turns into deflation, increasing real interest rates, and making it even harder to return to potential output.
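Blanchard's story can be illustrated with a toy simulation. This is only a sketch, not Blanchard's model: every parameter value below is invented, and the equations are the simplest linear forms consistent with his description (an IS relation in the borrowing rate, a Phillips curve, and a policy rule truncated at zero).

```python
# A toy IS-PC model with a policy rate, a borrowing spread, and a zero
# lower bound. Illustrative only: all parameter values are invented.

def simulate(spread, periods=20):
    pi = 2.0          # inflation, percent (starts at the 2 percent target)
    r_star = 1.0      # neutral real rate, percent
    path = []
    for t in range(periods):
        # Policy rate: a simple Taylor-type rule, truncated at zero (the ZLB).
        i = max(0.0, r_star + pi + 1.5 * (pi - 2.0))
        # IS relation: the output gap depends on the real *borrowing* rate,
        # i.e. the policy rate plus the financial spread, minus inflation.
        gap = -0.8 * (i + spread - pi - r_star)
        # Phillips curve: inflation moves with the output gap.
        pi += 0.3 * gap
        path.append((i, gap, pi))
    return path

normal = simulate(spread=0.0)  # no financial friction
crisis = simulate(spread=4.0)  # financial dislocation widens the spread
print("final (rate, gap, inflation), normal:", normal[-1])
print("final (rate, gap, inflation), crisis:", crisis[-1])
```

With no spread the economy sits at target. With a large spread the rule hits the zero bound, inflation turns into deflation, the real borrowing rate rises, and the output gap keeps widening, which is exactly the adverse loop described in the quoted passage.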
One can go much further. ...
Macroeconomics is a tremendously exciting subject. Most of what we taught before the crisis remains highly relevant. But it needs some dusting and updating. My hope is that a model along the lines above can contribute to it.

Rethinking Macro Policy: Progress or Confusion?

Olivier Blanchard:

Rethinking Macro Policy: Progress or Confusion?: On April 15 and 16, 2015, the IMF hosted the third conference on “Rethinking Macroeconomic Policy.” I had initially chosen as the title and subtitle “Rethinking Macroeconomic Policy III. Down in the Trenches.” I thought of the first conference in 2011 as having identified the main failings of previous policies, the second conference in 2013 as having identified general directions, and this conference as a progress report. My subtitle was rejected by one of the co-organizers, Larry Summers. He argued that I was far too optimistic, that we were nowhere close to knowing where we were going. Arguing with Larry is tough, so I chose an agnostic title and shifted to “Rethinking Macro Policy III: Progress or Confusion?” Where do I think we are today? I think both Larry and I are right. I do not say this for diplomatic reasons. We are indeed proceeding in the trenches. But where the trenches will eventually lead remains unclear. This is the theme I shall develop in these concluding remarks, focusing on macroprudential tools, monetary policy, and fiscal policy. ...

Brad DeLong responds: On this one--views of fiscal policy--put me down not for progress but for "confusion for $2000", Alex, for on this one I think the very sharp Olivier Blanchard has got it wrong. ...

Wednesday, April 06, 2016

How Network Effects Hurt Economies

“Networks and the Macroeconomy: An Empirical Exploration,” by Daron Acemoglu, Ufuk Akcigit, and William Kerr (this will be published in the NBER Macroeconomics Annual):

How Network Effects Hurt Economies, by Peter Dizikes, MIT News Office: When large-scale economic struggles hit a region, a country, or even a continent, the explanations tend to be big in nature as well.
Macroeconomists — who study large economic phenomena — often look for sweeping explanations of what has gone wrong, such as declines in productivity, consumer demand, or investor confidence, or significant changes in monetary policy.
But what if large-scale economic slumps can be traced to declines in relatively narrow industrial sectors? A newly published study co-authored by an MIT economist provides evidence that economic problems may often have smaller points of origin and then spread as part of a network effect.
“Relatively small shocks can become magnified and then become shocks you have to contend with [on a large scale],” says MIT economist Daron Acemoglu, one of the authors of a paper detailing the research.
The findings run counter to “real business cycle theory,” which became popular in the early 1980s and holds that smaller, industry-specific effects tend to get swamped by larger, economy-wide trends.
More precisely, Acemoglu and his colleagues have found cases where industry-specific problems lead to six-fold declines in production across the U.S. economy as a whole. For example, for every dollar of value-added growth lost in the manufacturing industries because of competition from China, six dollars of value-added growth were lost in the U.S. economy as a whole.
The researchers also examined four different types of economic shocks to the U.S. economy that occurred over the years 1991-2009, and quantified the extent to which those problems spread “upstream” or “downstream” of the central industry in question — that is, whether the network effects more strongly hurt industrial suppliers or businesses that sell products and provide services to consumers.
All told, the researchers state in the paper, “Our results suggest that the transmission of various different types of shocks through economic networks and industry interlinkages could have first-order implications for the macroeconomy.” ...
Upstream or downstream
Acemoglu, Akcigit, and Kerr used manufacturing data from the National Bureau of Economic Research, and industry-specific data from the Bureau of Economic Analysis, to examine four economic shocks hitting the U.S. economy during that 1991-2009 period. Those were: the impact of export competition on U.S. manufacturing; changes in federal government spending, which affect areas such as defense manufacturing; changes in Total Factor Productivity; and variation in levels of patents coming from foreign industry.
As noted, the network effect of manufacturing competition with China made the overall economic shock about six times as great as it was to manufacturing alone. (This research built on previously published work by economists David Autor of MIT, David Dorn of the University of Zurich, and Gordon Hanson of the University of California at San Diego, sometimes in collaboration with Acemoglu and MIT graduate student Brendan Price.)
In studying changes in the levels of federal spending after 1992, the researchers found a network effect about three to five times as large as that on directly-affected firms alone.
The decline in Total Factor Productivity constituted a smaller economic shock but one with a larger network effect, of more than 15 times the initial impact. In the case of increased foreign patenting (another way of looking at corporate productivity), the researchers found a network effect similar to that of Total Factor Productivity.
The first two of these areas constitute demand-side shocks, affecting consumer demand for the products in question. The last two are supply-side shocks, affecting firms’ ability to be good at what they do.
One of the key findings of the study, which confirms and builds on existing theory, is that demand-side shocks spread almost exclusively “upstream” in economic networks, and supply-side shocks spread almost exclusively “downstream.” To see why, Acemoglu suggests, consider an auto manufacturer, which has parts suppliers upstream and is linked with auto dealers, repair shops, and other businesses downstream.
When auto demand drops, “It’s the suppliers [upstream] that get affected,” Acemoglu explains. “You’re going to cut the production of autos, and you buy less of each of the inputs,” or supplies.
Now suppose the supply changes, perhaps due to an increase in manufacturing efficiency, which makes cars cheaper. In that case, Acemoglu adds, “People who use auto as inputs will buy more of them” — picture a delivery company — “so that shock will get transmitted to the downstream industries.”
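The upstream propagation of a demand shock can be seen in a standard Leontief input-output calculation. This is a generic textbook device, not the paper's empirical framework, and the two-sector numbers below are invented for illustration.

```python
import numpy as np

# Toy two-sector input-output economy (numbers invented):
# sector 0 = parts suppliers (upstream), sector 1 = auto assembly (downstream).
# A[i, j] = dollars of sector i's output used per dollar of sector j's output.
A = np.array([[0.0, 0.5],   # assembly uses 50 cents of parts per dollar of cars
              [0.0, 0.0]])  # parts production uses no cars as input

final_demand = np.array([10.0, 100.0])

# Leontief solution: gross output x satisfies x = A @ x + d.
def gross_output(d):
    return np.linalg.solve(np.eye(2) - A, d)

base = gross_output(final_demand)
# A $10 demand shock to autos (e.g. import competition) hits the supplier too:
shocked = gross_output(final_demand - np.array([0.0, 10.0]))
print("output change, suppliers:", shocked[0] - base[0])
print("output change, assembly: ", shocked[1] - base[1])
```

The $10 drop in final demand for cars removes $10 of assembly output and an additional $5 of supplier output, so the economy-wide decline exceeds the direct shock. With richer input-output linkages, that amplification can be much larger, as in the multiples the study reports.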
To be sure, it is widely understood that the auto industry, like almost every other industry, is situated within a larger economic network. Yet estimating the spillover effects of struggles within any given industry, in the quantitative form of the current study, is rarely done.
“Given the importance of this, it’s surprising how scant the evidence is,” Acemoglu says. ...
This could have policy implications: Proponents of government investment, such as the so-called stimulus bill of 2009, the American Recovery and Reinvestment Act, have contended that government spending creates a “multiplier effect” in terms of growth. Opponents of such legislation sometimes assert that government spending crowds out private investment and thus does not generate more growth than would otherwise occur. In theory, a more granular understanding of these network effects could help describe and define what a multiplier effect is, and in which industrial areas it may be the most pronounced. ...

Saturday, March 26, 2016

Reflections on Macroeconomics Then and Now

Stanley Fischer:

Reflections on Macroeconomics Then and Now: I am grateful to the National Association for Business Economics (NABE) for conferring the fourth annual NABE Paul A. Volcker Lifetime Achievement Award for Economic Policy on me, thereby allowing me the honor of following in the footsteps of Paul Volcker, Jean-Claude Trichet, and Alice Rivlin. The honor of receiving the award is enhanced by its bearing the name of Paul Volcker, a model citizen and public servant, and a giant in every sense among central bankers.

One thinks of many things on an occasion such as this one. My mind goes back first to growing up in a very small town in Zambia, then Northern Rhodesia, and to the surprise and delight my parents would have felt at seeing me standing where I am now. They would have been even more delighted that my girlfriend, Rhoda, whom I met when my parents moved to a bigger town in Zimbabwe, and I have been happily married for 50 years. But that is not the story I will tell today. Rather, I want to talk about our field, macroeconomics, and some of the lessons we have learned in the course of the last 55 years--and I say 55 years, because in 1961, at the end of my school years, on the advice of a friend, I read Keynes's General Theory for the first time.

Did I understand it? Certainly not. Was I captivated by it? Certainly, though "captured" is a more appropriate word than "captivated." Does it remain relevant? Certainly. Just a week ago I took it off the bookshelf to read parts of chapter 23, "Notes on Mercantilism, the Usury Laws, Stamped Money and Theories of Under-Consumption." Today that chapter would be headed "Protectionism, the Zero Lower Bound, and Secular Stagnation," with the importance of usury laws having diminished since 1936.

There is an old joke about our field--not the one about the one-handed economist, nor the one about "assume you have a can opener," nor the one that ends, "If I were you, I wouldn't start from here." Rather it's the one about the Ph.D. economist who returns to his university for his class's 50th reunion. He asks if he can see the most recent Ph.D. generals exam. After a while it is brought to him. He reads it carefully, looking perplexed, and then says, "But this is exactly the same as the exam I wrote over 50 years ago." "Ah yes," says the professor. "It is the same, but all the answers are different."

Is that really the case? Not really, though it is true to some extent in the realm of policy. To discuss the question of whether the answers to the questions of how to deal with macroeconomic policy problems have changed markedly over the past half-century or so, I will start by briefly sketching the structure of a basic macro model. The building blocks of this model are similar to those used in many macro models, including FRB/US, the Fed staff's large-scale model, and a variety of DSGE (dynamic stochastic general equilibrium) models used at the Fed and other central banks and by academic researchers.

The structure of the model starts with the standard textbook equation for aggregate demand for domestically produced goods, namely:

  1. AD = C + I + G + NX;
  2. Next is the wage-price block, which is based on a wage or price Phillips curve. Okun's law is included to make the transition between output and employment;
  3. Monetary policy is described by a money supply or interest rate rule;
  4. The credit markets and financial intermediation are built off links between the policy interest rate and the rates of return on, and/or demand and supply functions for, other assets;
  5. The balance of payments and the exchange rate enter through the balance of payments identity, namely that the current account surplus must be equal to the capital account deficit, corrected for official intervention;
  6. Dynamics of stocks: There are dynamic equations for the capital stock, the stock of government debt, and the external debt.

When I was an undergraduate at the London School of Economics (LSE) between 1962 and 1965, we learned the IS-LM model, which combined the aggregate demand equation (1) with the money market equilibrium condition set out in (3). That was the basic understanding of the Keynesian model as crystallized by John Hicks, Franco Modigliani, and others, in which it was easy to add detail to the demand functions for private-sector consumption, C; for investment, I; for government spending, G; and for net exports. The Keynesian emphasis on aggregate demand and its determinants is one of the basic innovations of the Keynesian revolution, and one that makes it far easier to understand and explain what factors are determining output and employment.

Continuing down the list, on price and wage dynamics, the Phillips curve has flattened somewhat since the 1950s and 1960s. Further, the role of expectations of inflation in the Phillips curve has been developed far beyond what was understood when A.W. Phillips--who was a New Zealander, an LSE faculty member, and a statistician and former engineer--discovered what later became the Phillips curve. The difference between the short- and long-run Phillips curves, which is now a staple of textbooks, was developed by Milton Friedman and Edmund Phelps, and the effect of making expectations rational or model consistent was emphasized by Robert Lucas, whose islands model provided an imperfect information reason for a nonvertical short-run Phillips curve. In Okun's law, the Okun coefficient--the coefficient specifying how much a change in the unemployment rate affects output--appears to have declined over time. So has the trend rate of productivity growth, which is a critical determinant of future levels of per capita income.

In (3), the monetary equilibrium condition, the monetary policy decision was typically represented by the money stock at the LSE and perhaps also at the Massachusetts Institute of Technology (MIT) after the Keynesian revolution (after all, "L" represents the liquidity preference function and "M" the supply of money); now the money supply rule is replaced by an interest-rate setting rule, for instance a reaction function of some form, or by a calculated "optimal" policy based on a loss function.

The development of the flexible inflation-targeting approach to monetary policy is one of the major achievements of modern macroeconomics. Flexible inflation targeting allows for flexibility in the speed with which the monetary authority plans on returning to the target inflation rate, and is thereby close to the dual mandate that the law assigns to the Fed.

A great deal of progress has been made in developing the credit and financial intermediation block. As early as the 1960s, each of James Tobin, Milton Friedman, and Karl Brunner and Alan Meltzer wrote out models with more fully explicated financial sectors, based on demand functions for assets other than money. Later the demand functions were often replaced by pricing equations derived from the capital asset pricing model. Researchers at the Fed have been bold enough to add estimated term and risk premiums to the determination of the returns on some assets. They have concluded, inter alia, that the arguments we used to make about how easy it would be to measure expected inflation if the government would introduce inflation-indexed bonds failed to take into account that returns on bonds are affected by liquidity and risk premiums. This means that one of the major benefits that were expected from the introduction of inflation-indexed bonds (Treasury Inflation-Protected Securities, generally called TIPS), namely that they would provide a quick and reliable measure of inflation expectations, has not been borne out, and that we still have to struggle to get reasonable estimates of expected inflation.

As students, we included NX, net exports, in the aggregate demand equation, but we did not generally solve for the exchange rate, possibly because the exchange rate was typically fixed. Later, in 1976, Rudi Dornbusch inaugurated modern international macroeconomics--and here I'm quoting from a speech by Ken Rogoff--in his famous overshooting model. As globalization of both goods and asset markets intensified over the next 40 years, the international aspects of trade in goods and assets occupied an increasingly important role in the economies of virtually all countries, not least the United States, and in macroeconomics.

At the LSE, we took a course on the British economy from Frank Paish, whose lectures consisted of a series of charts, accompanied by narrative from the professor. He made a strong impression on me in a lecture in 1963, in which he said, "You see, it (the balance of payments deficit) goes up and it goes down, and it is clear that we are moving toward a balance of payments crisis in 1964." I waited and I watched, and the crisis appeared on schedule, as predicted. But Paish also warned us that forecasting was difficult, and gave us the advice "Never look back at your forecasts--you may lose your nerve." I pass that wisdom on to those of you who need it.

I remember also my excitement at being told by a friend in a more senior class about the existence of econometric models of the entire economy. It was a wonderful moment. I understood that economic policy would from then on be easy: All that was necessary was to feed the data into the model and work out at what level to set the policy parameters. Unfortunately, it hasn't worked out that way. On the use of econometric models, I think often of something Paul Samuelson once said: "I'd rather have Bob Solow's views than the predictions of a model. But I'd rather have Solow with a model than without one."

We learned a lot at the LSE. But wonderful as it was to be in London, and to meet people from all over the world for the first time, and to be able to travel to Europe and even to the Soviet Union with a student group, and to ski for the first time in my life in Austria, it gradually became clear to me that the center of the academic economics profession was not in London or Oxford or Cambridge, but in the United States.

There was then the delicate business of applying to graduate school. There was a strong Chicago tendency among many of the lecturers at the LSE, but I wanted to go to MIT. When asked why, I gave a simple answer: "Samuelson and Solow." Fortunately, I got into MIT and had the opportunity of getting to know Samuelson and Solow and other great professors. And I also met the many outstanding students who were there at the time, among them Robert Merton. I took courses from Samuelson and Solow and other MIT stars, and I wrote my thesis under the guidance of Paul Samuelson and Frank Fisher. From there, my first job was at the University of Chicago--and I understood that I was very lucky to have been able to learn from the great economists at both MIT and Chicago. Among the many things I learned at Chicago was a Milton Friedman saying: "Man may not be rational, but he's a great rationalizer," which is a quote that often comes to mind when listening to stock market analysts.

After four years at Chicago, I returned to the MIT Department of Economics, and thought that I would never leave--even more so when MIT succeeded in persuading Rudi Dornbusch, whom I had met when he was a student at Chicago, to move to MIT--thus giving him too the benefit of having learned his economics at both Chicago and MIT, and giving MIT the pleasure and benefit of having added a superb economist and human being to the collection of such people already present.

MIT was still heavily involved in developing growth theory at the time I was a Ph.D. student there, from 1966 to 1969. We students were made aware of Kaldor's stylized facts about the process of growth, presented in his 1957 article "A Model of Economic Growth." They were:

  1. The shares of national income received by labor and capital are roughly constant over long periods of time.
  2. The rate of growth of the capital stock per worker is roughly constant over long periods of time.
  3. The rate of growth of output per worker is roughly constant over long periods of time.
  4. The capital/output ratio is roughly constant over long periods of time.
  5. The rate of return on investment is roughly constant over long periods of time.
  6. The real wage grows over time.

Well, that was then, and many of the problems we face in our economy now relate to the changes in the stylized facts about the behavior of the economy: Every one of Kaldor's stylized facts is no longer true, and unfortunately the changes are mostly in a direction that complicates the formulation of economic policy.

While the basic approach outlined so far remains valid, and can be used to address many macroeconomic policy issues, I would like briefly to take up several topics in more detail. Some of them are issues that have remained central to the macroeconomic agenda over the past 50 years, some have to my regret fallen off the agenda, and others are new to the agenda.

  1. Inflation and unemployment: Estimated Phillips curves appear to be flatter than they were estimated to be many years ago--in terms of the textbooks, Phillips curves appear to be closer to what used to be called the Keynesian case (flat Phillips curve) than to the classical case (vertical Phillips curve). Since U.S. inflation is now below our 2 percent target, and since unemployment is in the vicinity of full employment, it is sometimes argued that the link between unemployment and inflation must have been broken. I don't believe that. Rather, the link has never been very strong, but it exists, and we may well at present be seeing the first stirrings of an increase in the inflation rate--something that we would like to happen.
  2. Productivity and growth: The rate of productivity growth in the United States and in much of the world has fallen dramatically in the past 20 years. The table shows calculated rates of annual productivity growth for the United States over three periods: 1952 to 1973; 1974 to 2007; and the most recent period, 2008 to 2015. After having been 3 percent and 2.1 percent in the first two periods, the annual rate of productivity growth has fallen to 1.2 percent in the period since the start of the global financial crisis.

    The right guide to thinking in this case is given by a famous Herbert Stein line: "The difference between a growth rate of 1 percent and 2 percent is 100 percent." Why? Productivity growth is a major determinant of long-term growth. At a 1 percent growth rate, it takes income 70 years to double. At a 2 percent growth rate, it takes 35 years to double. That is to say, with a growth rate of 1 percent per capita, it takes two generations for per capita income to double; at a 2 percent per capita growth rate, it takes one generation for per capita income to double. That is a massive difference, one that would very likely have severe consequences for the national mood, and possibly for economic policy. That is to say, there are few issues more important for the future of our economy, and those of every other country, than the rate of productivity growth.

    At this stage, we simply do not know what will happen to productivity growth. Robert Gordon of Northwestern University has just published an extremely interesting and pessimistic book that argues we will have to accept the fact that productivity will not grow in future at anything like the rates of the period before 1973. Others look around and see impressive changes in technology and cannot believe that productivity growth will not move back closer to the higher levels of yesteryear.7 A great deal of work is taking place to evaluate the data, but so far there is little evidence that data difficulties account for a significant part of the decline in productivity growth as calculated by the Bureau of Labor Statistics.8

  3. The ZLB and the effectiveness of monetary policy: From December 2008 to December 2015, the federal funds rate target set by the Fed was a range of 0 to 1/4 percent, a range of rates that was described as the ZLB (zero lower bound).9 Between December 2008 and December 2014, the Fed engaged in QE--quantitative easing--through a variety of programs. Empirical work done at the Fed and elsewhere suggests that QE worked in the sense that it reduced interest rates other than the federal funds rate, and particularly seems to have succeeded in driving down longer-term rates, which are the rates most relevant to spending decisions.

    Critics have argued that QE has gradually become less effective over the years, and should no longer be used. It is extremely difficult to appraise the effectiveness of a program all of whose parameters have been announced at the beginning of the program. But I regard it as significant with respect to the effectiveness of QE that the taper tantrum in 2013, apparently caused by a belief that the Fed was going to wind down its purchases sooner than expected, had a major effect on interest rates.

    More recently, critics have argued that QE, together with negative interest rates, is no longer effective in either Japan or in the euro zone. That case has not yet been empirically established, and I believe that central banks still have the capacity through QE and other measures to run expansionary monetary policies, even at the zero lower bound.

  4. The monetary-fiscal policy mix: There was once a great deal of work on the optimal monetary-fiscal policy mix. The topic was interesting and the analysis persuasive. Nonetheless the subject seems to be disappearing from the public dialogue; perhaps in ascendance is the notion that--except in extremis, as in 2009--activist fiscal policy should not be used at all. Certainly, it is easier for a central bank to change its policies than for a Treasury or Finance Ministry to do so, but it remains a pity that the fiscal lever seems to have been disabled.
  5. The financial sector: Carmen Reinhart and Ken Rogoff's book, This Time Is Different, must have been written largely before the start of the great financial crisis. I find their evidence that a recession accompanied by a financial crisis is likely to be much more serious than an ordinary recession persuasive, but the point remains contentious. Even in the case of the Great Recession, it is possible that the U.S. recession got a second wind when the euro-zone crisis worsened in 2011. But no one should forget the immensity of the financial crisis that the U.S. economy and the world went through following the bankruptcy of Lehman Brothers--and no one should forget that such things could happen again.

    The subsequent tightening of the financial regulatory system under the Dodd-Frank Act was essential, and the complaints about excessive regulation and excessive demands for banks to hold capital betray at best a very short memory. We, the official sector and particularly the regulatory authorities, do have an obligation to try to minimize the regulatory and other burdens placed on the private sector by the official sector--but we have a no less important obligation to try to prevent another financial crisis. And we should also remember that the shadow banking system played an important role in the propagation of the financial crisis, and endeavor to reduce the riskiness of that system.
  6. The economy and the price of oil: For some time, at least since the United States became an oil importer, it has been believed that a low price of oil is good for the economy. So when the price of oil began its descent below $100 a barrel, we kept looking for an oil-price-cut dividend. But that dividend has been hard to discern in the macroeconomic data. Part of the reason is that as a result of the rapid expansion of the production of oil from shale, total U.S. oil production had risen rapidly, and so a larger part of the economy was adversely affected by the decline in the price of oil. Another part is that investment in the equipment and structures needed for shale oil production had become an important component of aggregate U.S. investment, and that component began a rapid decline. For these reasons, although the United States has remained an oil importer, the decrease in the world price of oil had a mixed effect on U.S. gross domestic product. There is reason to believe that when the price of oil stabilizes, and U.S. shale oil production reaches its new equilibrium, the overall effect of the decline in the price of oil will be seen to have had a positive effect on aggregate demand in the United States, since lower energy prices are providing a noticeable boost to the real incomes of households.
  7. Secular stagnation: During World War II in the United States, many economists feared that at the end of the war, the economy would return to high pre-war levels of unemployment--because with the end of the war, demobilization, and the massive reduction that would take place in the defense budget, there would not be enough demand to maintain full employment.

    Thus was born or renewed the concept of secular stagnation--the view that the economy could find itself permanently in a situation of low demand, less than full employment, and low growth.10 That is not what happened after World War II, and the thought of secular stagnation was correspondingly laid aside, in part because of the growing confidence that intelligent economic policies--fiscal and monetary--could be relied on to help keep the economy at full employment with a reasonable growth rate.

    Recently, Larry Summers has forcefully restated the secular stagnation hypothesis, and argued that it accounts for the current slowness of economic growth in the United States and the rest of the industrialized world. The theoretical case for secular stagnation in the sense of a shortage of demand is tied to the question of the level of the interest rate that would be needed to generate a situation of full employment. If the equilibrium interest rate is negative, or very small, the economy is likely to find itself growing slowly, and frequently encountering the zero lower bound on the interest rate.

    Research has shown a declining trend in estimates of the equilibrium interest rate. That finding has become more firmly established since the start of the Great Recession and the global financial crisis.11 Moreover, the level of the equilibrium interest rate seems likely to rise only gradually to a longer-run level that would still be quite low by historical standards.

    What factors determine the equilibrium interest rate? Fundamentally, the balance of saving and investment demands. Several trends have been cited as possible factors contributing to a decline in the long-run equilibrium real rate. One likely factor is persistent weakness in aggregate demand. Among the many reasons for that, as Larry Summers has noted, is that the amount of physical capital that the revolutionary information technology firms with high stock market valuations have needed is remarkably small. The slowdown of productivity growth, which as already mentioned has been a prominent and deeply concerning feature of the past six years, is another important factor.12 Others have pointed to demographic trends resulting in there being a larger share of the population in age cohorts with high saving rates.13 Some have also pointed to high saving rates in many emerging market countries, coupled with a lack of suitable domestic investment opportunities in those countries, as putting downward pressure on rates in advanced economies--the global savings glut hypothesis advanced by Ben Bernanke and others at the Fed about a decade ago.14

    Whatever the cause, other things being equal, a lower level of the long-run equilibrium real rate suggests that the frequency and duration of future episodes in which monetary policy is constrained by the ZLB will be higher than in the past. Prior to the crisis, some research suggested that such episodes were likely to be relatively infrequent and generally short lived.15 The past several years certainly require us to reconsider that basic assumption. Moreover, recent experience in the United States and other countries has taught us that conducting monetary policy at the effective lower bound is challenging.16 And while unconventional policy tools such as forward guidance and asset purchases have been extremely helpful and effective, all central banks would prefer a situation with positive interest rates, restoring their ability to use the more traditional interest rate tool of monetary policy.17

    The answer to the question "Will the equilibrium interest rate remain at today's low levels permanently?" is also that we do not know. Many of the factors that determine the equilibrium interest rate, particularly productivity growth, are extremely difficult to forecast. At present, it looks likely that the equilibrium interest rate will remain low for the policy-relevant future, but there have in the past been both long swings and short-term changes in what can be thought of as equilibrium real rates.

    Eventually, history will give us the answer. But it is critical to emphasize that history's answer will depend also on future policies, monetary and other, notably including fiscal policy.
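Herbert Stein's arithmetic from the productivity discussion above is easy to verify. The short sketch below (the `doubling_time` helper is illustrative, not part of the lecture) uses the exact compound-growth formula, log(2)/log(1+g):

```python
import math

def doubling_time(growth_rate):
    """Years for income to double at a constant annual growth rate."""
    return math.log(2) / math.log(1 + growth_rate)

print(round(doubling_time(0.01)))  # about 70 years at 1 percent growth
print(round(doubling_time(0.02)))  # about 35 years at 2 percent growth
```

The familiar "rule of 70" (70 divided by the growth rate in percent) is the first-order approximation of this formula, and it is why halving productivity growth doubles the wait for living standards to double.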

Concluding Remarks
Well, are the answers all different than they were 50 years ago? No. The basic framework we learned a half-century ago remains extremely useful. But also yes: Some of the answers are different, because the problems they deal with were not evident fifty years ago and so were not on previous exams. So the advice to potential policymakers is simple: Learn as much as you can, for most of it will come in useful at some stage of your career; but never forget that identifying what is happening in the economy is essential to your ability to do your job, and for that you need to keep your eyes, your ears, and your mind open, and with regard to your mouth--to use it with caution.

Many thanks again for this award and this opportunity to speak with you.

References
Bernanke, Ben S. (2005). "The Global Saving Glut and the U.S. Current Account Deficit," speech delivered at the Homer Jones Lecture, St. Louis, April 14.

Blanchard, Olivier (2014). "Where Danger Lurks: The Recent Financial Crisis Has Taught Us to Pay Attention to Dark Corners, Where the Economy Can Malfunction Badly," Finance and Development, vol. 51 (September), pp. 28-31.

-------- (2016). "The U.S. Phillips Curve: Back to the 60s?" Policy Brief 16-1. Washington: Peterson Institute for International Economics, January.

Blanchard, Olivier, Eugenio Cerutti, and Lawrence Summers (2015). "Inflation and Activity--Two Explorations and Their Monetary Policy Implications," IMF Working Paper WP/15/230. Washington: International Monetary Fund, November.

Blanchard, Olivier, and John Simon (2001). "The Long and Large Decline in U.S. Output Volatility," Brookings Papers on Economic Activity, 1, pp. 135-74.

Brunner, Karl, and Allan H. Meltzer (1972). "Money, Debt, and Economic Activity," Journal of Political Economy, vol. 80 (September-October), pp. 951-77.

Byrne, David M., John G. Fernald, and Marshall Reinsdorf (forthcoming). "Does the United States Have a Productivity Problem or a Measurement Problem?" Brookings Papers on Economic Activity.

Caballero, Ricardo J., Emmanuel Farhi, and Pierre-Olivier Gourinchas (2008). "An Equilibrium Model of 'Global Imbalances' and Low Interest Rates," American Economic Review, vol. 98 (1), pp. 358-93.

Daly, Mary C., John G. Fernald, Òscar Jordà, and Fernanda Nechio (2014). "Output and Unemployment Dynamics," Working Paper Series 2013-32. San Francisco: Federal Reserve Bank of San Francisco, November.

-------- (2014). "Interpreting Deviations from Okun's Law," FRBSF Economic Letter 2014-12. San Francisco: Federal Reserve Bank of San Francisco.

D'Amico, Stefania, Don H. Kim, and Min Wei (2014). "Tips from TIPS: The Informational Content of Treasury Inflation-Protected Security Prices," Finance and Economics Discussion Series 2014-24. Washington: Board of Governors of the Federal Reserve System, January.

Dornbusch, Rudiger (1976). "Expectations and Exchange Rate Dynamics," Journal of Political Economy, vol. 84 (December), pp. 1161-76.

Dornbusch, Rudiger, Stanley Fischer, and Richard Startz (2014). Macroeconomics, 12th ed. New York: McGraw-Hill Education.

Fischer, Stanley (forthcoming). "Monetary Policy, Financial Stability, and the Zero Lower Bound," American Economic Review (Papers and Proceedings).

Friedman, Milton (1968). "The Role of Monetary Policy," American Economic Review, vol. 58 (March), pp. 1-17.

Gordon, Robert J. (2014). "The Demise of U.S. Economic Growth: Restatement, Rebuttal, and Reflections," NBER Working Paper Series 19895. Cambridge, Mass.: National Bureau of Economic Research, February.

-------- (2016). The Rise and Fall of American Growth: The U.S. Standard of Living since the Civil War. Princeton, N.J.: Princeton University Press.

Hall, Robert E. (2014). "Quantifying the Lasting Harm to the U.S. Economy from the Financial Crisis," in Jonathan Parker and Michael Woodford, eds., NBER Macroeconomics Annual 2014, vol. 29. Chicago: University of Chicago Press.

Hamilton, James D., Ethan S. Harris, Jan Hatzius, and Kenneth D. West (2015). "The Equilibrium Real Funds Rate: Past, Present and Future," NBER Working Paper Series 21476. Cambridge, Mass.: National Bureau of Economic Research, August.

Hicks, John R. (1937). "Mr. Keynes and the 'Classics': A Suggested Interpretation," Econometrica, vol. 5 (April), pp. 147-59.

Johannsen, Benjamin K., and Elmar Mertens (2016). "The Expected Real Interest Rate in the Long Run: Time Series Evidence with the Effective Lower Bound," FEDS Notes. Washington: Board of Governors of the Federal Reserve System, February 9.

Jones, Charles I., and Paul M. Romer (2010). "The New Kaldor Facts: Ideas, Institutions, Population, and Human Capital," American Economic Journal: Macroeconomics, vol. 2 (January), pp. 224-45.

Kaldor, Nicholas (1957). "A Model of Economic Growth," Economic Journal, vol. 67 (December), pp. 591-624.

Keynes, John Maynard (1936). The General Theory of Employment, Interest and Money. London: Macmillan.

Kiley, Michael T. (2015). "What Can the Data Tell Us about the Equilibrium Real Interest Rate?" Finance and Economics Discussion Series 2015-077. Washington: Board of Governors of the Federal Reserve System, August.

Knotek, Edward S., II (2007). "How Useful Is Okun's Law?" Federal Reserve Bank of Kansas City, Economic Review, Fourth Quarter, pp. 73-103.

Laubach, Thomas, and John C. Williams (2003). "Measuring the Natural Rate of Interest," Review of Economics and Statistics, vol. 85 (November), pp. 1063-70.

Lucas, Robert E., Jr. (1972). "Expectations and the Neutrality of Money," Journal of Economic Theory, vol. 4 (April), pp. 103-24.

Mendoza, Enrique G., Vincenzo Quadrini, and José-Víctor Ríos-Rull (2009). "Financial Integration, Financial Development, and Global Imbalances," Journal of Political Economy, vol. 117 (3), pp. 371-416.

Modigliani, Franco (1944). "Liquidity Preference and the Theory of Interest and Money," Econometrica, vol. 12 (January), pp. 45-88.

Mokyr, Joel, Chris Vickers, and Nicolas L. Ziebarth (2015). "The History of Technological Anxiety and the Future of Economic Growth: Is This Time Different?" Journal of Economic Perspectives, vol. 29 (Summer), pp. 31-50.

Obstfeld, Maurice, and Kenneth Rogoff (1996). Foundations of International Macroeconomics. Cambridge, Mass.: MIT Press.

Okun, Arthur M. (1962). "Potential GNP: Its Measurement and Significance," Proceedings of the Business and Economics Statistics Section of the American Statistical Association, pp. 98-104.

Phelps, Edmund S. (1967). "Phillips Curves, Expectations of Inflation and Optimal Unemployment over Time," Economica, vol. 34 (August), pp. 254-81.

Reifschneider, David, and John C. Williams (2000). "Three Lessons for Monetary Policy in a Low-Inflation Era," Journal of Money, Credit, and Banking, vol. 32 (November), pp. 936-66.

Reinhart, Carmen M., and Kenneth S. Rogoff (2009). This Time Is Different: Eight Centuries of Financial Folly. Princeton, N.J.: Princeton University Press.

Rogoff, Kenneth (2001). "Dornbusch's Overshooting Model after Twenty-Five Years," speech delivered at the Mundell-Fleming Lecture, Second Annual Research Conference, International Monetary Fund, Washington, November 30 (revised January 22, 2002).

Solow, Robert M. (2004). "Introduction: The Tobin Approach to Monetary Economics," Journal of Money, Credit, and Banking, vol. 36 (August), pp. 657-63.

Stock, James H., and Mark W. Watson (2003). "Has the Business Cycle Changed and Why?" NBER Macroeconomics Annual 2002, vol. 17 (January).

Tobin, James (1969). "A General Equilibrium Approach to Monetary Theory," Journal of Money, Credit, and Banking, vol. 1 (February), pp. 15-29.

U.S. Executive Office of the President, Council of Economic Advisers (2015). Long-Term Interest Rates: A Survey. Washington: EOP.

Williams, John C. (2013). "A Defense of Moderation in Monetary Policy," Working Paper Series 2013-15. San Francisco: Federal Reserve Bank of San Francisco, July.

Woodford, Michael (2010). "Financial Intermediation and Macroeconomic Analysis," Journal of Economic Perspectives, vol. 24 (Fall), pp. 21-44.


1. I am grateful to David Lopez-Salido, Andrea Ajello, Elmar Mertens, Stacey Tevlin, and Bill English of the Federal Reserve Board for their assistance. Views expressed are mine, and are not necessarily those of the Federal Reserve Board or the Federal Open Market Committee.

2. A fuller description of the equations is contained in the appendix.

3. See Blanchard (2016).

4. See D'Amico, Kim, and Wei (2014).

5. See Dornbusch (1976) and Rogoff (2001).

6. See Jones and Romer (2010).

7. See, for instance, Mokyr, Vickers, and Ziebarth (2015).

8. See Byrne, Fernald, and Reinsdorf (forthcoming).

9. Inside the Fed, the range of 0 to 1/4 percent is generally called the ELB, the effective lower bound.

10. I am distinguishing in this section between secular stagnation as being caused by a deficiency of aggregate demand and another view, that output growth will be very slow in future because productivity growth will be very low. The view that future productivity growth will be very low has already been discussed, with the conclusion that we do not have a good basis for predictions of its future level, and that we simply do not know whether future productivity growth will be extremely low or higher than it has been recently. There is no shortage of views on this issue among economists, but the views to some extent appear to depend on whether the economist making the prediction is an optimist or a pessimist.

11. This research includes recent work by Johannsen and Mertens (2016) and Kiley (2015) that uses extensions of the original Laubach and Williams (2003) framework. An international perspective on medium-to-long-run real interest rates is provided by U.S. Executive Office of the President (2015). Reinhart and Rogoff (2009) and Hall (2014) discuss the long-lived effects of financial crises on economic performance. See also Hamilton and others (2015). I have, in addition, drawn on Fischer (forthcoming).

12. It is also a major factor explaining the phenomenon of the economy's impressive performance on the jobs front during a period of historically slow growth.

13. See, for instance, Gordon (2014, 2016).

14. See Bernanke (2005). See also the recent work by Caballero, Farhi, and Gourinchas (2008); and Mendoza, Quadrini, and Rios-Rull (2009).

15. See, for instance, Reifschneider and Williams (2000), Blanchard and Simon (2001), and Stock and Watson (2003).

16. For a discussion of various issues reviewed by the Federal Open Market Committee in late 2008 and 2009 regarding the complications of unconventional monetary policy at the ZLB, see the set of staff memos on the Board's website.

17. See Williams (2013).

Tuesday, March 22, 2016

'MMT and Mainstream Macro'

Simon Wren-Lewis:

MMT and mainstream macro: There were a lot of interesting and useful comments on my last post on MMT, plus helpful (for me) follow-up conversations. Many thanks to everyone concerned for taking the time. Before I say anything more let me make it clear where I am coming from. I’m on the same page as far as policy’s current obsession with debt is concerned. Where I seem to differ from some who comment on my blog, people who say they are following MMT, is whether you need to be concerned about debt when monetary policy is not constrained by the Zero Lower Bound. I say yes, they say no, but for reasons I could not easily understand.
This was the point of the ‘nothing new’ comment. It was not meant to be a put down. It was meant to suggest that a mainstream economist like myself could come to some of the same conclusions as MMT writers, and more to the point, just because I was a mainstream economist does not mean I misunderstood how government financing works. It was because I was getting comments from MMT followers that seemed nonsensical to me, but which should not have been nonsensical because the basics of MMT are understandable using mainstream theory. ...
What mainstream theory says is that some combination of monetary and fiscal policy can always end a recession caused by demand deficiency. Full stop: no ifs or buts. That is why we had fiscal expansion in 2009 in the US, UK, Germany, China and elsewhere. The contribution of some influential mainstream economists to this switch from fiscal stimulus to austerity in 2010 was minor at most, and to imagine otherwise does nobody any favours. The fact that policymakers went against basic macro theory tells us important things about the transmission mechanism of economic knowledge, which all economists have to address.

Brad DeLong:

Yes, Expansionary Fiscal Policy in the North Atlantic Would Solve Many of Our Problems. Why Do You Ask?: ... In my view, the economics of Abba Lerner—what is now called MMT—is not always right: It is not always possible for the government to spend freely to attain full employment, use monetary policy to keep the debt under control, and rely on rising inflation as the only signal needed of whether and when policy needs to be tightened. Why not? Because it is possible that the bond market can get itself into an unsustainable position, in which underlying inflationary pressures are masked until it is too late to rebalance government finances without a financial crisis.
But, in my view, right now the economics of Abba Lerner is 100% correct. The U.S. (and Europe!) should use expansionary fiscal policy to rebalance the economy at full employment and potential output. And interest rates are so low that doing so does not require any additional monetary policy steps to keep the debt under control.
Japan, alas, confronts us with a difficult and much more devilish program of economic policy. Partial and nearly painless debt repudiation via inflation and financial repression seems to me to be the best way forward—if that can be attained. But more on that anon.

Tuesday, February 02, 2016

Economics is Changing

Not much out there to excerpt and blog, so I threw down a few thoughts for you to tear apart:

I hear frequently that economics needs to change, and it has, at least in the questions we ask. Twenty years ago, the dominant conversation in economics was about the wonder of markets. We needed to free the banking system from regulations so it could do its important job of turning saving into productive investment unfettered by government interference. Trade barriers needed to come down to make everyone better off. There was little need to worry about monopoly power; markets were contestable, and the problem would take care of itself. Unions simply got in the way of our innovative, dynamic economy and needed to be broken so the market could do its thing and make everyone better off. Inequality was a good thing: it created the right incentives for people to work hard and try to get ahead, and the markets would ensure that everyone, from CEOs on down, would be paid according to their contribution to society. The problem wasn't that markets somehow distributed goods unfairly, or at least in a way at odds with marginal productivity theory; it was that some workers lacked the training to reap higher rewards. We simply needed to prepare people better to compete in modern, global markets; there was nothing fundamentally wrong with markets themselves. The move toward market fundamentalism wasn't limited to Republicans; Democrats joined in too.

That view is changing. Inequality has burst onto the economics research scene. Is rising inequality an inevitable feature of capitalism? Does the system reward people fairly? Can inequality actually inhibit economic growth? Not so long ago, the profession ignored these questions. Similarly for the financial sector. The profession has moved from singing the praises of the financial system and its ability to channel savings into the most productive investments to asking whether the financial sector produces as much value for society as was claimed in the past. We now ask whether banks are too big and powerful, whereas in the past that size was praised as a sign of how super-sized banks can do super-sized things for the economy, and compete with banks around the world. We have gone from saying that the shadow banking system can self regulate as it provides important financial services to homeowners and businesses to asking what types of regulation would be best. Economists used to pretty much ignore the financial sector altogether. It was a black box that simply turned S (saving) into I (investment), and did so efficiently, and there was no need to get into the details. Our modern financial system couldn't crash like those antiquated systems that were around during and before the Great Depression. There was no need to include it in our macro models, at least not in any detail, or even ask questions about what might happen if there was a financial crisis.

There are other changes too. Economists now question whether markets reward labor according to changes in productivity. Why is it that wages have stagnated even as worker productivity has gone up? Is it because bargaining power is asymmetric in labor markets, with firms having the advantage? What's the best way to elevate the working class? In the past, an argument was made that the best way to help everyone is to cut taxes for the wealthy, and all the great things they would do with the extra money and the incentives that tax cuts bring would trickle down and help the working class. That didn't happen and although there are still echoes of this argument on the political right, the questions have certainly changed. Much of the current research agenda in economics is devoted to understanding why wage income has stagnated for most people, and how to fix it. We've moved beyond "technology is the problem and better education is the answer" to asking whether the market system itself, and the market failures that come with it (including political influence over policy), has something to do with this outcome.

Fiscal policy is another example of change within the profession. Twenty years ago, nobody, well hardly anyone, was doing research on the impact of fiscal policy and its use as a countercyclical policy instrument. All of the focus was on monetary policy. Fiscal policy would only be needed in a severe recession, and that wouldn't happen in our modern economy, and in any case it wouldn't work (not everyone believed fiscal policy was ineffective, but many did). That has changed. Fiscal policy is now an integral component of many modern DSGE models, and -- surprise -- the models do not tell us fiscal policy is ineffective. Quite the opposite, it works well in deep recessions (though near full employment its effectiveness wanes).

Monetary policy has also come under scrutiny. In the past, the Taylor rule was praised as responsible for the Great Moderation. We had discovered the key to a stable economy. But the Great Recession changed that. We now wonder if other policy rules might serve as a better guidepost (e.g. nominal GDP targeting), we ask about negative interest rates, unconventional policy, all sorts of questions that were hardly asked or even imagined not so long ago. We wonder about regulation of the financial sector, and how to do it correctly (in the past, it was about how to remove regulations correctly).

I don't mean to suggest that economics is now on the right track. The old guard is still there, and still influential. But it's hard to deny that the questions we are asking have gone through a considerable evolution since the onset of the recession, and when questions change, new models and new tools are developed to answer them. The models do not come first -- models aren't built in search of questions, models are built to answer questions -- and the fact that we are asking new (and in my view much better) questions is a sign of further change to come.

Saturday, January 30, 2016

'Networks and Macroeconomic Shocks'

Daron Acemoglu, Ufuk Akcigit, and William Kerr:

Networks and macroeconomic shocks, VoxEU: How shocks reverberate throughout the economy has been a central question in macroeconomics. This column suggests that input-output linkages can play an important role in this issue. Supply-side (productivity) shocks impact the industry itself and those consuming its goods, while a demand-side shock affects the industry and its suppliers. The authors also find that the initial impact of an industry shock can be substantially amplified due to input-output linkages. 
How shocks propagate through the economy and contribute to fluctuations has been one of the central questions of macroeconomics. We argue that a major mechanism for such propagation is input-output linkages. Through input-output chains, shocks to one industry can influence ‘downstream’ industries that buy inputs from the affected industry, as well as ‘upstream’ industries that produce inputs for the affected industry. These interlinkages can propagate and potentially amplify the initial shock to further firms and industries not directly affected, influencing the macro economy to a much greater extent than the original shock could do on its own.
Introduction
The significance of the idea that a shock to one firm or disaggregated industry could be a major contributor to economic fluctuations was downplayed in Lucas’ (1977) famous essay on business cycles. Lucas suggested that due to the law of large numbers, idiosyncratic shocks to individual firms should cancel each other out when considering the economy in the aggregate, and therefore the broader impact should not be substantial. Recent research, however, has questioned this perspective. For example, Gabaix (2011) shows that when the firm size distribution has very fat tails, the power of the law of large numbers is diminished and shocks to large firms can overwhelm parallel shocks to small firms, allowing such shocks to have a substantial impact on the economy at large. Acemoglu et al. (2012) show how microeconomic shocks can be at the root of macroeconomic fluctuations when the input-output structure of an economy exhibits sufficient asymmetry in the role of some disaggregated industries as (major) suppliers to others.
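Gabaix's point can be illustrated with a back-of-the-envelope calculation. With independent unit-variance shocks to each firm, aggregate volatility is the square root of the sum of squared sales shares; the two firm-size distributions below are invented for illustration:

```python
def aggregate_volatility(sizes):
    """Std. dev. of the aggregate shock when each firm receives an
    independent unit-variance shock weighted by its sales share."""
    total = sum(sizes)
    return sum((s / total) ** 2 for s in sizes) ** 0.5

n = 1000
equal = [1.0] * n                          # identical firms
zipf = [1.0 / k for k in range(1, n + 1)]  # fat-tailed (Zipf-like) sizes

# With equal sizes the law of large numbers bites: volatility falls
# like 1/sqrt(n). With a fat tail, the largest firms dominate and
# aggregate volatility remains several times higher.
print(round(aggregate_volatility(equal), 3))  # 0.032
print(round(aggregate_volatility(zipf), 3))   # 0.171
```

With a thousand equal-sized firms, idiosyncratic shocks wash out almost completely; with a Zipf-like size distribution, granular shocks to the handful of giant firms survive aggregation, which is the heart of Gabaix's argument.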
In Acemoglu et al. (2016), we empirically document the role of input-output linkages as a mechanism for the transmission of industry-level shocks to the rest of the economy. Our approach differs from previous research in two primary ways.
  • First, whereas much prior work has focused on the medium-term implications of such network effects (e.g. over more than a decade), we emphasise the influence of these networks on short-term business cycles (e.g. over 1-3 years).
  • Second, we begin to separate types of shocks to the economy and the differences in how they propagate.
We build a model that predicts that supply-side (e.g. productivity, innovation) shocks primarily propagate downstream, whereas demand-side shocks (e.g. trade, government spending) propagate upstream. For example, a productivity shock to the tire industry will tend to strongly affect the downstream automobile industry, while a shock to government spending in the car industry will reverberate upstream to the tire industry.  We then demonstrate these findings empirically using four historical examples of industry-level shocks, two on the demand side and two on the supply side, and confirm the predictions of the model.
Model and prediction
We model an economy building on Long and Plosser (1983) and Acemoglu et al. (2012), in which each firm produces goods that are either consumed by other firms as inputs or sold in the final goods sector. The model predicts that supply-side (productivity) shocks impact the industry itself and those consuming its goods, while a demand-side shock affects the industry and its suppliers. The total impact of these shocks – taking into account that customers of customers will also be affected in response to supply-side shocks, and suppliers of suppliers will also be affected in response to demand-side shocks – is conveniently summarised by the Leontief inverse that played a central role in traditional input-output analysis.
The intuition behind the asymmetry in propagation for supply versus demand shocks relates to the Cobb-Douglas form of the production function and preferences. If productivity in a given industry is lowered by a shock, firms in that industry produce fewer goods and the price of their goods rises. Due to the Cobb-Douglas structure, these effects cancel each other out for upstream firms, leaving them unaffected, while downstream firms face the increase in prices and consequently lower their own production. On the other hand, if demand in a certain industry increases, firms in that industry increase production, necessitating a corresponding increase in input production by upstream firms. Because of constant returns to scale, however, the increased demand does not affect prices, and so downstream firms are unaffected.
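In our notation (not the column's), the upstream cancellation is one line. Under Cobb-Douglas, industry $j$ spends a fixed share $a_{ij}$ of its revenue on input $i$, and unit-elastic demand means a productivity shock moves $j$'s price and quantity inversely one-for-one, leaving revenue $p_j y_j$ unchanged:

\[
p_i x_{ij} \;=\; a_{ij}\, p_j y_j,
\qquad
\frac{d(p_j y_j)}{p_j y_j} \;=\; \frac{dp_j}{p_j} + \frac{dy_j}{y_j} \;=\; 0,
\]

so the upstream supplier's sales $p_i x_{ij}$ are unchanged and the shock propagates only downstream.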
We also incorporate into the model geographic spillovers, showing that shocks in a particular industry will also influence industries that tend to be concentrated in the same area, as shown empirically by Autor et al. (2013) and Mian and Sufi (2014). The idea is that a shock to the first industry will influence local demand generally, and therefore will change demand, output, and employment for other local producers.
Empirics
We test the model’s prediction by examining the implications of four shocks: changes in imports from China; changes in federal government spending; total factor productivity (TFP) shocks; and productivity shocks coming from foreign industry patents. The first two are demand-side shocks; the latter two affect the supply side. For each of these shocks, we show the effects on directly impacted industries as well as upstream and downstream effects. Our core industry-level data is taken from the NBER-CES Manufacturing Industry Database for the years 1991-2009, while input-output linkages were drawn from the Bureau of Economic Analysis’ 1992 Input-Output Matrix and the 1991 County Business Patterns Database.
For brevity we focus here on the first example, where changes in imports from China influence the demand in affected industries. Of course, rising import penetration in the US for a given industry could be endogenous and connected to other factors, such as sagging US productivity growth. We therefore instrument import penetration from China to the US with rising trade from China to eight non-US countries relative to the industry’s market size in the US, following Autor et al. (2013) and Acemoglu et al. (2015). Chinese imports to other countries can be taken as exogenous measures of the rise of China in trade over the last two decades.
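The logic of the instrument can be sketched on synthetic data. In the hypothetical setup below, a confounder u moves both import penetration x and industry growth y (as a sagging productivity trend would), while an instrument z (imports to other countries) shifts x but is independent of u. Plain OLS is then biased; the single-instrument IV (two-stage least squares) estimate recovers the true coefficient. All numbers are invented for illustration and bear no relation to the paper's estimates.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 200_000
beta_true = -0.5    # true effect of import penetration on industry growth

z = rng.normal(size=n)                  # instrument: China's exports to other countries
u = rng.normal(size=n)                  # unobserved confounder (productivity trend)
x = 0.8 * z + u + rng.normal(size=n)    # import penetration: moved by both z and u
y = beta_true * x + u + rng.normal(size=n)

# OLS slope cov(x, y)/var(x) is contaminated because cov(x, u) != 0
beta_ols = np.cov(x, y)[0, 1] / np.var(x)

# IV with one instrument: cov(z, y)/cov(z, x) strips out the confounding
beta_iv = np.cov(z, y)[0, 1] / np.cov(z, x)[0, 1]

print(beta_ols, beta_iv)
```

The IV estimate lands close to the true coefficient while OLS is pulled toward zero by the confounder, which is the bias the Autor et al. instrument is designed to avoid.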
The empirics confirm the predictions of our model. A one standard-deviation increase in imports from China reduces the affected industry’s value added growth by 3.4%, while a similar shock to consumers of that industry’s products leads to a 7.6% decline.
  • In other words, the upstream effect is more than twice as large as the effect on the directly hit industry in a basic regression.
  • Downstream effects, on the other hand, are of the opposite sign and statistically insignificant, confirming the model’s prediction.
Figure 1 shows the impulse response function when our framework is adjusted to allow for lags and measure multipliers. Again, a one standard-deviation shock to value added through trade produces network effects that are much greater than the own effects on the industry.
  • We calculate that the effect of a shock to one industry on the entire economy is over six times as large as the effect on the industry itself, due to input-output linkages.
Similar effects are found for employment, and the findings are shown to be robust under many different specification checks.

Figure 1. Response to one SD value-add shock from Chinese imports


The other three shocks – changes in government spending, TFP shocks and foreign patenting shocks – also broadly support the model’s predictions, with the first leading to upstream effects and the latter two leading to downstream effects. Similarly, extensions quantify that geographical proximity facilitates the propagation of the shocks, particularly those on the demand side. 
Conclusions
Shocks to particular industries can reverberate throughout the economy through networks of firms or industries that supply each other with inputs. Our work shows that these shocks are indeed powerfully transmitted through the input-output chain of the economy, and their initial impact can be substantially amplified. These findings open the way to a systematic investigation of the role of input-output linkages in underpinning rapid expansions and deep recessions, especially once we move away from simple, fully competitive models of the macro economy.
References
Acemoglu, D, U Akcigit, and W Kerr (2016), “Networks and the Macroeconomy: An Empirical Exploration”, NBER Macroeconomics Annual, forthcoming. NBER Working Paper 21344.
Acemoglu, D, V Carvalho, A Ozdaglar, and A Tahbaz-Salehi (2012), “The Network Origins of Aggregate Fluctuations”, Econometrica, 80:5, 1977-2016.
Acemoglu, D, D Autor, D Dorn, G Hanson, and B Price (2015), “Import Competition and the Great U.S. Employment Sag of the 2000s”, Journal of Labor Economics, 34(S1), S141-S198.
Autor, D, D Dorn, and G Hanson (2013), “The China Syndrome: Local Labor Market Effects of Import Competition in the United States”, American Economic Review, 103:6, 2121-2168.
Gabaix, X (2011), “The Granular Origins of Aggregate Fluctuations”, Econometrica, 79, 733-772.
Long, J and C Plosser (1983), “Real Business Cycles”, Journal of Political Economy, 91:1, 39-69.
Lucas, R (1977), “Understanding Business Cycles”, Carnegie Rochester Conference Series on Public Policy, 5, 7-29.
Mian, A and A Sufi (2014), “What Explains the 2007-2009 Drop in Employment?”, Econometrica, 82:6, 2197-2223.

Wednesday, January 13, 2016

'Is Mainstream Academic Macroeconomics Eclectic?'

Simon Wren-Lewis:

Is mainstream academic macroeconomics eclectic?: For economists, and those interested in macroeconomics as a discipline
Eric Lonergan has a short post that is well worth reading...; it makes an important point in a clear and simple way that cuts through a lot of the nonsense written on macroeconomics nowadays. The big models/schools of thought are not right or wrong, they are just more or less applicable to different situations. You need New Keynesian models in recessions, but Real Business Cycle models may describe some inflation-free booms. You need Minsky in a financial crisis, and in order to prevent the next one. As Dani Rodrik says, there are many models, and the key questions are about their applicability.
If we take that as given, the question I want to ask is whether current mainstream academic macroeconomics is also eclectic. ... My answer is yes and no.
Let’s take the five ‘schools’ that Eric talks about. ... Indeed the variety of models that academic macro currently uses is far wider than this.
Does this mean academic macroeconomics is fragmented into lots of cliques, some big and some small? Not really... This is because these models (unlike those of 40+ years ago) use a common language. ...
It means that the range of assumptions that models (DSGE models if you like) can make is huge. There is nothing formally that says every model must contain perfectly competitive labour markets where the simple marginal product theory of distribution holds, or even where there is no involuntary unemployment, as some heterodox economists sometimes assert. Most of the time individuals in these models are optimising, but I know of papers in the top journals that incorporate some non-optimising agents into DSGE models. So there is no reason in principle why behavioural economics could not be incorporated. If too many academic models do appear otherwise, I think this reflects the sociology of macroeconomics and the history of macroeconomic thought more than anything (see below).
It also means that the range of issues that models (DSGE models) can address is also huge. ...
The common theme of the work I have talked about so far is that it is microfounded. Models are built up from individual behaviour.
You may have noted that I have so far missed out one of Eric’s schools: Marxian theory. What Eric wants to point out here is clear in his first sentence: “Although economists are notorious for modelling individuals as self-interested, most macroeconomists ignore the likelihood that groups also act in their self-interest.” Here I think we do have to say that mainstream macro is not eclectic. Microfoundations is all about grounding macro behaviour in the aggregate of individual behaviour.
I have many posts where I argue that this non-eclecticism in terms of excluding non-microfounded work is deeply problematic. Not so much for an inability to handle Marxian theory (I plead agnosticism on that), but in excluding the investigation of other parts of the real macroeconomic world.  ...
The confusion goes right back, as I will argue in a forthcoming paper, to the New Classical Counter Revolution of the 1970s and 1980s. That revolution, like most revolutions, was not eclectic! It was primarily a revolution about methodology, about arguing that all models should be microfounded, and in terms of mainstream macro it was completely successful. It also tried to link this to a revolution about policy, about overthrowing Keynesian economics, and this ultimately failed. But perhaps as a result, methodology and policy get confused. Mainstream academic macro is very eclectic in the range of policy questions it can address, and conclusions it can arrive at, but in terms of methodology it is quite the opposite.

'The Validity of the Neo-Fisherian Hypothesis'

Narayana Kocherlakota:

Validity of the Neo-Fisherian Hypothesis: Warning: Super-Technical Material Follows

The neo-Fisherian hypothesis is as follows: If the central bank commits to peg the nominal interest rate at R, then the long-run level of inflation in the economy is increasing in R. Using finite horizon models, I show that the neo-Fisherian hypothesis is only valid if long-run inflation expectations rise at least one for one with the peg R. However, in an infinite horizon model, the neo-Fisherian hypothesis is always true. I argue that this result indicates why macroeconomists should use finite horizon models, not infinite horizon models. See this linked note and my recent NBER working paper for technical details.

In any finite horizon economy, the validity of the neo-Fisherian hypothesis depends on how sensitive long-run inflation expectations are to the specification of the interest rate peg.

  • If long-run inflation expectations rise less than one-for-one (or fall) with the interest rate peg, then the neo-Fisherian hypothesis is false.
  • If long-run inflation expectations rise at least one-for-one with the interest rate peg, then the neo-Fisherian hypothesis is true.

Intuitively, when the peg R is high, people anticipate tight future monetary policy. The future tightness of monetary policy pushes down on current inflation. The only way to offset this effect is for long-run inflation expectations to rise sufficiently in response to the peg.

In contrast, in an infinite horizon model, the neo-Fisherian hypothesis is valid - but only because of an odd discontinuity. As the horizon length converges to infinity, the level of inflation becomes infinitely sensitive to long-run inflation expectations. This means that, for almost all specifications of long-run inflation expectations, inflation converges to infinity or negative infinity as the horizon converges to infinity. Users of infinite horizon models typically discard all of these limiting “infinity” equilibria by setting the long-run expected inflation rate to be equal to the difference between R and r*. In this way, the use of an infinite horizon - as opposed to a long but finite horizon - creates a tight implicit restriction on the dependence of long-run inflation expectations on the interest rate peg.
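A toy version of this growing sensitivity (our own parameterisation of a textbook New Keynesian model, not Kocherlakota's setup) comes from iterating the IS and Phillips curves backward from a terminal date under a peg: the backward map has an eigenvalue above one, so the sensitivity of date-0 inflation to terminal (long-run) inflation expectations grows geometrically with the horizon.

```python
import numpy as np

# Standard NK parameters (illustrative values only)
beta, kappa, sigma = 0.99, 0.1, 1.0

# Under a nominal-rate peg, iterating the IS and Phillips curves backward gives
#   x_t  = x_{t+1} + sigma * pi_{t+1} + const
#   pi_t = kappa * x_t + beta * pi_{t+1} + const
# i.e. (x_t, pi_t) = A @ (x_{t+1}, pi_{t+1}) + const, with
A = np.array([
    [1.0,   sigma],
    [kappa, beta + kappa * sigma],
])

def sensitivity(horizon):
    """d pi_0 / d pi_T after iterating the peg economy back `horizon` periods."""
    return np.linalg.matrix_power(A, horizon)[1, 1]

s10, s20, s40 = sensitivity(10), sensitivity(20), sensitivity(40)
print(s10, s20, s40)   # grows geometrically: the largest eigenvalue of A exceeds 1
```

Doubling the horizon multiplies the sensitivity many times over, which is exactly why, in the infinite-horizon limit, only one knife-edge specification of long-run expectations avoids inflation diverging to plus or minus infinity.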

To summarize: The validity of the neo-Fisherian hypothesis depends on an empirical question: how do long-run inflation expectations depend on the central bank's peg? This empirical question is eliminated when we use infinite horizon models - but this is a reason not to use infinite horizon models.

In case you missed this from George Evans and Bruce McGough over the holidays (on learning models and the validity of the Neo-Fisherian Hypothesis, also "super-technical"):

The Neo-Fisherian View and the Macro Learning Approach

I've been surprised that none of the Neo-Fisherians have responded.

Thursday, January 07, 2016

'Confidence as a Political Device'

Simon Wren-Lewis:

Confidence as a political device: This is a contribution to the discussion about models started by Krugman, DeLong and Summers, and in particular to the use of confidence. (Martin Sandbu has an excellent summary, although as you will see I think he is missing something.) The idea that confidence can on occasion be important, and that it can be modeled, is not (in my view) in dispute. For example the very existence of banks depends on confidence (that depositors can withdraw their money when they wish), and when that confidence disappears you get a bank run.
But the leap from the statement that ‘in some circumstances confidence matters’ to ‘we should worry about bond market confidence in an economy with its own central bank in the middle of a depression’ is a huge one...
When people invoke the idea of confidence, other people (particularly economists) should be automatically suspicious. The reason is that it frequently allows those who represent the group whose confidence is being invoked to further their own self interest. The financial markets are represented by City or Wall Street economists, and you invariably see market confidence being invoked to support a policy position they have some economic or political interest in. Bond market economists never saw a fiscal consolidation they did not like, so the saying goes, so of course market confidence is used to argue against fiscal expansion. Employers drum up the importance of maintaining their confidence whenever taxes on profits (or high incomes) are involved. As I argue in this paper, there is a generic reason why financial market economists play up the importance of market confidence, so they can act as high priests. (Did these same economists go on about the dangers of rising leverage when confidence really mattered, before the global financial crisis?)
The general lesson I would draw is this. If the economics point towards a conclusion, and people argue against it based on ‘confidence’, you should be very, very suspicious. You should ask where is the model (or at least a mutually consistent set of arguments), and where is the evidence that this model or set of arguments is applicable to this case? Policy makers who go with confidence based arguments that fail these tests because it accords with their instincts are, perhaps knowingly, following the political agenda of someone else.

Sunday, January 03, 2016

'Musings on Whether We Consciously Know More or Less than What Is in Our Models…'

Brad DeLong:

Musings on Whether We Consciously Know More or Less than What Is in Our Models…: Larry Summers presents as an example of his contention that we know more than is in our models–that our models are more a filing system, and more a way of efficiently conveying part of what we know, than they are an idea-generating mechanism–Paul Krugman’s Mundell-Fleming lecture, and its contention that floating exchange-rate countries that can borrow in their own currency should not fear capital flight in a liquidity trap. He points to Olivier Blanchard et al.’s empirical finding that capital outflows do indeed appear to be not expansionary but contractionary ...

[There's quite a bit more in Brad's post.]

Wednesday, December 30, 2015

'The Neo-Fisherian View and the Macro Learning Approach'

I asked my colleagues George Evans and Bruce McGough if they would like to respond to a recent post by Simon Wren-Lewis on Woodford’s "reflexive equilibrium" approach to learning:

The neo-Fisherian view and the macro learning approach
George W. Evans and Bruce McGough
Economics Department, University of Oregon
December 30, 2015

Cochrane (2015) argues that low interest rates are deflationary, a view that is sometimes called neo-Fisherian. In that paper John Cochrane argues that raising the interest rate and pegging it at a higher level will raise the inflation rate in accordance with the Fisher equation, and he works through the details of this in a New Keynesian model.

Garcia-Schmidt and Woodford (2015) argue that the neo-Fisherian claim is incorrect and that low interest rates are both expansionary and inflationary. In making this argument Mariana Garcia-Schmidt and Michael Woodford use an approach that has a lot of common ground with the macro learning literature, which focuses on how economic agents might come to form expectations, and in particular whether coordination on a particular rational expectations equilibrium (REE) is plausible. This literature examines the stability of an REE under learning and has found that interest-rate pegs of the type discussed by Cochrane lead to REE that are not stable under learning. Garcia-Schmidt and Woodford (2015) obtain an analogous instability result using a new bounded-rationality approach that provides specific predictions for monetary policy. There are novel methodological and policy results in the Garcia-Schmidt and Woodford (2015) paper. However, we will here focus on the common ground with other papers in the learning literature that also argue against the neo-Fisherian claim.

The macro learning literature posits that agents start with boundedly rational expectations e.g. based on possibly non-RE forecasting rules. These expectations are incorporated into a “temporary equilibrium” (TE) environment that yields the model’s endogenous outcomes. The TE environment has two essential components: a decision-theoretic framework which specifies the decisions made by agents (households, firms etc.) given their states (values of exogenous and pre-determined endogenous state variables) and expectations;1 and a market-clearing framework that coordinates the agents’ decisions and determines the values of the model’s endogenous variables. It is useful to observe that, taken together, the two components of the TE environment yield the “TE-map” that takes expectations and (aggregate and idiosyncratic) states to outcomes.

The adaptive learning framework, which is the most popular formulation of learning in macro, proceeds recursively. Agents revise their forecast rules in light of the data realized in the previous period, e.g. by updating their forecast rules econometrically. The exogenous shocks are then realized, expectations are formed, and a new temporary equilibrium results. The equilibrium path under learning is defined recursively. One can then study whether the economy under adaptive learning converges over time to the REE of interest.2

The essential point of the learning literature is that an REE, to be credible, needs an explanation for how economic agents come to coordinate on it. This point is acute in models in which there are multiple RE solutions, as can arise in a wide range of dynamic macro models. This has been an issue in particular in the New Keynesian model, but it also arises, for example, in overlapping generations models and in RBC models with distortions. The macro learning literature provides a theory for how agents might learn over time to forecast rationally, i.e. to come to have RE (rational expectations). The adaptive learning approach found that agents will over time come to have rational expectations (RE) by updating their econometric forecasting models provided the REE satisfies “expectational stability” (E-stability) conditions. If these conditions are not satisfied then convergence to the REE will not occur and hence it is implausible that agents would be able to coordinate on the REE. E-stability then also acts as a selection device in cases in which there are multiple REE.

The adaptive learning approach has the attractive feature that the degree of rationality of the agents is natural: though agents are boundedly rational, they are still fairly sophisticated, estimating and updating their forecasting models using statistical learning schemes. For a wide range of models this gives plausible results. For example, in the basic Muth cobweb model, the REE is learnable if supply and demand have their usual slopes; however, the REE, though still unique, is not learnable if the demand curve is upward sloping and steeper than the supply curve. In an overlapping generations model, Lucas (1986) used an adaptive learning scheme to show that though the overlapping generations model of money has multiple REE, learning dynamics converge to the monetary steady state, not to the autarky solution. Early analytical adaptive learning results were obtained in Bray and Savin (1986) and the formal framework was greatly extended in Marcet and Sargent (1989). The book by Evans and Honkapohja (2001) develops the E-stability principle and includes many applications. Many more applications of adaptive learning have been published over the last fifteen years.
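The Muth cobweb result in the paragraph above can be reproduced in a few lines (all parameter values are our own, chosen only to illustrate). Agents learn the mean price with a decreasing-gain (recursive least squares) rule; the actual price each period is p_t = A + B * E[p_t] + noise, and the E-stability condition is B < 1, which holds for the usual slopes (B negative) and fails when demand slopes up more steeply than supply (B > 1).

```python
import numpy as np

rng = np.random.default_rng(1)

def cobweb_learning(A, B, T=20_000, a0=0.0, noise_sd=0.1):
    """Muth cobweb under adaptive (decreasing-gain) learning.
    Agents' perceived law of motion is a constant mean `a`; each period the
    price realises as p_t = A + B * a + noise, and `a` is updated by
    recursive least squares: a += (1/t) * (p_t - a)."""
    a = a0
    for t in range(1, T + 1):
        p = A + B * a + noise_sd * rng.normal()
        a += (p - a) / t
    return a

A = 2.0
ree_stable = A / (1 - (-0.5))   # REE mean price when B = -0.5 (usual slopes)
a_stable = cobweb_learning(A, B=-0.5)
a_unstable = cobweb_learning(A, B=1.5, a0=A / (1 - 1.5) + 1.0)  # start near REE

print(a_stable, ree_stable)   # converges to the REE when B < 1
print(a_unstable)             # wanders away from the REE when B > 1
```

Even though the REE is unique in both cases, learners coordinate on it only in the E-stable case; in the unstable case a small initial deviation is amplified period after period.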

There are other approaches to learning in macro that have a related theoretical motivation, e.g. the “eductive” approach of Guesnerie asks whether mental reasoning by hyper-rational agents, with common knowledge of the structure and of the rationality of other agents, will lead to coordination on an REE. A fair amount is known about the connections between the stability conditions of the alternative adaptive and eductive learning approaches.3 The Garcia-Schmidt and Woodford (2015) “reflective equilibrium” concept provides a new approach that draws on both the adaptive and eductive strands as well as on the “calculation equilibrium” learning model of Evans and Ramey (1992, 1995, 1998). These connections are outlined in Section 2 of Garcia-Schmidt and Woodford (2015).4

The key insight of these various learning approaches is that one cannot simply take RE (which in the nonstochastic case reduces to PF, i.e. perfect foresight) as given. An REE is an equilibrium that demands an explanation of how it is attained. The various learning approaches rely on a temporary equilibrium framework, outlined above, which goes back to Hicks (1946). A big advantage of the TE framework, when developed at the agent level and aggregated, is that in conjunction with the learning model an explicit causal story can be developed for how the economy evolves over time.

The lack of a TE or learning framework in Cochrane (2011, 2015) is a critical omission. Cochrane (2009) criticized the Taylor principle in NK models as requiring implausible assumptions on what the Fed would do to enforce its desired equilibrium path; however, this view simply reflects the lack of a learning perspective. McCallum (2009) argued that for a monetary rule satisfying the Taylor principle the usual RE solution used by NK modelers is stable under adaptive learning, while the non-fundamental bubble solution is not. Cochrane (2009, 2011) claimed that these results hinged on the observability of shocks. In our paper “Observability and Equilibrium Selection,” Evans and McGough (2015b), we develop the theory of adaptive learning when fundamental shocks are unobservable, and then, as a central application, we consider the flexible-price NK model used by Cochrane and McCallum in their debate. We carefully develop this application using an agent-level temporary equilibrium approach and closing the model under adaptive learning. We find that if the Taylor principle is satisfied, then the usual solution is robustly stable under learning, while the non-fundamental price-level bubble solution is not. Adaptive learning thus operates as a selection criterion and it singles out the usual RE solution adopted by proponents of the NK model. Furthermore, when monetary policy does not obey the Taylor principle then neither of the solutions is robustly stable under learning; an interest-rate peg is an extreme form of such a policy, and the adaptive learning perspective cautions that this will lead to instability. We discuss this further below.

The agent-level/adaptive learning approach used in Evans and McGough (2015b) allows us to specifically address several points raised by Cochrane. He is concerned that there is no causal mechanism that pins down prices. The TE map provides this, in the usual way, through market clearing given expectations of future variables. Cochrane also states that the lack of a mechanism means that the NK paradigm requires that the policymakers be interpreted as threatening to “blow up” the economy if the standard solution is not selected by agents.5 This is not the case. As we say in our paper (p. 24-5), “inflation is determined in temporary equilibrium, based on expectations that are revised over time in response to observed data. Threats by the Fed are neither made nor needed ... [agents simply] make forecasts the same way that time-series econometricians typically forecast: by estimating least-squares projections of the variables being forecasted on the relevant observables.”

Let us now return to the issue of interest rate pegs and the impact of changing the level of an interest rate peg. The central adaptive learning result is that interest rate pegs give REE that are unstable under learning. This result was first given in Howitt (1992). A complementary result was given in Evans and Honkapohja (2003) for time-varying interest rate pegs designed to optimally respond to fundamental shocks. As discussed above, Evans and McGough (2015b) show that the instability result also obtains when the fundamental shocks are not observable and the Taylor principle is not satisfied. The economic intuition in the NK model is very strong and is essentially as follows. Suppose we are at an REE (or PFE) at a fixed interest rate and with expected inflation at the level dictated by the Fisher equation. Suppose that there is a small increase in expected inflation. With a fixed nominal interest rate this leads to a lower real interest rate, which increases aggregate demand and output. This in turn leads to higher inflation, which under adaptive learning leads to higher expected inflation, destabilizing the system. (The details of the evolution of expectations and the model dynamics depend, of course, on the precise decision rules and econometric forecasting model used by agents). In an analogous way, expected inflation slightly lower than the REE/PFE level leads to cumulatively lower levels of inflation, output and expected inflation.
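The destabilising feedback described above can be written down directly. In the reduced form below (a static illustration with standard parameter values of our own choosing, not the authors' specification), actual inflation under a peg responds more than one-for-one to expected inflation because the slope beta + kappa*sigma exceeds one; adaptive updating then pushes expectations cumulatively away from the Fisherian steady state in whichever direction they start.

```python
# Reduced-form NK economy under an interest-rate peg (illustrative parameters):
#   x_t  = -sigma * (R - pe_t - rstar)    IS curve, holding expected output fixed
#   pi_t = beta * pe_t + kappa * x_t      Phillips curve
# so pi_t = (beta + kappa*sigma) * pe_t - kappa*sigma*(R - rstar), slope > 1.
beta, kappa, sigma = 0.99, 0.1, 1.0
R, rstar, gain = 0.05, 0.02, 0.5

slope = beta + kappa * sigma                        # 1.09 > 1: destabilising feedback
pi_ss = kappa * sigma * (R - rstar) / (slope - 1)   # Fisherian REE inflation rate

def simulate(pe0, periods=100):
    """Adaptive expectations: pe += gain * (pi - pe)."""
    pe = pe0
    for _ in range(periods):
        pi = slope * pe - kappa * sigma * (R - rstar)
        pe += gain * (pi - pe)
    return pe

above = simulate(pi_ss + 0.001)   # slight optimism: inflation spirals upward
below = simulate(pi_ss - 0.001)   # slight pessimism: cumulative deflation
print(pi_ss, above, below)
```

Starting a tenth of a percentage point above the steady state sends expectations cumulatively higher, and starting below sends them lower, which is the Howitt (1992) instability result in its simplest form.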

Returning to the NK model, additional insight is obtained by considering a nonlinear NK model with a global Taylor rule that leads to two steady states. This model was studied by Benhabib, Schmitt-Grohe and Uribe in a series of papers, e.g. Benhabib, Schmitt-Grohe, and Uribe (2001), which show that with an interest-rate rule following the Taylor principle at the target inflation rate, the zero lower bound (ZLB) on interest rates implies the existence of an unintended PFE low inflation or deflation steady state (and indeed a continuum of PFE paths to it) at which the Taylor principle does not hold (a special case of which is a local interest rate peg at the ZLB). From a PF/RE viewpoint these are all valid solutions. From the adaptive learning perspective, however, they differ in terms of stability. Evans, Guse, and Honkapohja (2008) and Benhabib, Evans, and Honkapohja (2014) show that the targeted steady state is locally stable under learning with a large basin of attraction, while the unintended low inflation/deflation steady state is not locally stable under learning: small deviations from it lead either back to the targeted steady state or into a deflation trap, in which inflation and output fall over time. From a learning viewpoint this deflation trap should be a major concern for policy.6,7

Finally, let us return to Cochrane (2015). Cochrane points out that at the ZLB peg there has been low but relatively steady (or gently declining) inflation in the US, rather than a serious deflationary spiral. This point echoes Jim Bullard’s concern in Bullard (2010) about the adaptive learning instability result: we effectively have an interest rate peg at the ZLB but we seem to have a fairly stable inflation rate, so does this indicate that the learning literature may here be on the wrong track?

This issue is addressed by Evans, Honkapohja, and Mitra (2015) (EHM2015). They first point out that from a policy viewpoint the major concern at the ZLB has not been low inflation or deflation per se. Instead it is its association with low levels of aggregate output, high levels of unemployment and a more general stagnation. However, the deflation steady state at the ZLB in the NK model has virtually the same level of aggregate output as the targeted steady state. The PFE at the ZLB interest rate peg is not a low level output equilibrium, and if we were in that equilibrium there would not be the concern that policy-makers have shown. (Temporary discount rate or credit market shocks of course can lead to recession at the ZLB but their low output effects vanish as soon as the shocks vanish).

In EHM2015 steady mild deflation is consistent with low output and stagnation at the ZLB.8 They note that many commentators have remarked that the behavior of the NK Phillips relation is different from standard theory at very low output levels. EHM2015 therefore imposes lower bounds on inflation and consumption, which can become relevant when agents become sufficiently pessimistic. If the inflation lower bound is below the unintended low steady state inflation rate, a third “stagnation” steady state is created at the ZLB. The stagnation steady state, like the targeted steady state, is locally stable under learning, and arises under learning if output and inflation expectations are too pessimistic. A large temporary fiscal stimulus can dislodge the economy from the stagnation trap, and a smaller stimulus can be sufficient if applied earlier. Raising interest rates does not help in the stagnation state and at an early stage it can push the economy into the stagnation trap.

In summary, the learning approach argues forcefully against the neo-Fisherian view.

Footnotes

1With infinitely-lived agents there are several natural implementations of optimizing decision rules, including the short-horizon Euler-equation or shadow-price learning approaches (see, e.g., Evans and Honkapohja (2006) and Evans and McGough (2015a)) and the anticipated-utility or infinite-horizon approaches of Preston (2005) and Eusepi and Preston (2010).

2An additional advantage of using learning is that learning dynamics give expanded scope for fitting the data as well as explaining experimental findings.

3The TE map is the basis for the map at the core of any specified learning scheme, which in turn determines the associated stability conditions.

4There are also connections to both the infinite-horizon learning approach to anticipated policy developed in Evans, Honkapohja, and Mitra (2009) and the eductive stability framework in Evans, Guesnerie, and McGough (2015).

5This point is repeated in Section 6.4 of Cochrane (2015): “The main point: such models presume that the Fed induces instability in an otherwise stable economy, a non-credible off-equilibrium threat to hyperinflate the economy for all but one chosen equilibrium.”

6And the risk of sinking into deflation clearly has been a major concern for policymakers in the US, during and following both the 2001 recession and the 2007-2009 recession. It has remained a concern in Europe, and in Japan it dates back to the 1990s.

7Experimental work with stylized NK economies has found that entering deflation traps is a real possibility. See Hommes and Salle (2015).

8See also Evans (2013) for a partial and less general version of this argument.

References

Benhabib, J., G. W. Evans, and S. Honkapohja (2014): “Liquidity Traps and Expectation Dynamics: Fiscal Stimulus or Fiscal Austerity?,” Journal of Economic Dynamics and Control, 45, 220—238.

Benhabib, J., S. Schmitt-Grohe, and M. Uribe (2001): “The Perils of Taylor Rules,” Journal of Economic Theory, 96, 40—69.

Bray, M., and N. Savin (1986): “Rational Expectations Equilibria, Learning, and Model Specification,” Econometrica, 54, 1129—1160.

Bullard, J. (2010): “Seven Faces of The Peril,” Federal Reserve Bank of St. Louis Review, 92, 339—352.

Cochrane, J. H. (2009): “Can Learnability Save New Keynesian Models?,” Journal of Monetary Economics, 56, 1109—1113.

_______ (2015): “Do Higher Interest Rates Raise or Lower Inflation?,” Working paper, University of Chicago Booth School of Business.

Dixon, H., and N. Rankin (eds.) (1995): The New Macroeconomics: Imperfect Markets and Policy Effectiveness. Cambridge University Press, Cambridge UK.

Eusepi, S., and B. Preston (2010): “Central Bank Communication and Expectations Stabilization,” American Economic Journal: Macroeconomics, 2, 235—271.

Evans, G. W. (2013): “The Stagnation Regime of the New Keynesian Model and Recent US Policy,” in Sargent and Vilmunen (2013), chap. 4.

Evans, G. W., R. Guesnerie, and B. McGough (2015): “Eductive Stability in Real Business Cycle Models,” mimeo.

Evans, G. W., E. Guse, and S. Honkapohja (2008): “Liquidity Traps, Learning and Stagnation,” European Economic Review, 52, 1438—1463.

Evans, G. W., and S. Honkapohja (2001): Learning and Expectations in Macroeconomics. Princeton University Press, Princeton, New Jersey.

_______ (2003): “Expectations and the Stability Problem for Optimal Monetary Policies,” Review of Economic Studies, 70, 807—824.

_______ (2006): “Monetary Policy, Expectations and Commitment,” Scandinavian Journal of Economics, 108, 15—38.

Evans, G. W., S. Honkapohja, and K. Mitra (2009): “Anticipated Fiscal Policy and Learning,” Journal of Monetary Economics, 56, 930—953.

_______ (2015): “Expectations, Stagnation and Fiscal Policy,” Working paper, University of Oregon.

Evans, G. W., and B. McGough (2015a): “Learning to Optimize,” mimeo, University of Oregon.

_______ (2015b): “Observability and Equilibrium Selection,” mimeo, University of Oregon.

Evans, G. W., and G. Ramey (1992): “Expectation Calculation and Macroeconomic Dynamics,” American Economic Review, 82, 207—224.

_______ (1995): “Expectation Calculation, Hyperinflation and Currency Collapse,” in Dixon and Rankin (1995), chap. 15, pp. 307—336.

_______ (1998): “Calculation, Adaptation and Rational Expectations,” Macroeconomic Dynamics, 2, 156—182.

Garcia-Schmidt, M., and M. Woodford (2015): “Are Low Interest Rates Deflationary? A Paradox of Perfect Foresight Analysis,” Working paper, Columbia University.

Hicks, J. R. (1946): Value and Capital, Second edition. Oxford University Press, Oxford UK.

Hommes, C. H., and I. Salle (2015): “Monetary and Fiscal Policy Design at the Zero Lower Bound: Evidence from the Lab,” mimeo, CeNDEF, University of Amsterdam.

Howitt, P. (1992): “Interest Rate Control and Nonconvergence to Rational Expectations,” Journal of Political Economy, 100, 776—800.

Lucas, Jr., R. E. (1986): “Adaptive Behavior and Economic Theory,” Journal of Business, Supplement, 59, S401—S426.

Marcet, A., and T. J. Sargent (1989): “Convergence of Least-Squares Learning Mechanisms in Self-Referential Linear Stochastic Models,” Journal of Economic Theory, 48, 337—368.

McCallum, B. T. (2009): “Inflation Determination with Taylor Rules: Is New-Keynesian Analysis Critically Flawed?,” Journal of Monetary Economics, 56, 1101—1108.

Preston, B. (2005): “Learning about Monetary Policy Rules when Long- Horizon Expectations Matter,” International Journal of Central Banking, 1, 81—126.

Sargent, T. J., and J. Vilmunen (eds.) (2013): Macroeconomics at the Service of Public Policy. Oxford University Press.

Sunday, December 20, 2015

'The FTPL Version of the Neo-Fisherian Proposition'

I've never paid much attention to the fiscal theory of the price level:

The FTPL version of the Neo-Fisherian proposition: The Neo-Fisherian doctrine is the idea that a permanent increase in a fixed nominal interest rate path will (eventually) raise the inflation rate. It is then suggested that current below-target inflation is a consequence of fixing rates at their lower bound, and that rates should be raised to increase inflation. David Andolfatto says there are two versions of this doctrine. The first he associates with the work of Stephanie Schmitt-Grohe and Martin Uribe, which I discussed here. He, like me, is not sold on this interpretation, for, I think, much the same reason. ... But he favours a different interpretation, based on the Fiscal Theory of the Price Level (FTPL).

Let me first briefly outline my own interpretation of the FTPL. This looks at the possibility of a fiscal regime where there is no attempt to stabilize debt. Government spending and taxes are set independently of the level or sustainability of government debt. The conventional and quite natural response to the possibility of that regime is to say it is unstable. But there is another possibility, which is that monetary policy stabilizes debt. Again a natural response would be to say that such a monetary policy regime is bound to be inconsistent with hitting an inflation target in the long run, but that is incorrect. ...

A constant nominal interest rate policy is normally thought to be indeterminate because the price level is not pinned down, even though the expected level of inflation is. In the FTPL, the price level is pinned down by the need for the government budget to balance at arbitrary and constant levels of taxes and spending. ...
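
The mechanism Wren-Lewis describes can be put in one equation: the price level adjusts so that the real value of nominal debt equals the present value of primary surpluses, P = B / PV(surpluses). A minimal numerical sketch (all figures below are invented purely for illustration):

```python
# FTPL arithmetic: the price level P equates real debt B/P to the PV of surpluses.
beta = 0.95      # discount factor (illustrative)
B = 100.0        # outstanding nominal government debt (illustrative)
s = 2.0          # constant real primary surplus per period (illustrative)

pv = s / (1 - beta)   # present value of the surplus stream (~40)
P = B / pv            # price level that balances the budget (~2.5)

# If taxes and spending never respond to debt, a weaker fiscal outlook
# (surpluses halved) must be absorbed entirely by a higher price level.
P_weak = B / ((s / 2) / (1 - beta))   # ~5.0: the price level doubles
```

The last line is the sense in which fiscal expectations, not monetary policy, pin down the price level in this regime: halving expected surpluses mechanically doubles P.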

I have a ... serious problem with this FTPL interpretation in the current environment. The belief that people would need to have for the FTPL to be relevant - that the government would not react to higher deficits by reducing government spending or raising taxes - does not seem to be credible, given that austerity is all about them doing exactly this despite being in a recession. As a result, I still find the Neo-Fisherian proposition, with either interpretation, somewhat unrealistic.

Thursday, December 17, 2015

'Sticky' Sales

Are prices sticky?:

“Sticky” sales, by Phil Davies, The Region, FRB Minneapolis: Sales are ubiquitous in the U.S. economy. Black Friday, Presidents’ Day, Mother’s Day, the Fourth of July; almost any occasion is cause for price cutting, accompanied by prominent signage, balloons and ads in traditional and social media to make the savings known far and wide. Retailers also put on sales ostensibly to clear out inventory, celebrate being on the sidewalk and go out of business.
Economists are interested in sales, not because they want cheap stuff (well, maybe they’re as partial to a deal as anyone), but because the role of sales has a bearing on a question central to macroeconomics: How flexible are prices? Price flexibility—how quickly prices adjust to changes in costs or demand—is crucial to understanding how shocks of any kind, including fiscal and monetary policy, affect economic performance.
Retail prices rise and fall frequently as merchants put items on sale and then restore the regular, or shelf, price. Indeed, the bulk of weekly and monthly variance in individual prices is due to sales promotions, not changes in regular prices. But there’s a lively debate in economics about the true flexibility of sale prices, from a macro perspective; for all their seeming fluidity, how readily do sales respond to changes in underlying costs and unexpected events that alter economic conditions?
How sale prices respond to wholesale cost shocks and broader macroeconomic shocks such as an increase in government spending or monetary policy stimulus, or a decrease in global aggregate demand, affects the flexibility of aggregate retail prices, with profound implications for monetary policy and the accuracy of macroeconomic models that guide policymaking.
Monetary policy as a tool for influencing the economy depends on sticky prices—the idea that prices don’t adjust instantly to shifts in demand caused by changes in money supply. If they did, an increase in demand for goods and services due to monetary easing would trigger an immediate price rise, suppressing demand and leaving economic output and employment unchanged. Thus, the stickier are prices, the more effective is monetary policy in modulating economic growth in the short and medium run. (Economists generally agree that money is neutral in the long run; that is, over a long enough period of time, prices are actually quite flexible, so monetary policy has no long-run effect on the real economy.)
Recent work by Ben Malin, a senior research economist at the Minneapolis Fed, provides insight into the import of temporary sales for price stickiness and thus monetary policy. In “Informational Rigidities and the Stickiness of Temporary Sales” (Minneapolis Fed Staff Report 513), Malin uses a rich data set of prices from a U.S. retail chain to investigate how retail prices adjust in response to wholesale price increases and other economic shocks. Joining Malin in the research are economists Emi Nakamura and Jón Steinsson of Columbia University, and marketing professors Eric Anderson and Duncan Simester of Northwestern University and MIT, respectively.
Surprisingly, the authors find no change in the frequency and depth of price cuts in response to shocks. Their analysis, which also taps micro price data underlying the consumer price index to look at how sales at a representative sample of U.S. retailers respond to booms and downturns, shows that merchants rely exclusively on regular prices to adapt to cost changes and evolving economic conditions. The research “supports the view that the behavior of regular prices is what matters for aggregate price flexibility,” Malin said in an interview. ...
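
The distinction the paper draws, posted prices that move every week while regular prices barely move, is easy to see in a toy series. The sale schedule and prices below are invented for illustration, not taken from the paper's data:

```python
# Toy weekly price series: a 20%-off sale every 4th week, plus one
# regular-price change at week 50. Sales make the posted price look highly
# flexible even though the regular price changes only once.
weeks = range(100)
regular = [10.0 if w < 50 else 11.0 for w in weeks]
posted = [r * 0.8 if w % 4 == 0 else r for w, r in zip(weeks, regular)]

posted_changes = sum(posted[t] != posted[t - 1] for t in range(1, 100))
regular_changes = sum(regular[t] != regular[t - 1] for t in range(1, 100))

print(posted_changes, regular_changes)   # 50 posted changes, 1 regular change
```

Measured naively, this price changes half the weeks; measured the way Malin and coauthors argue matters for aggregate flexibility, it changes once in two years.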

Wednesday, December 16, 2015

'The Methodology of Empirical Macroeconomics'

Brad DeLong:

Must-Read: Kevin Hoover: The Methodology of Empirical Macroeconomics: The combination of representative-agent modeling and utility-based “microfoundations” was always a game of intellectual Three-Card Monte. Why do you ask? Why don’t we fund sociologists to investigate for what reasons–other than being almost guaranteed to produce conclusions ideologically-pleasing to some–it has flourished for a generation in spite of having no empirical support and no theoretical coherence?
Kevin Hoover: The Methodology of Empirical Macroeconomics: “Given what we know about representative-agent models…
…there is not the slightest reason for us to think that the conditions under which they should work are fulfilled. The claim that representative-agent models provide microfoundations succeeds only when we steadfastly avoid the fact that representative-agent models are just as aggregative as old-fashioned Keynesian macroeconometric models. They do not solve the problem of aggregation; rather they assume that it can be ignored. ...

Tuesday, December 01, 2015

'The Centrality of Policy to How Long Recessions Last'

Simon Wren-Lewis:

The centrality of policy to how long recessions last: Paul Krugman reminds us that one of the most misguided questions in macroeconomics is ‘are business cycles self-correcting’. ... That answer ... only holds for a particular set of monetary policy rules (plus assumptions about fiscal policy).
It is very easy to see this. Suppose monetary policy is so astute that it knows perfectly all the shocks that hit the economy, and how interest rates influence that economy. In that case, absent the Zero Lower Bound, the business cycle would disappear, whatever the speed of price adjustment. Or... As Nick Rowe points out, if you had a really bad monetary policy, recessions could last forever.
A better answer to both questions (self-correction and how long business cycles last) is it all depends on monetary policy. Actually even that answer makes an implicit assumption, which is that there is no fiscal (de)stabilisation. The correct answer to both questions is that it depends first and foremost on policy. The speed of price adjustment only becomes central for particular policy rules.
So why do many economists (including occasionally some macroeconomists) get this wrong? ... It could be just an unfortunate accident. We are so used to teaching about fixed money supply rules (or in my case Taylor rules), that we can take those rules for granted. But there is also a more interesting answer. To some economists with a particular point of view, the idea that getting policy right might be essential to whether the economy self-corrects from shocks is troubling. ...
Focusing on this logic alone can lead to big mistakes. I have heard a number of times good economists say that in 2015 we can no longer be in a demand deficient recession, because price adjustment cannot be that slow. This mistake happens because they take good policy for granted..., with sub-optimal policy the length of recessions has much more to do with that bad policy than it has to do with the speed of price adjustment.
Just how misleading a focus on the speed of price adjustment can be becomes evident at the Zero Lower Bound. With nominal interest rates stuck at zero, rapid price adjustment will make the recession worse, not better. Price rigidity may be a condition for the existence of business cycles, but it can have very little to do with their duration.

And as I noted in my last column, the evidence is mounting that poor policy can do more than slow a recovery; it can also permanently reduce our productive capacity.
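
Wren-Lewis's point, that recession length is governed by the policy rule rather than by the speed of price adjustment alone, can be sketched with a one-line gap process. The persistence and feedback numbers are invented for illustration:

```python
# Output gap with intrinsic persistence rho; policy feedback f leans against
# the gap. Recovery time depends on (rho - f), i.e., on the policy rule, not
# on rho alone.
def quarters_to_recover(rho, f, gap=-10.0, threshold=0.1):
    t = 0
    while abs(gap) > threshold:
        gap *= (rho - f)   # next period's gap after the policy response
        t += 1
    return t

slow_prices_no_policy = quarters_to_recover(rho=0.95, f=0.0)     # 90 quarters
slow_prices_good_policy = quarters_to_recover(rho=0.95, f=0.45)  # 7 quarters
```

Same "speed of price adjustment" (the same rho), wildly different recession lengths: with no stabilization the slump drags on for over two decades of quarters, while an active rule ends it in under two years.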

Saturday, November 28, 2015

'Demand, Supply, and Macroeconomic Models'

Paul Krugman on macroeconomic models:

Demand, Supply, and Macroeconomic Models: I’m supposed to do a presentation next week about “shifts in economic models,” which has me trying to systematize my thought about what the crisis and aftermath have and haven’t changed my understanding of macroeconomics. And it seems to me that there is an important theme here: it’s the supply side, stupid. ...

Friday, November 20, 2015

'Some Big Changes in Macroeconomic Thinking from Lawrence Summers'

Adam Posen:

Some Big Changes in Macroeconomic Thinking from Lawrence Summers: ...At a truly fascinating and intense conference on the global productivity slowdown we hosted earlier this week, Lawrence Summers put forward some newly and forcefully formulated challenges to the macroeconomic status quo in his keynote speech. [pdf] ...
The first point Summers raised ... pointed out that a major global trend over the last few decades has been the substantial disemployment—or withdrawal from the workforce—of relatively unskilled workers. ... In other words, it is a real puzzle to observe simultaneously multi-year trends of rising non-employment of low-skilled workers and declining measured productivity growth. ...
Another related major challenge to standard macroeconomics Summers put forward ... came in response to a question about whether he exaggerated the displacement of workers by technology. ... Summers bravely noted that if we suppose the “simple” non-economists who thought technology could destroy jobs without creating replacements in fact were right after all, then the world in some aspects would look a lot like it actually does today...
The third challenge ... Summers raised is perhaps the most profound... In a working paper the Institute just released, Olivier Blanchard, Eugenio Cerutti, and Summers examine essentially all of the recessions in the OECD economies since the 1960s, and find strong evidence that in most cases the level of GDP is lower five to ten years afterward than any prerecession forecast or trend would have predicted. In other words, to quote Summers’ speech..., “the classic model of cyclical fluctuations, that assume that they take place around the given trend is not the right model to begin the study of the business cycle. And [therefore]…the preoccupation of macroeconomics should be on lower frequency fluctuations that have consequences over long periods of time [that is, recessions and their aftermath].”
I have a lot of sympathy for this view. ... The very language we use to speak of business cycles, of trend growth rates, of recoveries to those perhaps non-stationary trends, and so on—which reflects the underlying mental framework of most macroeconomists—would have to be rethought.
Productivity-based growth requires disruption in economic thinking just as it does in the real world.

The full text explains these points in more detail (I left out one point on the measurement of productivity).

Thursday, November 05, 2015

'Public Investment: Has George Started Listening to Economists?'

[Running very late today, so three quick posts to get something up besides links -- I probably chose this one because my name was mentioned. See the sidebar for more new links.]

Simon Wren-Lewis:

Public investment: has George started listening to economists?: I have in the past wondered just how large the majority among academic economists would be for additional public investment right now. The economic case for investing when the cost of borrowing is so cheap (particularly when the government can issue 30 year fixed interest debt) is overwhelming. I had guessed the majority would be pretty large just by personal observation. Economists who are not known for their anti-austerity views, like Ken Rogoff, tend to support additional public investment.
Thanks to a piece by Mark Thoma I now have some evidence. His article is actually about ideological bias in economics, and is well worth reading on that account, but it uses results from the ChicagoBooth survey of leading US economists. I have used this survey’s results on the impact of fiscal policy before, but they have asked a similar question about public investment. It is
“Because the US has underspent on new projects, maintenance, or both, the federal government has an opportunity to increase average incomes by spending more on roads, railways, bridges and airports.”
Not one of the nearly 50 economists surveyed disagreed with this statement. What was interesting was that the economists were under no illusions that the political process in the US would be such that some bad projects would be undertaken as a result (see the follow-up question). Despite this, they still thought increasing investment would raise incomes.
The case for additional public investment is as strong in the UK (and Germany) as it is in the US. Yet since 2010 it appeared the government thought otherwise. ...
However since the election George Osborne seems to have had a change of heart. ...

'Business Cycle Theory vs. Growth Theory'

Nick Rowe:

Business cycle theory vs growth theory: Macroeconomics is divided into (short run) business cycle theory and (long run) growth theory.
Those of us who do business cycle theory have a bit of an inferiority complex (though you might not know it from listening to us argue). Because growth theory seems to be so much more important. Where would you rather live: in a rich country during a recession; or in a poor country during a boom? (Watch the flows of people voting or attempting to vote with their feet if you are not sure how most people would answer.) In the long run, productivity is about the only thing that matters.
We would feel better about ourselves, and what we are studying and teaching, if we could argue that taming the business cycle would improve long run growth.
Notice that I have deliberately personalised this question to make you aware of my personal bias. Macroeconomists like me, who do short run business cycle theory, want to think that what we are doing is important. We want to argue that taming the business cycle would improve long run growth.
(The Great Recession was great for my sort of macro; we haven't had so much fun since the 1970's. The Great Moderation was a boring time for macroeconomists like me, when we seemed to be victims of our own success; all the growth theorists were stealing our limelight.)
Why might business cycles lower the long run growth rate? ...

Tuesday, November 03, 2015

Summers: Advanced Economies are So Sick

Larry Summers:

Advanced economies are so sick we need a new way to think about them: ...Hysteresis Effects: Blanchard, Cerutti, and I look at a sample of over 100 recessions from industrial countries over the last 50 years and examine their impact on long run output levels in an effort to understand what Blanchard and I had earlier called hysteresis effects. We find that in the vast majority of cases output never returns to previous trends. Indeed there appear to be more cases where recessions reduce the subsequent growth of output than where output returns to trend. In other words “super hysteresis,” to use Larry Ball’s term, is more frequent than “no hysteresis.” ...
In subsequent work Antonio Fatas and I have looked at the impact of fiscal policy surprises on long run output and long run output forecasts using a methodology pioneered by Blanchard and Leigh. ... We find that fiscal policy changes have large continuing effects on levels of output suggesting the importance of hysteresis. ...
Towards a New Macroeconomics: My separate comments in the volume develop an idea I have pushed with little success for a long time. Standard New Keynesian macroeconomics essentially abstracts away from most of what is important in macroeconomics. To an even greater extent this is true of the DSGE (dynamic stochastic general equilibrium) models that are the workhorse of central bank staffs and much practically oriented academic work.
Why? New Keynesian models imply that stabilization policies cannot affect the average level of output over time and that the only effect policy can have is on the amplitude of economic fluctuations not on the level of output. This assumption is problematic...
As macroeconomics was transformed in response to the Depression of the 1930s and the inflation of the 1970s, another 40 years later it should again be transformed in response to stagnation in the industrial world. Maybe we can call it the Keynesian New Economics.
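
Summers's hysteresis evidence amounts to a claim about which of two processes output follows. A minimal contrast (the persistence parameter and shock size are invented): in a trend-stationary world a recession's effect on the level of GDP dies out, while under hysteresis the level loss is permanent.

```python
# Gap to the pre-recession trend 40 quarters after a -5% shock.
RHO = 0.7   # cyclical persistence in the trend-stationary case (illustrative)

shock = -5.0
trend_stationary_gap = shock * RHO ** 40   # decays toward zero
hysteresis_gap = shock                     # unit root: the loss is permanent

print(trend_stationary_gap, hysteresis_gap)
```

The Blanchard-Cerutti-Summers finding that output five to ten years later sits below any pre-recession forecast is evidence for the second line, not the first, which is why it undermines the "fluctuations around a given trend" framework.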

Friday, October 30, 2015

'The Missing Lowflation Revolution'

Antonio Fatás:

The missing lowflation revolution: It will soon be eight years since the US Federal Reserve decided to bring its interest rate down to 0%. Other central banks have spent a similar number of years (or much longer in the case of Japan) stuck at the zero lower bound. In these eight years central banks have used all their available tools to increase inflation closer to their target and boost growth, with limited success. GDP growth has been weak or anemic, and there is very little hope that economies will ever go back to their pre-crisis trends.
Some of these trends have challenged the traditional view of academic economists and policy makers about how an economy works. ...
My own sense is that the view among academics and policy makers is not changing fast enough and some are just assuming that this would be a one-time event that will not be repeated in the future (even if we are still not out of the current event!).
The comparison with the 70s, when stagflation produced a large change in the way academics and policy makers thought about their models and about the framework for monetary policy, is striking. During those years a high inflation and low growth environment created a revolution among academics (moving away from the simple Phillips Curve) and policy makers (switching to anti-inflationary and independent central banks). How many more years of zero interest rates will it take to witness a similar change in our economic analysis?

Saturday, September 26, 2015

''A Few Less Obvious Answers'' on What is Wrong with Macroeconomics

From an interview with Olivier Blanchard:

...IMF Survey: In pushing the envelope, you also hosted three major Rethinking Macroeconomics conferences. What were the key insights and what are the key concerns on the macroeconomic front? 
Blanchard: Let me start with the obvious answer: That mainstream macroeconomics had taken the financial system for granted. The typical macro treatment of finance was a set of arbitrage equations, under the assumption that we did not need to look at who was doing what on Wall Street. That turned out to be badly wrong.
But let me give you a few less obvious answers:
The financial crisis raises a potentially existential crisis for macroeconomics. Practical macro is based on the assumption that there are fairly stable aggregate relations, so we do not need to keep track of each individual, firm, or financial institution—that we do not need to understand the details of the micro plumbing. We have learned that the plumbing, especially the financial plumbing, matters: the same aggregates can hide serious macro problems. How do we do macro then?
As a result of the crisis, a hundred intellectual flowers are blooming. Some are very old flowers: Hyman Minsky’s financial instability hypothesis. Kaldorian models of growth and inequality. Some propositions that would have been considered anathema in the past are being proposed by "serious" economists: For example, monetary financing of the fiscal deficit. Some fundamental assumptions are being challenged, for example the clean separation between cycles and trends: Hysteresis is making a comeback. Some of the econometric tools, based on a vision of the world as being stationary around a trend, are being challenged. This is all for the best.
Finally, there is a clear swing of the pendulum away from markets towards government intervention, be it macro prudential tools, capital controls, etc. Most macroeconomists are now solidly in a second best world. But this shift is happening with a twist—that is, with much skepticism about the efficiency of government intervention. ...

'Economics: What Went Right'

Paul Krugman returns to a familiar theme:

Economics: What Went Right: ...I’m at EconEd; here are my slides for later today. The theme of my talk is something I’ve emphasized a lot over the past few years: basic macroeconomics has actually worked remarkably well in the post-crisis world, with those of us who took our Hicks seriously calling the big stuff — the effects of monetary and fiscal policy — right, and those who went with their gut getting it all wrong. ...
One thing I do try is to concede that one piece of the conventional story hasn’t worked that well, namely the Phillips curve, where the “clockwise spirals” of previous protracted large output gaps haven’t materialized. Maybe it’s about what happens at very low inflation rates.
What’s notable about the Fed’s urge to raise rates, however, is that Fed officials, including Janet Yellen, are acting as if they have high confidence in their models of inflation dynamics –which is the one thing we really haven’t done well at recently. I really fear that we’re looking at incestuous amplification here.

Agree about the uncertainty about inflation dynamics, but fear Fed officials will interpret it as risks on the upside that must be nullified through interest rate hikes. As for the Phillips curve, here's a graph from his talk:

[Phillips curve graph from Krugman's slides]

As Krugman says, "Maybe it’s about what happens at very low inflation rates." I would add that the combination of the zero bound, low inflation, and downward wage rigidity may be able to explain the change in the Phillips curve -- I'm not quite ready to give up on it yet.
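
The mechanism I have in mind is simple to state: if downward rigidity keeps nominal wages (and hence prices) from falling, the Phillips relation goes flat once implied inflation hits zero. A stylized version, with a made-up slope and expectations term:

```python
# Phillips curve with a zero floor from downward nominal wage rigidity.
PI_EXPECTED, SLOPE = 1.0, 0.25   # illustrative values

def inflation(output_gap):
    """Inflation given the output gap, floored at zero by wage rigidity."""
    return max(0.0, PI_EXPECTED + SLOPE * output_gap)

print(inflation(-2.0))   # 0.5: modest slack still lowers inflation
print(inflation(-6.0))   # 0.0: the floor binds
print(inflation(-10.0))  # 0.0: deeper slack brings no extra disinflation
```

At low inflation rates even very large output gaps produce no further disinflation, which is one way to read the missing "clockwise spirals" in the post-crisis data.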

More generally, estimating inflation dynamics has been far from successful. For example, in many VAR models (a widely used empirical specification for establishing relationships among macroeconomic series), a shock to the federal funds rate often causes prices to go up (theory says they should go down). This can be overcome somewhat by including commodity prices in the model. The idea is that when the Fed expects inflation to go up it raises the federal funds rate, and since the policy does not completely eliminate the inflation, the data will show a positive correlation between the federal funds rate and inflation. Commodity prices are thought to embody and be sensitive to future expected inflation, so including this variable helps to solve the "price puzzle," as it is known. Even so, the results are highly sensitive to specification, and when you work with these models regularly you come away believing that the estimated price dynamics are not very good at all.
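
The endogeneity story behind the price puzzle can be reproduced in a few lines. This is a toy simulation I constructed, not any published VAR: the Fed raises the rate on a signal of future inflation, the hike only partly offsets the pressure, and the naive correlation between the funds rate and subsequent inflation comes out positive even though the rate's causal effect is negative.

```python
import random

random.seed(0)
n = 5000
rates, next_inflation = [], []
for _ in range(n):
    signal = random.gauss(0, 1)   # inflation pressure the Fed observes
    rate = signal                 # Fed responds to expected inflation
    # The causal effect of the rate on inflation is NEGATIVE (-0.5), but the
    # response only partly offsets the pressure it reacts to.
    pi = signal - 0.5 * rate + random.gauss(0, 0.5)
    rates.append(rate)
    next_inflation.append(pi)

mr, mp = sum(rates) / n, sum(next_inflation) / n
cov = sum((r - mr) * (p - mp) for r, p in zip(rates, next_inflation)) / n
var_r = sum((r - mr) ** 2 for r in rates) / n
var_p = sum((p - mp) ** 2 for p in next_inflation) / n
corr = cov / (var_r * var_p) ** 0.5
print(round(corr, 2))   # positive, despite the negative causal effect
```

Controlling for the signal (the role commodity prices play in actual VARs) would recover the negative causal coefficient; omitting it delivers the puzzle.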

But the Fed must forecast in order to do policy. There are lags (though I've argued they are likely shorter than common wisdom suggests), and the Fed must act before a clear picture emerges. The question is how the Fed should react to such uncertainty about its inflation forecasts. To me -- given the corresponding uncertainties about the state of the labor market and the asymmetric nature of the costs of mistakes about inflation and unemployment (plus the distributional issues -- who gets hurt by each mistake?) -- it counsels patience rather than urgency on the inflation front.

Tuesday, September 15, 2015

'Keynesianism Explained'

Paul Krugman:

Keynesianism Explained: Attacks on Keynesians in general, and on me in particular, rely heavily on an army of straw men — on knocking down claims about what people like me have predicted or asserted that have nothing to do with what we’ve actually said. But maybe we (or at least I) have been remiss, failing to offer a simple explanation of what it’s all about. I don’t mean the models; I mean the policy implications.
So here’s an attempt at a quick summary, followed by a sampling of typical bogus claims.
I would summarize the Keynesian view in terms of four points:
1. Economies sometimes produce much less than they could, and employ many fewer workers than they should, because there just isn’t enough spending. Such episodes can happen for a variety of reasons; the question is how to respond.
2. There are normally forces that tend to push the economy back toward full employment. But they work slowly; a hands-off policy toward depressed economies means accepting a long, unnecessary period of pain.
3. It is often possible to drastically shorten this period of pain and greatly reduce the human and financial losses by “printing money”, using the central bank’s power of currency creation to push interest rates down.
4. Sometimes, however, monetary policy loses its effectiveness, especially when rates are close to zero. In that case temporary deficit spending can provide a useful boost. And conversely, fiscal austerity in a depressed economy imposes large economic losses.
Is this a complicated, convoluted doctrine? ...
But strange things happen in the minds of critics. Again and again we see the following bogus claims about what Keynesians believe:
B1: Any economic recovery, no matter how slow and how delayed, proves Keynesian economics wrong. See [2] above for why that’s illiterate.
B2: Keynesians believe that printing money solves all problems. See [3]: printing money can solve one specific problem, an economy operating far below capacity. Nobody said that it can conjure up higher productivity, or cure the common cold.
B3: Keynesians always favor deficit spending, under all conditions. See [4]: The case for fiscal stimulus is quite restrictive, requiring both a depressed economy and severe limits to monetary policy. That just happens to be the world we’ve been living in lately.
I have no illusions that saying this obvious stuff will stop the usual suspects from engaging in the usual bogosity. But maybe this will help others respond when they do.

I would add:

5. Keynesians are not opposed to supply-side, growth-enhancing policy. The types of taxes that are imposed matter, entrepreneurial activity should be encouraged, and so on. But these arguments should not be used as cover for redistribution of income to the wealthy through tax cuts and other means, or as a means of arguing for cuts to important social service programs. Nor should they be used only to support tax cuts. Infrastructure spending is important for growth, an educated, healthy workforce is more productive, etc., etc. Economic growth is about much more than tax cuts for wealthy political donors.

On the other side, I would have added a point to B3:

B3a: Keynesians do not favor large government. They believe that deficits should be used to stimulate the economy in severe recessions (when monetary policy alone is not enough), but they also believe that the deficits should be paid for during good times (shave the peaks to fill the troughs and stabilize the path of GDP and employment). We haven't been very good at the "pay for it during good times" part, but Democrats can hardly be blamed for that (see tax cuts for the wealthy for openers).

Anything else, e.g. perhaps something like "Keynesians do not believe that helping people in need undermines their desire to work"?

Thursday, August 27, 2015

'The Day Macroeconomics Changed'

Simon Wren-Lewis:

The day macroeconomics changed: It is of course ludicrous, but who cares. The day of the Boston Fed conference in 1978 is fast taking on a symbolic significance. It is the day that Lucas and Sargent changed how macroeconomics was done. Or, if you are Paul Romer, it is the day that the old guard spurned the ideas of the newcomers, and ensured we had a New Classical revolution in macro rather than a New Classical evolution. Or if you are Ray Fair..., who was at the conference, it is the day that macroeconomics started to go wrong.
Ray Fair is a bit of a hero of mine. ...
I agree with Ray Fair that what he calls Cowles Commission (CC) type models, and I call Structural Econometric Model (SEM) type models, together with the single equation econometric estimation that lies behind them, still have a lot to offer, and that academic macro should not have turned its back on them. Having spent the last fifteen years working with DSGE models, I am more positive about their role than Fair is. Unlike Fair, I want “more bells and whistles on DSGE models”. I also disagree about rational expectations...
Three years ago, when Andy Haldane suggested that DSGE models were partly to blame for the financial crisis, I wrote a post that was critical of Haldane. What I thought then, and continue to believe, is that the Bank had the information and resources to know what was happening to bank leverage, and it should not be using DSGE models as an excuse for not being more public about their concerns at the time.
However, if we broaden this out from the Bank to the wider academic community, I think he has a legitimate point. ...
What about the claim that only internally consistent DSGE models can give reliable policy advice? For another project, I have been rereading an AEJ Macro paper written in 2008 by Chari et al, where they argue that New Keynesian models are not yet useful for policy analysis because they are not properly microfounded. They write “One tradition, which we prefer, is to keep the model very simple, keep the number of parameters small and well-motivated by micro facts, and put up with the reality that such a model neither can nor should fit most aspects of the data. Such a model can still be very useful in clarifying how to think about policy.” That is where you end up if you take a purist view about internal consistency, the Lucas critique and all that. It in essence amounts to the following approach: if I cannot understand something, it is best to assume it does not exist.

Wednesday, August 26, 2015

Ray Fair: The Future of Macro

Ray Fair:

The Future of Macro: There is an interesting set of recent blogs--- Paul Romer 1, Paul Romer 2, Brad DeLong, Paul Krugman, Simon Wren-Lewis, and Robert Waldmann---on the history of macro beginning with the 1978 Boston Fed conference, with Lucas and Sargent versus Solow. As Romer notes, I was at this conference and presented a 97-equation model. This model was in the Cowles Commission (CC) tradition, which, as the blogs note, quickly went out of fashion after 1978. (In the blogs, models in the CC tradition are generally called simulation models or structural econometric models or old fashioned models. Below I will call them CC models.)
I will not weigh in on who was responsible for what. Instead, I want to focus on what future direction macro research might take. There is unhappiness in the blogs, to varying degrees, with all three types of models: DSGE, VAR, CC. Also, Wren-Lewis points out that while other areas of economics have become more empirical over time, macroeconomics has become less. The aim is for internal theoretical consistency rather than the ability to track the data.
I am one of the few academics who has continued to work with CC models. They were rejected for basically three reasons: they do not assume rational expectations (RE), they are not identified, and the theory behind them is ad hoc. This sounds serious, but I think it is in fact not. ...

He goes on to explain why. He concludes with:

... What does this imply about the best course for future research? I don't get a sense from the blog discussions that either the DSGE methodology or the VAR methodology is the way to go. Of course, no one seems to like the CC methodology either, but, as I argue above, I think it has been dismissed too easily. I have three recent methodological papers arguing for its use: Has Macro Progressed?, Reflections on Macroeconometric Modeling, and Information Limits of Aggregate Data. I also show in Household Wealth and Macroeconomic Activity: 2008--2013 that CC models can be used to examine a number of important questions about the 2008--2009 recession, questions that are hard to answer using DSGE or VAR models.
So my suggestion for future macro research is not more bells and whistles on DSGE models, but work specifying and estimating stochastic equations in the CC tradition. Alternative theories can be tested and hopefully progress can be made on building models that explain the data well. We have much more data now and better techniques than we did in 1978, and we should be able to make progress and bring macroeconomics back to its empirical roots.
For those who want more detail, I have gathered all of my research in macro in one place: Macroeconometric Modeling, November 11, 2013.
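For readers unfamiliar with what "specifying and estimating stochastic equations" means in practice, here is a toy sketch: a single stand-in behavioral equation (a made-up aggregate consumption function) estimated by ordinary least squares on simulated data. The specification, coefficients, and variable names are invented for illustration and have nothing to do with Fair's actual model.

```python
# Toy sketch of a single stochastic equation in the CC tradition: a
# stand-in aggregate consumption function, estimated by OLS on simulated
# data. The specification and numbers are invented for illustration.
import numpy as np

rng = np.random.default_rng(1)
n = 200
income = rng.normal(100.0, 10.0, n)
wealth = rng.normal(500.0, 50.0, n)
# "True" data-generating equation, with a stochastic error term:
consumption = 10.0 + 0.6 * income + 0.02 * wealth + rng.normal(0.0, 2.0, n)

# Estimate C = b0 + b1*income + b2*wealth by least squares.
X = np.column_stack([np.ones(n), income, wealth])
beta_hat, *_ = np.linalg.lstsq(X, consumption, rcond=None)
print(beta_hat.shape)  # (3,): intercept and two slope estimates
```

A CC-style model is then a system of many such estimated equations simulated together, each specified with an eye on the data — the part of the tradition Fair argues was dismissed too easily.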

Sunday, August 23, 2015

''Young Economists Feel They Have to be Very Cautious''

From an interview of Paul Romer in the WSJ:

...Q: What kind of feedback have you received from colleagues in the profession?

A: I tried these ideas on a few people, and the reaction I basically got was “don’t make waves.” As people have had time to react, I’ve been hearing a bit more from people who appreciate me bringing these issues to the forefront. The most interesting feedback is from young economists who say that they feel that they have to be very cautious, and they don’t want to get somebody cross at them. There’s a concern by young economists that if they deviate from what’s acceptable, they’ll get in trouble. That also seemed to me to be a sign of something that is really wrong. Young people are the ones who often come in and say, “You all have been thinking about this the wrong way, here’s a better way to think about it.”

Q: Are there any areas where research or refinements in methodology have brought us closer to understanding the economy?

A: There was an interesting [2013] Nobel prize in [economics], where they gave the prize to people who generally came to very different conclusions about how financial markets work. Gene Fama ... got it for the efficient markets hypothesis. Robert Shiller ... for this view that these markets are not efficient...

It was striking because usually when you give a prize, it’s because in the sciences, you’ve converged to a consensus. ...

Friday, August 21, 2015

'Scientists Do Not Demonize Dissenters. Nor Do They Worship Heroes.'

Paul Romer's latest entry on "mathiness" in economics ends with:

Reactions to Solow’s Choice: ...Politics maps directly onto our innate moral machinery. Faced with any disagreement, our moral systems respond by classifying people into our in-group and the out-group. They encourage us to be loyal to members of the in-group and hostile to members of the out-group. The leaders of an in-group demand deference and respect. In selecting leaders, we prize unwavering conviction.
Science can’t function with the personalization of disagreement that these reactions encourage. The question of whether Joan Robinson is someone who is admired and respected as a scientist has to be separated from the question about whether she was right that economists could reason about rates of return in a model that does not have an explicit time dimension.
The only in-group versus out-group distinction that matters in science is the one that distinguishes people who can live by the norms of science from those who cannot. Feynman integrity is the marker of an insider.
In this group, it is flexibility that commands respect, not unwavering conviction. Clearly articulated disagreement is encouraged. Anyone’s claim is subject to challenge. Someone who is right about A can be wrong about B.
Scientists do not demonize dissenters. Nor do they worship heroes.

[The reference to Joan Robinson is clarified in the full text.]

Monday, August 17, 2015

Stiglitz: Towards a General Theory of Deep Downturns

This is the abstract, introduction, and final section of a recent paper by Joe Stiglitz on theoretical models of deep depressions (as he notes, it's "an extension of the Presidential Address to the International Economic Association"):

Towards a General Theory of Deep Downturns, by Joseph E. Stiglitz, NBER Working Paper No. 21444, August 2015: Abstract This paper, an extension of the Presidential Address to the International Economic Association, evaluates alternative strands of macro-economics in terms of the three basic questions posed by deep downturns: What is the source of large perturbations? How can we explain the magnitude of volatility? How do we explain persistence? The paper argues that while real business cycles and New Keynesian theories with nominal rigidities may help explain certain historical episodes, alternative strands of New Keynesian economics focusing on financial market imperfections, credit, and real rigidities provides a more convincing interpretation of deep downturns, such as the Great Depression and the Great Recession, giving a more plausible explanation of the origins of downturns, their depth and duration. Since excessive credit expansions have preceded many deep downturns, particularly important is an understanding of finance, the credit creation process and banking, which in a modern economy are markedly different from the way envisioned in more traditional models.
Introduction The world has been plagued by episodic deep downturns. The crisis that began in 2008 in the United States was the most recent, the deepest and longest in three quarters of a century. It came in spite of alleged “better” knowledge of how our economic system works, and belief among many that we had put economic fluctuations behind us. Our economic leaders touted the achievement of the Great Moderation.[2] As it turned out, belief in those models actually contributed to the crisis. It was the assumption that markets were efficient and self-regulating and that economic actors had the ability and incentives to manage their own risks that had led to the belief that self-regulation was all that was required to ensure that the financial system worked well, and that there was no need to worry about a bubble. The idea that the economy could, through diversification, effectively eliminate risk contributed to complacency — even after it was evident that there had been a bubble. Indeed, even after the bubble broke, Bernanke could boast that the risks were contained.[3] These beliefs were supported by (pre-crisis) DSGE models — models which may have done well in more normal times, but had little to say about crises. Of course, almost any “decent” model would do reasonably well in normal times. And it mattered little if, in normal times, one model did a slightly better job in predicting next quarter’s growth. What matters is predicting — and preventing — crises, episodes in which there is an enormous loss in well-being. These models did not see the crisis coming, and they had given confidence to our policy makers that, so long as inflation was contained — and monetary authorities boasted that they had done this — the economy would perform well. At best, they can be thought of as (borrowing the term from Guzman (2014)) “models of the Great Moderation,” predicting “well” so long as nothing unusual happens.
More generally, the DSGE models have done a poor job explaining the actual frequency of crises.[4]
Of course, deep downturns have marked capitalist economies since the beginning. It took enormous hubris to believe that the economic forces which had given rise to crises in the past were either not present, or had been tamed, through sound monetary and fiscal policy.[5] It took even greater hubris given that in many countries conservatives had succeeded in dismantling the regulatory regimes and automatic stabilizers that had helped prevent crises since the Great Depression. It is noteworthy that my teacher, Charles Kindleberger, in his great study of the booms and panics that afflicted market economies over the past several hundred years had noted similar hubris exhibited in earlier crises. (Kindleberger, 1978)
Those who attempted to defend the failed economic models and the policies which were derived from them suggested that no model could (or should) predict well a “once in a hundred year flood.” But it was not just a hundred year flood — crises have become common. It was not just something that had happened to the economy. The crisis was man-made — created by the economic system. Clearly, something is wrong with the models.
Studying crises is important, not just to prevent these calamities and to understand how to respond to them — though I do believe that the same inadequate models that failed to predict the crisis also failed in providing adequate responses. (Although those in the US Administration boast about having prevented another Great Depression, I believe the downturn was certainly far longer, and probably far deeper, than it needed to have been.) I also believe understanding the dynamics of crises can provide us insight into the behavior of our economic system in less extreme times.
This lecture consists of three parts. In the first, I will outline the three basic questions posed by deep downturns. In the second, I will sketch the three alternative approaches that have competed with each other over the past three decades, suggesting that one is a far better basis for future research than the other two. The final section will center on one aspect of that third approach that I believe is crucial — credit. I focus on the capitalist economy as a credit economy, and how viewing it in this way changes our understanding of the financial system and monetary policy. ...

He concludes with:

IV. The crisis in economics The 2008 crisis was not only a crisis in the economy, but it was also a crisis for economics — or at least that should have been the case. As we have noted, the standard models didn’t do very well. The criticism is not just that the models did not anticipate or predict the crisis (even shortly before it occurred); they did not contemplate the possibility of a crisis, or at least a crisis of this sort. Because markets were supposed to be efficient, there weren’t supposed to be bubbles. The shocks to the economy were supposed to be exogenous: this one was created by the market itself. Thus, the standard model said the crisis couldn’t or wouldn’t happen; and the standard model had no insights into what generated it.
Not surprisingly, as we again have noted, the standard models provided inadequate guidance on how to respond. Even after the bubble broke, it was argued that diversification of risk meant that the macroeconomic consequences would be limited. The standard theory also has had little to say about why the downturn has been so prolonged: Years after the onset of the crisis, large parts of the world are operating well below their potential. In some countries and in some dimension, the downturn is as bad or worse than the Great Depression. Moreover, there is a risk of significant hysteresis effects from protracted unemployment, especially of youth.
The Real Business Cycle and New Keynesian Theories got off to a bad start. They originated out of work undertaken in the 1970s attempting to reconcile the two seemingly distant branches of economics, macro-economics, centering on explaining the major market failure of unemployment, and microeconomics, the centerpiece of which was the Fundamental Theorems of Welfare Economics, demonstrating the efficiency of markets.[66] Real Business Cycle Theory (and its predecessor, New Classical Economics) took one route: using the assumptions of standard micro-economics to construct an analysis of the aggregative behavior of the economy. In doing so, they left Hamlet out of the play: almost by assumption unemployment and other market failures didn’t exist. The timing of their work couldn’t have been worse: for it was just around the same time that economists developed alternative micro-theories, based on asymmetric information, game theory, and behavioral economics, which provided better explanations of a wide range of micro-behavior than did the traditional theory on which the “new macro-economics” was being constructed. At the same time, Sonnenschein (1972) and Mantel (1974) showed that the standard theory provided essentially no structure for macro-economics — essentially any demand or supply function could have been generated by a set of diverse rational consumers. It was the unrealistic assumption of the representative agent that gave theoretical structure to the macro-economic models that were being developed. (As we noted, New Keynesian DSGE models were but a simple variant of these Real Business Cycles, assuming nominal wage and price rigidities — with explanations, we have suggested, that were hardly persuasive.)
There are alternative models to both Real Business Cycles and the New Keynesian DSGE models that provide better insights into the functioning of the macroeconomy, and are more consistent with micro-behavior, with new developments of micro-economics, with what has happened in this and other deep downturns. While these new models differ from the older ones in a multitude of ways, at the center of these models is a wide variety of financial market imperfections and a deep analysis of the process of credit creation. These models provide alternative (and I believe better) insights into what kinds of macroeconomic policies would restore the economy to prosperity and maintain macro-stability.
This lecture has attempted to sketch some elements of these alternative approaches. There is a rich research agenda ahead.

Tuesday, August 11, 2015

Macroeconomics: The Roads Not Yet Taken

My editor suggested that I might want to write about an article in New Scientist, After the crash, can biologists fix economics?, so I did:

Macroeconomics: The Roads Not Yet Taken: Anyone who is even vaguely familiar with economics knows that modern macroeconomic models did not fare well before and during the Great Recession. For example, when the recession hit many of us reached into the policy response toolkit provided by modern macro models and came up mostly empty.
The problem was that modern models were built to explain periods of mild economic fluctuations, a period known as the Great Moderation, and while the models provided very good policy advice in that setting they had little to offer in response to major economic downturns. That changed to some extent as the recession dragged on and modern models were quickly amended to incorporate important missing elements, but even then the policy advice was far from satisfactory and mostly echoed what we already knew from the “old-fashioned” Keynesian model. (The Keynesian model was built to answer the important policy questions that come with major economic downturns, so it is not surprising that amended modern models reached many of the same conclusions.)
How can we fix modern models? ...

The Macroeconomic Divide

Paul Krugman:

Trash Talk and the Macroeconomic Divide: ... In Lucas and Sargent, much is made of stagflation; the coexistence of inflation and high unemployment is their main, indeed pretty much only, piece of evidence that all of Keynesian economics is useless. That was wrong, but never mind; how did they respond in the face of strong evidence that their own approach didn’t work?
Such evidence wasn’t long in coming. In the early 1980s the Federal Reserve sharply tightened monetary policy; it did so openly, with much public discussion, and anyone who opened a newspaper should have been aware of what was happening. The clear implication of Lucas-type models was that such an announced, well-understood monetary change should have had no real effect, being reflected only in the price level.
In fact, however, there was a very severe recession — and a dramatic recovery once the Fed, again quite openly, shifted toward monetary expansion.
These events definitely showed that Lucas-type models were wrong, and also that anticipated monetary shocks have real effects. But there was no reconsideration on the part of the freshwater economists; my guess is that they were in part trapped by their earlier trash-talking. Instead, they plunged into real business cycle theory (which had no explanation for the obvious real effects of Fed policy) and shut themselves off from outside ideas. ...

Tuesday, August 04, 2015

'Sarcasm and Science'

On the road again, so just a couple of quick posts. This is Paul Krugman:

Sarcasm and Science: Paul Romer continues his discussion of the wrong turn of freshwater economics, responding in part to my own entry, and makes a surprising suggestion — that Lucas and his followers were driven into their adversarial style by Robert Solow’s sarcasm...
Now, it’s true that people can get remarkably bent out of shape at the suggestion that they’re being silly and foolish. ...
But Romer’s account of the great wrong turn still sounds much too contingent to me...
At least as I perceived it then — and remember, I was a grad student as much of this was going on — there were two other big factors.
First, there was a political component. Equilibrium business cycle theory denied that fiscal or monetary policy could play a useful role in managing the economy, and this was a very appealing conclusion on one side of the political spectrum. This surely was a big reason the freshwater school immediately declared total victory over Keynes well before its approach had been properly vetted, and why it could not back down when the vetting actually took place and the doctrine was found wanting.
Second — and this may be less apparent to non-economists — there was the toolkit factor. Lucas-type models introduced a new set of modeling and mathematical tools — tools that required a significant investment of time and effort to learn, but which, once learned, let you impress everyone with your technical proficiency. For those who had made that investment, there was a real incentive to insist that models using those tools, and only models using those tools, were the way to go in all future research. ...
And of course at this point all of these factors have been greatly reinforced by the law of diminishing disciples: Lucas’s intellectual grandchildren are utterly unable to consider the possibility that they might be on the wrong track.

Sunday, August 02, 2015

'Freshwater’s Wrong Turn'

Paul Krugman follows up on Paul Romer's latest attack on "mathiness":

Freshwater’s Wrong Turn (Wonkish): Paul Romer has been writing a series of posts on the problem he calls “mathiness”, in which economists write down fairly hard-to-understand mathematical models accompanied by verbal claims that don’t actually match what’s going on in the math. Most recently, he has been recounting the pushback he’s getting from freshwater macro types, who see him as allying himself with evil people like me — whereas he sees them as having turned away from science toward a legalistic, adversarial form of pleading.
You can guess where I stand on this. But in his latest, he notes some of the freshwater types appealing to their glorious past, claiming that Robert Lucas in particular has a record of intellectual transparency that should insulate him from criticism now. PR replies that Lucas once was like that, but no longer, and asks what happened.
Well, I’m pretty sure I know the answer. ...

It's hard to do an extract capturing all the points, so you'll likely want to read the full post, but in summary:

So what happened to freshwater, I’d argue, is that a movement that started by doing interesting work was corrupted by its early hubris; the braggadocio and trash-talking of the 1970s left its leaders unable to confront their intellectual problems, and sent them off on the path Paul now finds so troubling.

Recent tweets, email, etc. in response to posts I've done on mathiness reinforce just how unwilling many are to confront their tribalism. In the past, I've blamed the problems in macro on, in part, the sociology within the profession (leading to a less than scientific approach to problems as each side plays the advocacy game) and nothing that has happened lately has altered that view.

Saturday, August 01, 2015

'Microfoundations 2.0?'

Daniel Little:

Microfoundations 2.0?: The idea that hypotheses about social structures and forces require microfoundations has been around for at least 40 years. Maarten Janssen’s New Palgrave essay on microfoundations documents the history of the concept in economics; link. E. Roy Weintraub was among the first to emphasize the term within economics, with his 1979 Microfoundations: The Compatibility of Microeconomics and Macroeconomics. During the early 1980s the contributors to analytical Marxism used the idea to attempt to give greater grip to some of Marx's key explanations (falling rate of profit, industrial reserve army, tendency towards crisis). Several such strategies are represented in John Roemer's Analytical Marxism. My own The Scientific Marx (1986) and Varieties of Social Explanation (1991) took up the topic in detail and relied on it as a basic tenet of social research strategy. The concept is strongly compatible with Jon Elster's approach to social explanation in Nuts and Bolts for the Social Sciences (1989), though the term itself does not appear in this book or in the 2007 revised edition.

Here is Janssen's description in the New Palgrave of the idea of microfoundations in economics:

The quest to understand microfoundations is an effort to understand aggregate economic phenomena in terms of the behavior of individual economic entities and their interactions. These interactions can involve both market and non-market interactions.
In The Scientific Marx the idea was formulated along these lines:
Marxist social scientists have recently argued, however, that macro-explanations stand in need of microfoundations; detailed accounts of the pathways by which macro-level social patterns come about. (1986: 127)

The requirement of microfoundations is both metaphysical -- our statements about the social world need to admit of microfoundations -- and methodological -- it suggests a research strategy along the lines of Coleman's boat (link). This is a strategy of disaggregation, a "dissecting" strategy, and a non-threatening strategy of reduction. (I am thinking here of the very sensible ideas about the scientific status of reduction advanced in William Wimsatt's "Reductive Explanation: A Functional Account"; link).

The emphasis on the need for microfoundations is a very logical implication of the position of "ontological individualism" -- the idea that social entities and powers depend upon facts about individual actors in social interactions and nothing else. (My own version of this idea is the notion of methodological localism; link.) It is unsupportable to postulate disembodied social entities, powers, or properties for which we cannot imagine an individual-level substrate. So it is natural to infer that claims about social entities need to be accompanied in some fashion by an account of how they are embodied at the individual level; and this is a call for microfoundations. (As noted in an earlier post, Brian Epstein has mounted a very challenging argument against ontological individualism; link.)
Another reason that the microfoundations idea is appealing is that it is a very natural way of formulating a core scientific question about the social world: "How does it work?" To provide microfoundations for a high-level social process or structure (for example, the falling rate of profit), we are looking for a set of mechanisms at the level of a set of actors within a set of social arrangements that result in the observed social-level fact. A call for microfoundations is a call for mechanisms at a lower level, answering the question, "How does this process work?"

In fact, the demand for microfoundations appears to be analogous to the question, why is glass transparent? We want to know what it is about the substrate at the individual level that constitutes the macro-fact of glass transmitting light. Organization type A is prone to normal accidents. What is it about the circumstances and actions of individuals in A-organizations that increases the likelihood of normal accidents?

One reason why the microfoundations concept was specifically appealing in application to Marx's social theories in the 1970s was the fact that great advances were being made in the field of collective action theory. Then-current interpretations of Marx's theories were couched at a highly structural level; but it seemed clear that it was necessary to identify the processes through which class interest, class conflict, ideologies, or states emerged in concrete terms at the individual level. (This is one reason I found E. P. Thompson's The Making of the English Working Class (1966) so enlightening.) Advances in game theory (assurance games, prisoners' dilemmas), Mancur Olson's demonstration of the gap between group interest and individual interest in The Logic of Collective Action: Public Goods and the Theory of Groups (1965), Thomas Schelling's brilliant unpacking of puzzling collective behavior onto underlying individual behavior in Micromotives and Macrobehavior (1978), Russell Hardin's further exposition of collective action problems in Collective Action (1982), and Robert Axelrod's discovery of the underlying individual behaviors that produce cooperation in The Evolution of Cooperation (1984) provided social scientists with new tools for reconstructing complex collective phenomena based on simple assumptions about individual actors. These were very concrete analytical resources that promised to help further explanations of complex social behavior. They provided a degree of confidence that important sociological questions could be addressed using a microfoundations framework.
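The flavor of those collective-action results can be seen in a few lines of code: an iterated prisoner's dilemma in which tit-for-tat, the strategy Axelrod's tournaments made famous, sustains cooperation with itself but is only exploited once by a pure defector. This is a toy illustration using the conventional payoff matrix; the strategy names and round count are my own choices, not anything from the works cited.

```python
# Iterated prisoner's dilemma with the conventional payoffs:
# mutual cooperation 3, mutual defection 1, defect-against-cooperator 5/0.
PAYOFF = {("C", "C"): (3, 3), ("C", "D"): (0, 5),
          ("D", "C"): (5, 0), ("D", "D"): (1, 1)}

def tit_for_tat(opponent_history):
    """Cooperate first, then copy the opponent's previous move."""
    return "C" if not opponent_history else opponent_history[-1]

def always_defect(opponent_history):
    return "D"

def play(strat_a, strat_b, rounds=10):
    hist_a, hist_b = [], []  # each strategy sees only the opponent's moves
    score_a = score_b = 0
    for _ in range(rounds):
        move_a, move_b = strat_a(hist_b), strat_b(hist_a)
        pa, pb = PAYOFF[(move_a, move_b)]
        score_a += pa
        score_b += pb
        hist_a.append(move_a)
        hist_b.append(move_b)
    return score_a, score_b

print(play(tit_for_tat, tit_for_tat))    # mutual cooperation: (30, 30)
print(play(tit_for_tat, always_defect))  # exploited once, then defects: (9, 14)
```

Nothing here is a model of class conflict, of course; the point is just how directly these frameworks map group-level outcomes (sustained cooperation, or its collapse) onto simple assumptions about individual actors.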

There are several important recent challenges to aspects of the microfoundations approach, however.

So what are the recent challenges? First, there is the idea that social properties are sometimes emergent in a strong sense: not derivable from facts about the components. This would seem to imply that microfoundations are not possible for such properties.

Second, there is the idea that some meso entities have stable causal properties that do not require explicit microfoundations in order to be scientifically useful. (An example would be Perrow's claim that certain forms of organizations are more conducive to normal accidents than others.) If we take this idea very seriously, then perhaps microfoundations are not crucial in such theories.

Third, there is the idea that meso entities may sometimes exert downward causation: they may influence events in the substrate which in turn influence other meso states, implying that there will be some meso-level outcomes for which there cannot be microfoundations exclusively located at the substrate level.

All of this implies that we need to take a fresh look at the theory of microfoundations. Is there a role for this concept in a research metaphysics in which only a very weak version of ontological individualism is postulated; where we give some degree of autonomy to meso-level causes; where we countenance either a weak or strong claim of emergence; and where we admit of full downward causation from some meso-level structures to patterns of individual behavior?

In one sense my own thinking about microfoundations has already incorporated some of these concerns; I've arrived at "microfoundations 1.1" in my own formulations. In particular, I have put aside the idea that explanations must incorporate microfoundations and instead embraced the weaker requirement of availability of microfoundations (link). Essentially I relaxed the requirement to stipulate only that we must be confident that microfoundations exist, without actually producing them. And I've relied on the idea of "relative explanatory autonomy" to excuse the sociologist from the need to reproduce the microfoundations underlying the claim he or she advances (link).

But is this enough? There are weaker positions that could serve to replace the MF thesis. For now, the question is this: does the concept of microfoundations continue to do important work in the meta-theory of the social sciences?

I've talked about this many times, but it's worth making this point about aggregating from individual agents to macroeconomic aggregates once again (it addresses, for one, the emergent properties objection above; the aggregation problem is also the reason representative agent models are used, since they seem to sidestep it). This is from Kevin Hoover:

... Exact aggregation requires that utility functions be identical and homothetic … Translated into behavioral terms, it requires that every agent subject to aggregation have the same preferences (you must share the same taste for chocolate with Warren Buffett) and those preferences must be the same except for a scale factor (Warren Buffett with an income of $10 billion per year must consume one million times as much chocolate as Warren Buffett with an income of $10,000 per year). This is not the world that we live in. The Sonnenschein-Mantel-Debreu theorem shows theoretically that, in an idealized general-equilibrium model in which each individual agent has a regularly specified preference function, aggregate excess demand functions inherit only a few of the regularity properties of the underlying individual excess demand functions: continuity, homogeneity of degree zero (i.e., the independence of demand from simple rescalings of all prices), Walras’s law (i.e., the sum of the value of all excess demands is zero), and that demand rises as price falls (i.e., that demand curves, ceteris paribus income effects, are downward sloping) … These regularity conditions are very weak and put so few restrictions on aggregate relationships that the theorem is sometimes called “the anything goes theorem.”
The importance of the theorem for the representative-agent model is that it cuts off any facile analogy between even empirically well-established individual preferences and preferences that might be assigned to a representative agent to rationalize observed aggregate demand. The theorem establishes that, even in the most favorable case, there is a conceptual chasm between the microeconomic analysis and the macroeconomic analysis. The reasoning of the representative-agent modelers would be analogous to a physicist attempting to model the macro-behavior of a gas by treating it as a single, room-size molecule. The theorem demonstrates that there is no warrant for the notion that the behavior of the aggregate is just the behavior of the individual writ large: the interactions among the individual agents, even in the most idealized model, shape in an exceedingly complex way the behavior of the aggregate economy. Not only does the representative-agent model fail to provide an analysis of those interactions, but it seems likely that they will defy an analysis that insists on starting with the individual, and it is certain that no one knows at this point how to begin to provide an empirically relevant analysis on that basis.
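The failure of exact aggregation that Hoover describes can be illustrated with a minimal sketch; the expenditure shares, incomes, and price below are hypothetical numbers. Two Cobb-Douglas consumers have homothetic but non-identical preferences, and aggregate demand at fixed prices then depends on how a given total income is distributed across them, so no representative agent endowed with the total income can rationalize it:

```python
# Each Cobb-Douglas consumer spends a fixed share of income on good x.
# Preferences are homothetic but NOT identical (different shares), which
# violates the conditions for exact aggregation that Hoover describes.

def demand_x(share, income, price_x):
    """Cobb-Douglas demand for good x: spend `share` of income on x."""
    return share * income / price_x

price_x = 1.0
total_income = 100.0  # the same aggregate income in both scenarios

# Scenario 1: the high-share consumer (share 0.8) holds most of the income.
d1 = demand_x(0.8, 75.0, price_x) + demand_x(0.2, 25.0, price_x)  # 60 + 5

# Scenario 2: the same total income, redistributed toward the low-share consumer.
d2 = demand_x(0.8, 25.0, price_x) + demand_x(0.2, 75.0, price_x)  # 20 + 15

# Aggregate demand differs even though total income and the price are unchanged,
# so no representative agent whose demand depends only on (total_income, price_x)
# can reproduce both observations.
assert abs(d1 - 65.0) < 1e-9
assert abs(d2 - 35.0) < 1e-9
assert d1 != d2
```

The point is exactly the one in the quoted passage: with heterogeneous agents, the distribution of income becomes an argument of aggregate demand, and the representative-agent shortcut discards it.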

Sunday, July 26, 2015

'The F Story about the Great Inflation'

Simon Wren-Lewis:

The F story about the Great Inflation: Here F could stand for folk. The story that is often told by economists to their students goes as follows. After Phillips discovered his curve, which relates inflation to unemployment, Samuelson and Solow in 1960 suggested this implied a trade-off that policymakers could use. They could permanently have a bit less unemployment at the cost of a bit more inflation. Policymakers took up that option, but then could not understand why inflation didn’t just go up a bit, but kept on going up and up. Along came Milton Friedman to the rescue, who in a 1968 presidential address argued that inflation also depended on inflation expectations, which meant the long run Phillips curve was vertical and there was no permanent inflation unemployment trade-off. Policymakers then saw the light, and the steady rise in inflation seen in the 1960s and 1970s came to an end.
This is a neat little story, particularly if you like the idea that all great macroeconomic disasters stem from errors in mainstream macroeconomics. However even a half awake student should spot one small difficulty with this tale. Why did it take over 10 years for Friedman’s wisdom to be adopted by policymakers, while Samuelson and Solow’s alleged mistake seems to have been adopted quickly? Even if you think that the inflation problem only really started in the 1970s that imparts a 10 year lag into the knowledge transmission mechanism, which is a little strange.
However none of that matters, because this folk story is simply untrue. There has been some discussion of this in blogs (by Robert Waldmann in particular - see Mark Thoma here), and the best source on this is another F: James Forder. There are papers (e.g. here), but the most comprehensive source is now his book, which presents an exhaustive study of this folk story. It is, he argues, untrue in every respect. Not only did Samuelson and Solow not argue that there was a permanent inflation unemployment trade-off that policymakers could exploit, policymakers never believed there was such a trade-off. So how did this folk story arise? Quite simply from another F: Friedman himself, in his Nobel Prize lecture in 1977.
Forder discusses much else in his book, including the extent to which Friedman’s 1968 emphasis on the importance of expectations was particularly original (it wasn’t). He also describes how and why he thinks Friedman’s story became so embedded that it became folklore....