Thursday, October 12, 2017
There is a conference on Rethinking Macroeconomic Policy "coordinated by Olivier Blanchard ...and Lawrence H. Summers..." taking place today and tomorrow (wanted to go, but couldn't).
"Academic experts and policymakers will address the challenges to macroeconomic thinking and policymaking that today’s economic environment presents–low inflation despite low unemployment, the apparent interactions of rising inequality and stagnating productivity, and the unresponsiveness of long-term interest rates to rising public debt, among others." [Conference program, papers, presentations, and conference webcast.]
Here are links to the first two papers presented at the conference. First, Olivier Blanchard and Lawrence Summers:
Rethinking Stabilization Policy. Back to the Future (Preliminary): Nearly ten years after the onset of the Great Financial Crisis, both researchers and policy makers are still assessing the policy implications of the crisis and its aftermath. Previous major crises, from the Great Depression to the stagflation of the 1970s, profoundly changed both macroeconomics and macroeconomic policy. The question is whether this crisis should and will have similar effects.
We believe it should, although we are less sure it will. Rather obviously, the crisis has forced macroeconomists to (re)discover the role and the complexity of the financial sector, and the danger of financial crises. But the lessons should go largely beyond this, and force us to question a number of cherished beliefs. Among other things, the events of the last ten years have put into question the presumption that economies are self-stabilizing, have raised again the issue of whether temporary shocks can have permanent effects, and have shown the importance of nonlinearities.
These call for a major reappraisal of macroeconomic thinking and macroeconomic policy. As the paper is a curtain raiser for a conference that will look in more detail at the implications for specific policies, we make no attempt at being encyclopedic and feel free to pick and choose the issues which we see as most salient. ...
Ben Bernanke posted a summary of his paper on his blog at Brookings:
Temporary price-level targeting: An alternative framework for monetary policy: Low nominal interest rates, low inflation, and slow economic growth pose challenges to central bankers. In particular, with estimates of the long-run equilibrium level of the real interest rate quite low, the next recession may occur at a time when the Fed has little room to cut short-term rates. As I have written previously and recent research has explored, problems associated with the zero-lower bound (ZLB) on interest rates could be severe and enduring. While the Fed has other useful policies in its toolkit such as quantitative easing and forward guidance, I am not confident that the current monetary toolbox would prove sufficient to address a sharp downturn. I am therefore sympathetic to the view of San Francisco Fed President John Williams and others that we should be thinking now about adjusting the framework in which monetary policy is conducted, to provide more policy “space” in the future. In a paper presented at the Peterson Institute for International Economics, I propose an option for an alternative monetary framework that I call a temporary price-level target—temporary, because it would apply only at times when short-term interest rates are at or very near zero.
To explain my proposal, I’ll begin by briefly discussing two other ideas for changing the monetary framework: raising the Fed’s inflation target above the current 2 percent level, and instituting a price-level target that would operate at all times. (See my paper for more details.) ...
Wednesday, August 09, 2017
Continuing with recent posts on models with multiple equilibria, this is by Luis C. Corchón (link is to a draft)
A Malthus-Swan Model of Economic Growth, by Luis C. Corchón (Departamento de Economía, Universidad Carlos III de Madrid), Journal of Dynamics and Games, July 2016: [Open link to working paper]: Abstract. In this paper we introduce in the Solow-Swan growth model a labor supply based on Malthusian ideas. We show that this model may yield several steady states and that an increase in total factor productivity might decrease the capital-labor ratio in a stable equilibrium.
“Why has it taken economists so long to learn that demography influences growth?” Jeff Williamson (1998)
1. Introduction. In this note we propose a model which combines the classical Solow (1956) and Swan (1956) model with ideas about population growth that are borrowed from Malthus (1798). We will refer to our model as a Malthus-Swan-Solow (MSS) model. Our model has no technical progress, no institutional change, no human capital and no land.
We assume that the rate of growth of population depends on the real wage in a continuous way. This function is a generalization of one used by Hansen and Prescott (2002).
We find that, as in the classical Solow-Swan model, there exists a steady-state value of the capital-labor ratio (Proposition 1). However, this steady state is not necessarily unique: Proposition 2 and Example 3 show that there may be an odd number of steady-state capital-labor ratios, and only the smallest and the largest of these are locally stable (Proposition 4). This implies that there might be two very different values of per capita income in the steady state: one small and one large. Finally, we find that an increase in total factor productivity may increase or decrease the capital-labor ratio in a stable steady state (Proposition 5), but it always increases per capita income (Proposition 6).
Summing up, the consideration of endogenous population in the Solow-Swan model brings new insights with respect to the standard model regarding the number, stability and comparative static properties of steady states. ...
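The multiplicity result is easy to see numerically. Below is a minimal sketch of the mechanism, not the paper's exact specification: Cobb-Douglas production, and an assumed hump-shaped population-growth response to the wage (a stylized demographic transition), with purely illustrative parameter values.

```python
import math

# Solow-Swan with an endogenous, wage-dependent population growth rate.
# Functional forms and parameters are illustrative assumptions, not the
# paper's specification.  Production: f(k) = A*k**alpha; the competitive
# wage is w(k) = (1 - alpha)*A*k**alpha.  Steady states of the
# capital-labor ratio k solve s*A*k**(alpha - 1) = delta + n(w(k)).
A, alpha, s, delta = 1.0, 0.3, 0.3, 0.05

def n(w):
    # Assumed hump-shaped population response to the real wage:
    # growth rises, then falls, as living standards improve.
    return 0.3 * math.exp(-(w - 0.8) ** 2 / 0.005)

def g(k):
    # kdot = k * g(k), so steady states are the zeros of g(k).
    w = (1 - alpha) * A * k ** alpha
    return s * A * k ** (alpha - 1) - (delta + n(w))

def steady_states(k_lo=0.1, k_hi=50.0, steps=5000):
    grid = [k_lo + i * (k_hi - k_lo) / steps for i in range(steps + 1)]
    roots = []
    for left, right in zip(grid, grid[1:]):
        if g(left) * g(right) < 0:            # sign change brackets a root
            lo, hi = left, right
            for _ in range(80):               # bisection
                mid = 0.5 * (lo + hi)
                if g(lo) * g(mid) <= 0:
                    hi = mid
                else:
                    lo = mid
            # locally stable iff kdot switches from + to - at the root
            roots.append((0.5 * (lo + hi), g(left) > 0))
    return roots

for k_star, stable in steady_states():
    print(f"k* = {k_star:6.3f}  {'stable' if stable else 'unstable'}")
```

With these assumed numbers the search finds three steady states, of which only the smallest and the largest are locally stable, reproducing the odd-number and stability pattern of Propositions 2 and 4.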
Thursday, July 27, 2017
Macroeconomists Can't Keep Ignoring Race and Gender: The models that researchers build to understand the economy tend to be blind to race and gender, as if macroeconomic policies typically affect blacks the same as whites and women the same as men. Increasingly, that's looking like the wrong way to go about it. ...
Gender has figured in some work, but is typically ignored. To my knowledge, no work within this research program has focused explicitly on race.
Two recent papers cast significant doubt on the wisdom of assuming a raceless and genderless society...
Wednesday, July 12, 2017
Why recessions followed by austerity can have a persistent impact: Economics students are taught from an early age that in the short run aggregate demand matters, but in the long run output is determined from the supply side. A better way of putting it is that supply adjusts to demand in the short run, but demand adjusts to supply in the long run. A key part of that conceptualisation is that long run supply is independent of short run movements in demand (booms or recessions). It is a simple conceptualisation that has been extremely useful in the past. Just look at the UK data shown in this post: despite oil crises, monetarism and the ERM recessions, UK output per capita appeared to come back to an underlying 2.25% trend after WWII.
Except not any more: we are currently more than 15% below that trend and since Brexit that gap is growing larger every quarter. Across most advanced countries, it appears that the global financial crisis (GFC) has changed the trend in underlying growth. You will find plenty of stories and papers that try to explain this as a downturn in the growth of supply caused by slower technical progress that both predated the GFC and that is independent of the recession caused by it.
In a previous post I looked at recent empirical evidence that told a different story: that the recession that followed the GFC appears to be having a permanent impact on output. You can tell this story in two ways. The first is that, on this occasion for some reason, supply had adjusted to lower demand. The second is that we are still in a situation where demand is below supply. ... [explains] ...
All this shows that there is no absence of ideas about how a great recession and a slow recovery could have lasting effects. If there is a problem, it is more that the simple conceptualisation that I talked about at the beginning of this post has too great a grip on the way many people think. If any of the mechanisms I have talked about are important, then it means that the folly of austerity has had an impact that could last for at least a decade rather than just a few years.
Monday, May 29, 2017
Cecchetti & Schoenholtz:
The Phillips Curve: A Primer: Economists have debated the relationship between inflation and unemployment at least since A.W. Phillips’s study of U.K. data from 1861 to 1957 was published 60 years ago. The idea that a tight or slack labor market should result in faster or slower wage gains seems like a natural corollary to standard economic thinking about how prices respond to deviations of demand from supply. But, over the years, disputes about this Phillips curve relationship have been and remain fierce.
As the U.S. labor market tightens, and unemployment approaches levels we have not seen in more than 15 years, the question is whether inflation is going to make a comeback. More broadly, how useful is the Phillips curve as a guide for Federal Reserve policymakers who wish to achieve a 2-percent inflation target over the long run?
To anticipate our conclusion, despite evidence of a negative relationship between wage inflation and unemployment, central banks ought not rely on a stable Phillips curve for setting monetary policy. ...
Tuesday, April 25, 2017
What Can Be Done to Improve the Episteme of Economics?: I think this is needed:
INET: Education Initiative: "We are thrilled that you are joining us at the Berkeley Spring 2017 Education Convening, Friday, April 28th 9am-5pm Blum Hall, B100 #5570, Berkeley, CA 94720-5570... https://www.ineteconomics.org/education/curricula-modules/education-initiative
...Sign up here: https://fs24.formsite.com/inet/form97/index.html or email firstname.lastname@example.org..."
I strongly share INET's view that things have gone horribly wrong, and that it is important to listen, learn, and brainstorm about how to improve economics education.
Let me just note six straws in the wind:
The macro-modeling discussion is wrong: The brilliant Olivier Blanchard https://piie.com/blogs/realtime-economic-issues-watch/need-least-five-classes-macro-models: "The current core... RBC (real business cycle) structure [model] with one main distortion, nominal rigidities, seems too much at odds with reality.... Both the Euler equation for consumers and the pricing equation for price-setters seem to imply, in combination with rational expectations, much too forward-lookingness.... The core model must have nominal rigidities, bounded rationality and limited horizons, incomplete markets and the role of debt..."
The macro-finance discussion is wrong: The efficient market hypothesis (EMH) claimed that movements in stock indexes were driven either by (a) changing rational expectations of future cash flows or by (b) changing rational expectations of interest rates on investment-grade bonds, so that expected returns were either (a) unchanged or (b) moved roughly one-for-one with returns on investment grade bonds. That claim lies in total shreds. Movements in stock indexes have either no utility-theoretic rationale at all or must be ascribed to huge and rapid changes in the curvature of investors' utility functions. Yet Robert Lucas claims that the EMH is perfect, perfect he tells us http://www.economist.com/node/14165405: "Fama tested the predictions of the EMH.... These tests could have come out either way, but they came out very favourably.... A flood of criticism which has served mainly to confirm the accuracy of the hypothesis.... Exceptions and 'anomalies' [are]... for the purposes of macroeconomic analysis and forecasting... too small to matter..."
The challenge posed by the 2007-9 financial crisis is too-often ignored: Tom Sargent https://www.minneapolisfed.org/publications/the-region/interview-with-thomas-sargent: "I was at Princeton then.... There were interesting discussions of many aspects of the financial crisis. But the sense was surely not that modern macro needed to be reconstructed.... Seminar participants were in the business of using the tools of modern macro, especially rational expectations theorizing, to shed light on the financial crisis..."
What smart economists have to say about policy is too-often dismissed: Then-Treasury Secretary Tim Geithner, according to Zach Goldfarb https://www.washingtonpost.com/blogs/wonkblog/post/geithner-stimulus-is-sugar-for-the-economy/2011/05/19/AGz9JvLH_blog.html: "The economic team went round and round. Geithner would hold his views close, but occasionally he would get frustrated. Once, as [Christina] Romer pressed for more stimulus spending, Geithner snapped. Stimulus, he told Romer, was 'sugar', and its effect was fleeting. The administration, he urged, needed to focus on long-term economic growth, and the first step was reining in the debt.... In the end, Obama signed into law only a relatively modest $13 billion jobs program, much less than what was favored by Romer and many other economists in the administration..."
The competitive model has too great a hold: "Brad, you're the only person I've ever heard say that Card-Krueger changed their mind on how much market power there is in the labor market..."
The problem is of very long standing indeed: John Maynard Keynes (1926) https://www.panarchy.org/keynes/laissezfaire.1926.html: "Some of the most important work of Alfred Marshall—to take one instance—was directed to the elucidation of the leading cases in which private interest and social interest are not harmonious. Nevertheless, the guarded and undogmatic attitude of the best economists has not prevailed against the general opinion that an individualistic laissez-faire is both what they ought to teach and what in fact they do teach..."
Monday, April 10, 2017
On the Need for (At Least) Five Classes of Macro Models: One of the best pieces of advice Rudi Dornbusch gave me was: Never talk about methodology. Just do it. Yet, I shall disobey and take the plunge.
The reason and the background for this blog is a project started by David Vines about DSGEs, how they performed in the crisis, and how they could be improved. Needled by his opinions, I wrote a PIIE Policy Brief. Then, in answer to the comments on the brief, I wrote a PIIE RealTime blog, and then yet a third, each time hopefully a little wiser. I thought I was done, but David organized a one-day conference on the topic, from which I learned a lot and which has led me to write my final (?) piece on the topic.
This piece has a simple theme: We need different types of macro models. One type is not better than the other. They are all needed, and indeed they should all interact. Such remarks would be trivial and superfluous if that proposition were widely accepted, and there were no wars of religion. But it is not, and there are.
Here is my attempt at typology, distinguishing between five types. (I limit myself to general equilibrium models. Much of macro must, however, be about building the individual pieces, constructing partial equilibrium models, and examining the corresponding empirical micro and macro evidence, pieces on which the general equilibrium models must then build.) In doing so, I shall, with apologies, repeat some of what was in the previous blogs. ...
Friday, February 17, 2017
NAIRU bashing: The NAIRU is the level of unemployment at which inflation is stable. Ever since economists invented the concept, people have poked fun at how difficult to measure and how elusive the NAIRU appears to be, and these articles often end with the proclamation that it is time we ditched the concept. Even good journalists can do it. But few of these attempts to trash the NAIRU answer a very simple and obvious question: how else do we link the real economy to inflation? ...
The NAIRU is one of those economic concepts which is essential to understand the economy but is extremely difficult to measure. ...
While we should not be obsessed by the 1970s, we should not wipe it from our minds either. Then, policy makers did in effect ditch the NAIRU, and we got uncomfortably high inflation. In 1980, policy in the US and UK changed and increased unemployment, and inflation fell. There is a relationship between inflation and unemployment; it is just very difficult to pin down. For most macroeconomists, the concept of the NAIRU really just stands for that basic macroeconomic truth. ...
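The textbook accelerationist Phillips curve captures that basic truth in one line: inflation today equals inflation yesterday plus a term in the gap between the NAIRU and actual unemployment. A minimal sketch, with illustrative parameter values rather than estimates:

```python
# Textbook accelerationist Phillips curve: inflation accelerates when
# unemployment is held below the NAIRU and stabilizes at the NAIRU.
# Parameter values are illustrative assumptions, not estimates.
NAIRU, beta = 0.05, 0.5   # 5% NAIRU, slope of the Phillips curve

def simulate(u_path, pi0=0.02):
    # pi_t = pi_{t-1} + beta * (u* - u_t), starting from 2% inflation
    pi, out = pi0, []
    for u in u_path:
        pi = pi + beta * (NAIRU - u)
        out.append(pi)
    return out

# Hold unemployment one point below the NAIRU for 10 periods:
boom = simulate([0.04] * 10)
# Keep unemployment at the NAIRU for 10 periods:
steady = simulate([0.05] * 10)
print(boom[-1], steady[-1])   # inflation ratchets up vs. stays at 2%
```

Holding unemployment below the NAIRU makes inflation rise period after period, the 1970s pattern described above, while keeping it at the NAIRU leaves inflation where it started.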
Tuesday, February 07, 2017
Larry Christiano on why the Great Recession happened, why it lasted so long, why it wasn't foreseen, and how it’s changing macroeconomic theory (the excerpt below is about the last of these, how it's changing theory):
The Great Recession: A Macroeconomic Earthquake, Federal Reserve Bank of Minneapolis: ...Impact on macroeconomics The Great Recession is having an enormous impact on macroeconomics as a discipline, in two ways. First, it is leading economists to reconsider two theories that had largely been discredited or neglected. Second, it has led the profession to find ways to incorporate the financial sector into macroeconomic theory.
At its heart, the narrative described above characterizes the Great Recession as the response of the economy to a negative shock to the demand for goods all across the board. This is very much in the spirit of the traditional macroeconomic paradigm captured by the famous IS-LM (or Hicks-Hansen) model, which places demand shocks like this at the heart of its theory of business cycle fluctuations. Similarly, the paradox-of-thrift argument is also expressed naturally in the IS-LM model.
The IS-LM paradigm, together with the paradox of thrift and the notion that a decision by a group of people could give rise to a welfare-reducing drop in output, had been largely discredited among professional macroeconomists since the 1980s. But the Great Recession seems impossible to understand without invoking paradox-of-thrift logic and appealing to shocks in aggregate demand. As a consequence, the modern equivalent of the IS-LM model—the New Keynesian model—has returned to center stage. (To be fair, the return of the IS-LM model began in the late 1990s, but the Great Recession dramatically accelerated the process.)
The return of the dynamic version of the IS-LM model is revolutionary because that model is closely allied with the view that the economic system can sometimes become dysfunctional, necessitating some form of government intervention. This is a big shift from the dominant view in the macroeconomics profession in the wake of the costly high inflation of the 1970s. Because that inflation was viewed as a failure of policy, many economists in the 1980s were comfortable with models that imply markets work well by themselves and government intervention is typically unproductive.
Accounting for the financial sector
The Great Recession has had a second important effect on the practice of macroeconomics. Before the Great Recession, there was a consensus among professional macroeconomists that dysfunction in the financial sector could safely be ignored by macroeconomic theory. The idea was that what happens on Wall Street stays on Wall Street—that is, it has as little impact on the economy as what happens in Las Vegas casinos. This idea received support from the U.S. experiences in 1987 and the early 2000s, when the economy seemed unfazed by substantial stock market volatility. But the idea that financial markets could be ignored in macroeconomics died with the Great Recession.
Now macroeconomists are actively thinking about the financial system, how it interacts with the broader economy and how it should be regulated. This has necessitated the construction of new models that incorporate finance, and the models that are empirically successful have generally integrated financial factors into a version of the New Keynesian model, for the reasons discussed above. (See, for example, Christiano, Motto and Rostagno 2014.)
Economists have made much progress in this direction, too much to summarize in this brief essay. One particularly notable set of advances is seen in recent research by Mark Gertler, Nobuhiro Kiyotaki and Andrea Prestipino. (See Gertler and Kiyotaki 2015 and Gertler, Kiyotaki and Prestipino 2016.) In their models, banks finance long-term assets with short-term liabilities. This liquidity mismatch between assets and liabilities captures the essential reason that real-world financial institutions are vulnerable to runs. As such, the model enables economists to think precisely about the narrative described above (and advocated by Bernanke 2010 and others) about what launched the Great Recession in 2007. Refining models of this kind is essential for understanding the root causes of severe economic downturns and for designing regulatory and other policies that can prevent a recurrence of disasters like the Great Recession.
Wednesday, December 07, 2016
This is by David Glasner:
A Primer on Equilibrium: After my latest post about rational expectations, Henry from Australia, one of my most prolific commenters, has been engaging me in a conversation about what assumptions are made – or need to be made – for an economic model to have a solution and for that solution to be characterized as an equilibrium, and in particular, a general equilibrium. Equilibrium in economics is not always a clearly defined concept, and it can have a number of different meanings depending on the properties of a given model. But the usual understanding is that the agents in the model (as consumers or producers) are trying to do as well for themselves as they can, given the endowments of resources, skills and technology at their disposal and given their preferences. The conversation was triggered by my assertion that rational expectations must be “compatible with the equilibrium of the model in which those expectations are embedded.”
That was the key insight of John Muth in his paper introducing the rational-expectations assumption into economic modelling. So in any model in which the current and future actions of individuals depend on their expectations of the future, the model cannot arrive at an equilibrium unless those expectations are consistent with the equilibrium of the model. If the expectations of agents are incompatible or inconsistent with the equilibrium of the model, then, since the actions taken or plans made by agents are based on those expectations, the model cannot have an equilibrium solution. ...
That the correctness of expectations implies equilibrium is the consequence of assuming that agents are trying to optimize their decision-making process, given their available and expected opportunities. If all expected opportunities are correctly foreseen, then all decisions will have been the optimal decisions under the circumstances. But nothing has been said that requires all expectations to be correct, or even that it is possible for all expectations to be correct. If an equilibrium does not exist (merely being able to write down an economic model does not mean that a solution to the model exists), then the sweet spot where all expectations are consistent and compatible is just a blissful fantasy. So a logical precondition to showing that rational expectations are even possible is to prove that an equilibrium exists. There is nothing circular about the argument.
Now the key to proving the existence of a general equilibrium is to show that the general equilibrium model implies the existence of what mathematicians call a fixed point. ...
After a long discussion, he ends with:
The problem of price expectations in an intertemporal general-equilibrium system is central to the understanding of macroeconomics. Hayek, who was the father of intertemporal equilibrium theory, which he was the first to outline in a 1928 paper in German, and who explained the problem with unsurpassed clarity in his 1937 paper “Economics and Knowledge,” unfortunately did not seem to acknowledge its radical consequences for macroeconomic theory, and the potential ineffectiveness of self-equilibrating market forces. My quarrel with rational expectations as a strategy of macroeconomic analysis is its implicit assumption, lacking any analytical support, that prices and price expectations somehow always adjust to equilibrium values. In certain contexts, when there is no apparent basis to question whether a particular market is functioning efficiently, rational expectations may be a reasonable working assumption for modelling observed behavior. However, when there is reason to question whether a given market is operating efficiently or whether an entire economy is operating close to its potential, to insist on principle that the rational-expectations assumption must be made, to assume, in other words, that actual and expected prices adjust rapidly to their equilibrium values allowing an economy to operate at or near its optimal growth path, is simply, as I have often said, an exercise in circular reasoning and question begging.
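To make the fixed-point idea concrete, here is a toy two-good, two-agent Cobb-Douglas exchange economy of my own construction (not Glasner's), in which the market-clearing price is precisely a fixed point of a simple price-adjustment (tâtonnement) map:

```python
# Toy two-good, two-agent Cobb-Douglas exchange economy, used to
# illustrate equilibrium as a fixed point of a price-adjustment map.
# Agent 1 owns 1 unit of good 1 and spends share a of income on good 1;
# agent 2 owns 1 unit of good 2 and spends share b of income on good 1.
# All numbers are illustrative assumptions.
a, b = 0.6, 0.3

def excess_demand_1(p):
    # p = price of good 1; the price of good 2 is normalized to 1 - p.
    # Demand for good 1: a*(p*1)/p + b*((1 - p)*1)/p; supply is 1.
    return a + b * (1 - p) / p - 1

def tatonnement(p=0.5, lam=0.3, tol=1e-10, max_iter=10000):
    # Iterate the map T(p) = p + lam * z(p); a fixed point T(p*) = p*
    # is exactly a market-clearing price, where z(p*) = 0.
    for _ in range(max_iter):
        p_next = min(max(p + lam * excess_demand_1(p), 1e-6), 1 - 1e-6)
        if abs(p_next - p) < tol:
            return p_next
        p = p_next
    return p

p_star = tatonnement()
# Closed form: clearing requires (1 - p)/p = (1 - a)/b, so p = b/(b + 1 - a).
print(p_star, 0.3 / 0.7)
```

Here the iteration converges to p* = b/(b + 1 - a) = 3/7, where excess demand vanishes. The existence theorems Glasner alludes to generalize this one-dimensional fixed-point logic to many goods via Brouwer's and Kakutani's theorems, but nothing in those theorems guarantees that actual expectations and prices ever find that point.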
Thursday, November 03, 2016
Ann Pettifor on mainstream economics: Ann has an article that talks about the underlying factor behind the Brexit vote. Her thesis, that it represents the discontent of those left behind by globalization, has been put forward by others. Unlike Brad DeLong, I have few problems with seeing this as a contributing factor to Brexit, because it is backed up by evidence, but like Brad DeLong I doubt it generalizes to other countries. Unfortunately her piece is spoilt by a final section that is a tirade against mainstream economists which goes way over the top. ...
Most economists have certainly encouraged the idea that globalization would increase overall prosperity, and they have been proved right. It is also true that many of these economists did not admit or stress enough that there would be losers as a result of this process who needed compensating from the increase in aggregate prosperity. But once again I doubt very much that anything would have changed if they had. And if they didn’t think enough about it in the past, they are now: see Paul De Grauwe here for example.
There is a regrettable (but understandable) tendency by heterodox economists on the left to try and pretend that economics and neoliberalism are somehow inextricably entwined. The reality is that neoliberal advocates do use some economic ideas as justification, but they ignore others which go in the opposite direction. As I often point out, many more academic economists spend their time analyzing market imperfections than trying to show markets always work on their own. They get Nobel prizes for this work. I find attempts to suggest that economics somehow helped create austerity particularly annoying, as I (and many others) have spent many blog posts showing that economic theory and evidence demonstrate that austerity was a huge mistake.
Wednesday, October 26, 2016
Being honest about ideological influence in economics: Noah Smith has an article that talks about Paul Romer’s recent critique of macroeconomics. ... He says the fundamental problem with macroeconomics is lack of data, which is why disputes seem to take so long to resolve. That is not in my view the whole story.
If we look at the rise of Real Business Cycle (RBC) research a few decades ago, that was only made possible because economists chose to ignore evidence about the nature of unemployment in recessions. There is overwhelming evidence that in a recession employment declines because workers are fired rather than choosing not to work, and that the resulting increase in unemployment is involuntary (those fired would have rather retained their job at their previous wage). Both facts are incompatible with the RBC model.
In the RBC model there is no problem with recessions, and no role for policy to attempt to prevent them or bring them to an end. The business cycle fluctuations in employment they generate are entirely voluntary. RBC researchers wanted to build models of business cycles that had nothing to do with sticky prices. Yet here again the evidence was quite clear...
Why would researchers try to build models of business cycles where these cycles required no policy intervention, and ignore key evidence in doing so? The obvious explanation is ideological. I cannot prove it was ideological, but it is difficult to understand why - in an area which as Noah says suffers from a lack of data - you would choose to develop theories that ignore some of the evidence you have. The fact that, as I argue here, this bias may have expressed itself in the insistence on following a particular methodology at the expense of others does not negate the importance of that bias. ...
I suspect there is a reluctance among the majority of economists to admit that some among them may not be following the scientific method but may instead be making choices on ideological grounds. This is the essence of Romer’s critique, first in his own area of growth economics and then for business cycle analysis. Denying or marginalizing the problem simply invites critics to apply to the whole profession a criticism that only applies to a minority.
Tuesday, October 18, 2016
Yellen poses important post-Great Recession macroeconomic questions: Last week at a Federal Reserve Bank of Boston conference, Federal Reserve Chair Janet Yellen gave a speech on macroeconomics research in the wake of the Great Recession. She ... lists four areas for research, but let’s look more closely at the first two groups of questions that she elevates.
The first is the influence of aggregate demand on aggregate supply. As Yellen notes, the traditional way of thinking about this relationship would be that demand, a short-run phenomenon, has no significant effect on aggregate supply, which determines long-run economic growth. The potential growth rate of an economy is determined by aggregate supply...
Yellen points to research that increasingly finds so-called hysteresis effects in the macroeconomy. Hysteresis, a term borrowed from physics, is the idea that short-run shocks to the economy can alter its long-term trend. One example of hysteresis is workers who lose jobs in recessions and then aren't drawn back into the labor market but rather are permanently locked out... Interesting new research argues that hysteresis may affect not just the labor supply but also the rate of productivity growth.
If hysteresis is prevalent in the economy, then U.S. policymakers need to rethink their fiscal and monetary policy priorities. The effects of hysteresis may mean that economic recoveries need to run longer and hotter than previously thought in order to get workers back into the labor market or allow other resources to get back into full use.
The other set of open research questions that Yellen raises is the influence of “heterogeneity” on aggregate demand. In many models of the macroeconomy, households are characterized by a representative agent... In short, they are assumed to be homogeneous. As Yellen notes in her speech, overall home equity remained positive after the bursting of the housing bubble, so a representative agent would have maintained positive equity in their home.
Yet a wealth of research in the wake of the Great Recession finds that millions of households had mortgages that were “underwater” and did not have positive wealth—a big reason for the severity of the downturn. Ignoring this heterogeneity in the housing market and its effects on economic inequality seems like something modern macroeconomics needs to resolve. Economists are increasingly moving in this direction, but even more movement would be very helpful.
Yellen raises other areas of inquiry in her speech, including better understanding how the financial system is linked to the real economy and how the dynamics of inflation are determined. ... As Paul Krugman has noted several times over the past several years, the Great Recession doesn’t seem to have provoked the same rethink of macroeconomics compared to the Great Depression, which ushered in Keynesianism, and the stagflation of the 1970s, which led to the ascendance of new classical economics. The U.S. economy is similarly dealing with a “new normal.” Macroeconomics needs to respond to this reality.
Tuesday, October 11, 2016
Ricardian Equivalence, benchmark models, and academics’ response to the financial crisis: In his further thoughts on DSGE models (or perhaps his response to those who took up his first thoughts), Olivier Blanchard says the following: “For conditional forecasting, i.e. to look for example at the effects of changes in policy, more structural models are needed, but they must fit the data closely and do not need to be religious about micro foundations.”
He suggests that there is wide agreement about the above. I certainly agree, but I’m not sure most academic macroeconomists do. I think they might say that policy analysis done by academics should involve microfounded models. Microfounded models are, by definition, religious about microfoundations and do not fit the data closely. Academics are taught in grad school that all other models are flawed because of the Lucas critique, an argument which assumes that your microfounded model is correctly specified. ...
Let me be more specific. The core macromodel that many academics would write down involves two key behavioural relationships: a Phillips curve and an IS curve. The IS curve is purely forward looking: consumption depends on expected future consumption. It is derived from an infinitely lived representative consumer, which means Ricardian Equivalence holds in this benchmark model.
Ricardian Equivalence means that a bond financed tax cut (which will be followed by tax increases) has no impact on consumption or output. One stylised empirical fact that has been confirmed by study after study is that consumers do spend quite a large proportion of any tax cut. That they do so is no deep mystery: the benchmark model assumes the intertemporal consumer is never credit constrained, and in reality many consumers are. In that particular sense academics’ core model does not fit Blanchard’s prescription that it should “fit the data closely”.
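The point can be made concrete with a toy calculation. The sketch below is an illustrative two-period example, not the benchmark model itself; the share of constrained consumers and the implicit zero interest rate are my assumptions. The aggregate response to a bond-financed tax cut is zero only when no one is credit constrained:

```python
# Sketch of why Ricardian Equivalence fails once some consumers are
# credit constrained ("hand-to-mouth"). Two-period setting; the tax cut
# today is repaid with an equal tax increase tomorrow. Numbers are
# illustrative assumptions.

def consumption_response(tax_cut, share_constrained):
    """Change in aggregate current consumption after a bond-financed tax cut."""
    ricardian_response = 0.0        # lifetime income unchanged: no effect
    constrained_response = tax_cut  # hand-to-mouth: spend the whole cut
    return ((1 - share_constrained) * ricardian_response
            + share_constrained * constrained_response)

response_all_ricardian = consumption_response(100.0, share_constrained=0.0)
response_mixed = consumption_response(100.0, share_constrained=0.3)
```

With every consumer Ricardian the response is zero, matching the benchmark model; with 30 percent of consumers constrained, a sizeable fraction of the tax cut is spent, matching the stylised empirical fact.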
Does this core model influence the way some academics think about policy? I have written about how mainstream macroeconomics neglected, before the financial crisis, the importance of shifting credit conditions for consumption, and speculated that this neglect owed something to the insistence on microfoundations. That links the methodology macroeconomists use, or more accurately their belief that other methodologies are unworthy, to policy failures (or at least inadequacy) associated with that crisis and its aftermath.
I wonder if the benchmark model also contributed to a resistance among many (not a majority, but a significant minority) to using fiscal stimulus when interest rates hit their lower bound. In the benchmark model increases in public spending still raise output, but some economists do worry about wasteful expenditures. For these economists tax cuts, particularly if aimed at those who are non-Ricardian, should be an attractive alternative means of stimulus, but if your benchmark model says they will have no effect, I wonder whether this (consciously or unconsciously) biases you against such measures.
In my view, the benchmark models that academic macroeconomists carry round in their head should be exactly the kind Blanchard describes: aggregate equations which are consistent with the data, and which may or may not be consistent with current microfoundations. They are the ‘useful models’ that Blanchard talked about... These core models should be under constant challenge from both partial equilibrium analysis, estimation in all its forms and analysis using microfoundations. But when push comes to shove, policy analysis should be done with models that are the best we have at meeting all those challenges, and not models with consistent microfoundations.
Sunday, October 02, 2016
How Seriously Should We Take the New Keynesian Model?: Nick Rowe continues his long twilight struggle to take the New Keynesian DSGE model seriously, to understand what the model says, and to explain to the world what is really going on in it. I said that I think this is a Sisyphean task. Let me expand on that here...
In the basic New Keynesian model, you see, the central bank “sets the nominal interest rate” and that, combined with the inflation rate, produces the real interest rate that people face when they use their Euler equation to decide how much less (or more) than their income they should spend. When the interest rate is high, saving to spend later is cheap and so people do more of it and spend less now. When the interest rate is low, saving to spend later is expensive and so people do less of it and spend more now.
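The Euler-equation logic here fits in a few lines. The sketch below is a hedged illustration: the CRRA utility form and the parameter values are my assumptions, not anything specific to Rowe's or the New Keynesian model's calibration. It shows how a higher real rate tilts spending toward the future:

```python
# The consumption Euler equation behind this story: u'(c_t) = beta*(1+r)*u'(c_{t+1}).
# With CRRA utility (curvature sigma) this gives consumption growth
# c_{t+1}/c_t = (beta*(1+r))**(1/sigma): a higher real rate means more
# saving and less spending now. Parameter values are illustrative assumptions.

def consumption_growth(r, beta=0.99, sigma=2.0):
    """Gross consumption growth implied by the Euler equation."""
    return (beta * (1 + r)) ** (1 / sigma)

growth_low_rate = consumption_growth(r=0.01)
growth_high_rate = consumption_growth(r=0.05)
```

When beta*(1+r) = 1 consumption is flat; raising the rate above that point steepens the consumption path, which is exactly the "spend less now" channel in the paragraph above.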
But how does the central bank “set the nominal interest rate” in practice? What does it physically (or, rather, financially) do?
In a normal IS-LM model, there are three commodities:
- currently-produced goods and services,
- bonds, and
- money.
In a normal IS-LM model, the central bank raises the interest rate by selling some of the bonds it has in its portfolio for cash and burns the cash it thus collects (for cash is, remember, nothing but a nominal liability of the central bank). It thus creates an excess supply (at the previous interest rate) for bonds and an excess demand (at the previous interest rate) for cash. Those wanting to hold more cash slow down their purchases of currently-produced goods and services (thus creating an excess supply of currently produced goods and services) and sell some of their bonds (thus decreasing the excess supply of bonds). Those wanting to hold fewer bonds sell bonds for cash. Thus the interest rate rises, the flow quantity of currently-produced goods and services falls, and the sticky price of currently-produced goods and services stays where it is. Adjustment continues until supply equals demand for both money and bonds at the new equilibrium interest rate and at a new flow quantity of currently produced goods and services.
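The comparative statics in that paragraph can be reproduced with a two-equation linear IS-LM system. The sketch below is a minimal illustration with made-up coefficients, not a calibrated model:

```python
# Minimal linear IS-LM sketch of the open-market-sale story: the central
# bank shrinks the money supply M, the interest rate r rises, and output Y
# falls while the sticky price level stays put.
# Coefficient values are illustrative assumptions.

def is_lm_equilibrium(M, A=100.0, b=2.0, k=0.5, h=1.0):
    """Solve IS: Y = A - b*r  and  LM: M = k*Y - h*r  jointly for (Y, r)."""
    r = (k * A - M) / (k * b + h)   # substitute IS into LM and solve for r
    Y = A - b * r
    return Y, r

Y0, r0 = is_lm_equilibrium(M=40.0)   # before the open-market sale
Y1, r1 = is_lm_equilibrium(M=35.0)   # after: less money in circulation
```

Shrinking M from 40 to 35 raises the equilibrium interest rate and lowers the flow of output, exactly the adjustment the paragraph walks through in words.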
In the New Keynesian model?…
Nick Rowe: Cheshire Cats and New Keynesian Central Banks:
How can money disappear from a New Keynesian model, but the Central Bank still set a nominal rate of interest and create a recession by setting it too high?…
Ignore what New Keynesians say about their own New Keynesian models and listen to me instead. I will tell you how it is possible…. The Cheshire Cat has disappeared, but its smile remains. And its smile (or frown) has real effects. The New Keynesian model is a model of a monetary exchange economy, not a barter economy. The rate of interest is the rate of interest paid on central bank money, not on bonds. Raising the interest rate paid on money creates an excess demand for money which creates a recession. Or it makes no sense at all.
I will take “it makes no sense at all” for $2000, Alex…
Either there is a normal money-supply money-demand sector behind the model, which is brought out whenever it is wanted but suppressed whenever it raises issues that the model builders want ignored, or it makes no sense at all…
Wednesday, September 21, 2016
Monday, September 19, 2016
How to Build a Better Macroeconomics: Methodology Specifics: I want to follow up on my comments about Paul Romer’s interesting recent piece by being more precise about how I believe macroeconomic research could be improved.
Macro papers typically proceed as follows:
- Question stated.
- Some reduced form analysis to "motivate" the question/answer.
- Question inputted into model. Model is a close variant of prior models grounded in four or five 1980s frameworks. The variant is generally based on introspection combined with some calibration of relevant parameters.
- Answer reported.
The problem is that the prior models have a host of key behavioral assumptions that have little or no empirical grounding. In this pdf, I describe one such behavioral assumption in some detail: the response of current consumption to persistent interest rate changes.
But there are many other such assumptions embedded in our models. For example, most macroeconomists study questions that depend crucially on how agents form expectations about the future. However, relatively few papers use evidence of any kind to inform their modeling of expectations formation. (And no, it’s not enough to say that Tom Sargent studied the consequences of one particular kind of learning in the late 1980s!) The point is that if your paper poses a question that depends on how agents form expectations, you should provide evidence from experimental or micro-econometric sources to justify your approach to expectation formation in your particular context.
So, I suggest the following would be a better approach:
1. Question stated.
2. Thorough theoretical analysis of key mechanisms/responses that are likely to inform the answer to the question (perhaps via "toy" models?).
3. Find evidence for ranges of magnitudes of relevant mechanisms/responses.
4. Build and evaluate a range of models informed by this evidence. (Identification limitations are likely to mean that, given available data, there is a range of models that will be relevant in addressing most questions.)
5. Range of answers to (1), given (4).
Should all this be done in one paper? Probably not. I suspect that we need a more collaborative approach to our questions - a team works on (2), another team works on (3), a third team works on (4), and we arrive at (5). I could readily see each step as a valuable contribution to economic science.
In terms of (3) - evidence - our micro colleagues can be a great source on this dimension. In my view, the most useful labor supply paper for macroeconomists in the past thirty years is this one - and it’s not written by a macroeconomist.
(If people know of existing papers that follow this approach, feel free to email me a reference at email@example.com.)
None of these ideas are original to me. They were actually exposited nearly forty years ago. The central idea is that individual responses can be documented relatively cheaply, occasionally by direct experimentation, but more commonly by means of the vast number of well-documented instances of individual reactions to well-specified environmental changes made available "naturally" via censuses, panels, other surveys, and the (inappropriately maligned as "casual empiricism") method of keeping one's eyes open.
I’m not totally on board with the author in what he says here. I'm a lot less enthralled by the value of “casual empiricism” in a world in which most macroeconomists mainly spend their time with other economists, but otherwise agree wholeheartedly with these words. And I probably see more of a role for direct experimentation than the author does. But those are both quibbles.
And these words from the same article seem even more apt: Researchers … will appreciate the extent to which … [this agenda] describes hopes for the future, not past accomplishments. These hopes might, without strain, be described as hopes for a kind of unification, not dissimilar in spirit from the hope for unification which informed the neoclassical synthesis. What I have tried to do above is to stress the empirical (as opposed to the aesthetic) character of these hopes, to try to understand how such quantitative evidence about behavior as we may reasonably expect to obtain in society as it now exists might conceivably be transformed into quantitative information about the behavior of imagined societies, different in important ways from any which have ever existed. This may seem an intimidatingly ambitious way to state the goal of an applied subfield of a marginally respectable science, but is there a less ambitious way of describing the goal of business cycle theory?
Somehow, macroeconomists have gotten derailed from this vision of a micro-founded unification and retreated into a hermetically sealed world, where past papers rely on prior papers' flawed foundations. We need to get back to the ambitious agenda that Robert Lucas put before us so many years ago.
(I admit that I'm cherry-picking like crazy from Lucas' 1980 classic JMCB piece. For example, one of Lucas' main points in that article was that he distrusted disequilibrium modeling approaches because they gave rise to too many free parameters. I don't find that argument all that relevant in 2016 - I think that we know more now than in 1980 about how micro-data can be fruitfully used to discipline our modeling of firm behavior. And I would suspect that Lucas would be less than fully supportive of what I write about expectations - but I still think I'm right!)
Sunday, September 04, 2016
Telling macro stories with micro: Don't let the equations, data, or jargon fool you, economists are avid storytellers. Our "stories" may not fit neatly into the seven universal plots, but after a while it's easy to spot some patterns. A good story paper in economics, according to David Romer, has three characteristics: a viewpoint, a lever, and a result.
Most blog or media coverage of an economics paper focuses on the result. Makes sense given the audience, but buyer beware. Economists dissecting a paper spend more time on the lever, the how-did-they-get-the-result part. And coming up with new levers is a big chunk of research. The viewpoint--the underlying assumptions, the what's-central-to-the-story--tends to get short shrift. Of course, the viewpoint matters (often that's what defines a story as economics), but it usually holds across many papers. Best to focus on the new stuff.
Except when the viewpoint comes under scrutiny, then the stories can really change. ...
How much does micro matter for macro?
One long-standing viewpoint in economics is that changes in the macro-economy can largely be understood by studying changes in macro aggregates. Ironically, this viewpoint even survived macro's push to micro foundations with a "representative agent" stepping in as the missing link between aggregate data and micro theory. As a macro forecaster, I understand the value of the aggregates-only simplification. As an applied micro researcher, I am pretty sure it fails us from time to time. Thankfully, an ever-growing body of research and commentary is helping to identify times when differences at the micro level are relevant for macro outcomes. This is not new--issues of aggregation in macro go waaay back--but our levers, with rich, timely micro data and high-powered computation, are improving rapidly.
I focus in this post on differences in household behavior, particularly related to consumer spending, since that's the area I know best. And I want to discuss results from an ambitious new paper: "Macroeconomics and Household Heterogeneity" by Krueger, Mitman, and Perri. tldr: I am skeptical of their results, above all the empirics, but I really like what they are trying to do: shift the macro viewpoint. More on this paper below, but I also want to set it in the context of macro storytelling. ...
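A tiny example shows why household differences can matter for aggregates. The MPC values below are purely illustrative assumptions (the empirical literature generally finds much higher MPCs for constrained, low-wealth households than for wealthy ones):

```python
# Why heterogeneity matters for aggregate demand: the same total wealth
# loss produces different spending declines depending on who bears it.
# MPC values and loss splits are illustrative assumptions.

def spending_drop(losses, mpcs):
    """Aggregate consumption decline from household-level wealth losses."""
    return sum(loss * mpc for loss, mpc in zip(losses, mpcs))

mpcs = [0.05, 0.50]   # [wealthy household, underwater household]

# Representative-agent view: a 100-unit loss is spread evenly.
drop_even = spending_drop([50.0, 50.0], mpcs)
# Actual incidence: the underwater household bears most of the loss.
drop_concentrated = spending_drop([20.0, 80.0], mpcs)
```

The aggregate wealth loss is identical in both cases, yet concentrating it on the high-MPC household produces a much larger fall in spending, which is the basic reason an aggregates-only view can fail.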
There's quite a bit more.
Monday, August 15, 2016
Here is what I like and have found most useful about Dynamic Stochastic General Equilibrium (DSGE) models, also known as New Keynesian (NK) models. The original NK models were low dimensional – the simplest version reduces to a 3-equation model, while DSGE models are now typically much more elaborate. What I find attractive about these models can be stated in terms of the basic NK/DSGE model.
First, because it is a carefully developed, micro-founded model incorporating price frictions, the NK model makes it possible to incorporate in a disciplined way the various additional sectors, distortions, adjustment costs, and parametric detail found in many NK/DSGE models. Theoretically this is much more attractive than starting with a reduced form IS-LM model and adding features in an ad hoc way. (At the same time I still find ad hoc models useful, especially for teaching and informal policy analysis, and the IS-LM model is part of the macroeconomics canon).
Second, and this is particularly important for my own research, the NK model makes explicit and gives a central role to expectations about future economic variables. The standard linearized three-equation NK model in output, inflation and interest rates has current output and inflation depending in a specified way on expected future output and inflation. The dependence of output on expected future output and future inflation comes through the household dynamic optimization condition, and the dependence of inflation on expected future inflation arises from the firm’s optimal pricing equation. The NK model thus places expectations of future economic variables front and center, and does so in a disciplined way.
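In linearized form, the three-equation system described here is typically written as follows (one standard notation; coefficient names vary across textbooks):

```latex
\begin{aligned}
x_t &= \mathbb{E}_t x_{t+1} - \tfrac{1}{\sigma}\bigl(i_t - \mathbb{E}_t \pi_{t+1} - r_t^{n}\bigr)
  && \text{(IS curve, from household optimization)}\\
\pi_t &= \beta\,\mathbb{E}_t \pi_{t+1} + \kappa\, x_t
  && \text{(Phillips curve, from optimal pricing)}\\
i_t &= \phi_\pi \pi_t + \phi_x x_t
  && \text{(interest-rate rule)}
\end{aligned}
```

Here $x_t$ is the output gap, $\pi_t$ inflation, and $i_t$ the nominal policy rate: current output and inflation depend on their expected future values exactly as described above, the first equation coming from the household's dynamic optimization condition and the second from the firm's pricing problem.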
Third, while the NK model is typically solved under rational expectations (RE), it can also be viewed as providing the temporary equilibrium framework for studying the system under relaxations of the RE hypothesis. I particularly favor replacing RE with boundedly rational adaptive learning and decision-making (AL). Incorporating AL is especially fruitful in cases where there are multiple RE solutions, and AL brings out many Keynesian features of the NK model that extend IS-LM. In general I have found micro-founded macro models of all types to be ideal for incorporating bounded rationality, which is most naturally formulated at the agent level.
Fourth, while the profession as a whole seemed to many of us slow to appreciate the implications of the NK model for policy during and following the financial crisis, this was not because the NK model was intrinsically defective (the neglect of financial frictions by most, though not all, DSGE modelers was also a deficiency in most textbook IS-LM models). This was really, I think, because many macroeconomists using NK models in 2007-8 did not fully appreciate the Keynesian mechanisms present in the model.
However, many of us were alert to the NK model fiscal policy implications during the crisis. For example, in Evans, Guse and Honkapohja (“Liquidity traps, learning and stagnation,” 2008, European Economic Review), using an NK model with multiple RE solutions because of the liquidity trap, we showed, using the AL approach to expectations, that when there is a very large negative expectations shock, fiscal as well as monetary stimulus may be needed, and indeed a temporary fiscal stimulus that is large enough and early enough may be critical for avoiding a severe recession or depression. Of course such an argument could have been made using extensions of the ad hoc IS-LM model, but my point is that this policy implication was ready to be found in the NK model, and the key results center on the primacy of expectations.
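The flavor of the adaptive-learning approach can be conveyed in a few lines. The sketch below uses a constant-gain updating rule, one common AL specification; the gain and the inflation path are my illustrative assumptions, not the calibration in Evans, Guse and Honkapohja:

```python
# Sketch of adaptive learning (AL): instead of rational expectations,
# agents revise a forecast toward what they last observed, using a
# constant-gain rule. Gain and data path are illustrative assumptions.

def update_forecast(forecast, observed, gain=0.1):
    """One constant-gain learning step: move the forecast toward the data."""
    return forecast + gain * (observed - forecast)

# A persistent negative inflation surprise drags expectations down:
forecast = 2.0                       # inherited inflation forecast (percent)
for actual_inflation in [0.0] * 30:  # inflation stuck at zero for 30 periods
    forecast = update_forecast(forecast, actual_inflation)
```

After a long run of bad news the forecast has fallen most of the way to zero, which is the sense in which a large negative expectations shock can become self-reinforcing and call for early, sizeable stimulus.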
Finally, it should go without saying that NK/DSGE modeling should not be the one and only style. Most graduate-level core macro courses teach a wide range of macro models, and I see a diversity of innovations at the research frontier that will continue to keep macroeconomics vibrant and relevant.
Tuesday, August 09, 2016
Murky Macroeconomics: ...I realized something not too flattering about myself: I’m feeling nostalgic for 2011 or so.
Why? It was, of course, a terrible time for much of the world, and especially for anyone without a job. But for ... an economist ... it was a time of wonderful intellectual clarity. Liquidity-trap macroeconomics ... had become the story of the day. And the basic message of the models — that everything changes when you hit the zero lower bound — was being overwhelmingly confirmed by experience.
The thing is, it was all beautifully hard-edged: a crisp boundary at zero, a sharp change in the impact of monetary and fiscal policy when you hit that boundary. And the predictions we made came out consistently right.
But now things have gotten a bit, well, murky. The zero lower bound is not, it turns out, quite as hard a boundary as we thought. ...I’d be surprised if any central bank is willing to go much if at all below minus one percent — but it turns out to be a sort of a fuzzy no-man’s-land rather than a line that cannot be crossed.
More important, probably, is the fact that two of the major advanced economies — the US and, believe it or not, Japan — are arguably quite close to full employment. We don’t know how close... But you can no longer argue that supply limits are irrelevant.
Correspondingly, you can also no longer argue with confidence that there can be no crowding out, because the Fed won’t raise rates. You can argue that it shouldn’t — and I would — but we are maybe, possibly, on our way out of the liquidity trap.
So we’re not in the simple, depressed-economy world of 2011 anymore. But here’s the thing: we’re not in what we used to call a normal macroeconomic situation either. Maybe we’re close to full employment, but maybe not, and that’s with near-zero interest rates; also, it’s all too easy to imagine adverse shocks in the near future, and not at all clear how the Fed could or would respond. We are, if you like, half-out of the liquidity trap, with one foot on dry land — but the other foot is still hanging over the edge, and it wouldn’t take much to topple us right back in.
What I would argue is that in this murky, fragile situation we should be conducting policy largely as if we were still in the trap — because we badly need to get both feet firmly on dry land with some distance between us and the quicksand. ... But it’s not the crystalline case we used to be able to make.
Still, we need to deal with this murky situation right, which means embracing the uncertainty as part of the argument. Make murkiness great again!
Thursday, June 30, 2016
The continued rigidity of wages in the United States: “Wage rigidity” is an important feature of many models of the macroeconomy...
Some research on wage rigidity challenges this assumption. Pointing to data on individual wage growth, some economists argue that the wages of new hires are more important. If wages are really rigid, then the inflation-adjusted wages of new hires won’t vary as recessions come and go. Yet these researchers can point to data showing the wages of new hires moving up during economic expansions and down during recessions. So perhaps wages are more flexible than some think.
Now comes a new paper that shows how the cyclical nature of the wages of new hires isn’t really evidence against wage rigidity. The working paper, by economists Mark Gertler of New York University, Christopher Huckfeldt of Cornell University, and Antonella Trigari of Bocconi University, was released earlier this month. The three economists’ major point is that looking at the wages of all new hires in the United States lumps together two groups of workers with different experiences. There are new hires who were previously unemployed, and then there are new hires who were previously employed. ...
What they find is that the trends in wages for these two different groups of new hires are clearly different. The wages for new hires from the unemployment line don’t vary much more over time than the wages of already employed workers. But the wages of new hires from the ranks of the already employed do vary. This phenomenon, however, is less about flexible wages and more about workers moving up the job ladder, which mostly only happens during economic expansions, and is the reason why wages for these new hires move with economic cycles.
Outside of the implications for macroeconomic models of the labor market during recessions, the results from Gertler, Huckfeldt, and Trigari are also a reminder of the effects an economic downturn can have on workers’ career earnings. Recessions hinder the hiring of already employed workers, which hurts their chances of climbing the job ladder and future wage gains. Downturns don’t just harm the workers who lose jobs, but also the ones who keep their jobs.
Tuesday, June 28, 2016
I have a new column:
Why the Public Has Stopped Paying Attention to Economists: The predictions from economists about the consequences of Brexit were widely ignored. That shouldn’t be surprising. In recent years the public has lost faith in the economics profession.
One reason for the lack of faith is the failure to predict the Great Recession, but the public’s dismissal of macroeconomists is based upon more than the failure to foresee the dangers the housing bubble posed for the economy. It is also due to false promises about the benefits to the working class from globalization, tax cuts for the wealthy, and trade agreements – promises that were often used to support ideological and political goals or to serve special interests. ...
Sunday, June 19, 2016
The New Keynesian model is fairly pliable, and adding bells and whistles can help it to explain most of the data we see, at least after the fact. Does that mean we should be more confident in its ability to "embody any useful principle," or less?:
... A famous example of different pictures of reality is the model introduced around AD 150 by Ptolemy (ca. 85—ca. 165) to describe the motion of the celestial bodies. ... In Ptolemy’s model the earth stood still at the center and the planets and the stars moved around it in complicated orbits involving epicycles, like wheels on wheels. ...
It was not until 1543 that an alternative model was put forward by Copernicus... Copernicus, like Aristarchus some seventeen centuries earlier, described a world in which the sun was at rest and the planets revolved around it in circular orbits. ...
So which is real, the Ptolemaic or Copernican system? Although it is not uncommon for people to say that Copernicus proved Ptolemy wrong, that is not true..., our observations of the heavens can be explained by assuming either the earth or the sun to be at rest. Despite its role in philosophical debates over the nature of our universe, the real advantage of the Copernican system is simply that the equations of motion are much simpler in the frame of reference in which the sun is at rest.... Elegance ... is not something easily measured, but it is highly prized among scientists because laws of nature are meant to economically compress a number of particular cases into one simple formula. Elegance refers to the form of a theory, but it is closely related to a lack of adjustable elements, since a theory jammed with fudge factors is not very elegant. To paraphrase Einstein, a theory should be as simple as possible, but not simpler. Ptolemy added epicycles to the circular orbits of the heavenly bodies in order that his model might accurately describe their motion. The model could have been made more accurate by adding epicycles to the epicycles, or even epicycles to those. Though added complexity could make the model more accurate, scientists view a model that is contorted to match a specific set of observations as unsatisfying, more of a catalog of data than a theory likely to embody any useful principle. ...
[S]cientists are always impressed when new and stunning predictions prove correct. On the other hand, when a model is found lacking, a common reaction is to say the experiment was wrong. If that doesn’t prove to be the case, people still often don’t abandon the model but instead attempt to save it through modifications. Although physicists are indeed tenacious in their attempts to rescue theories they admire, the tendency to modify a theory fades to the degree that the alterations become artificial or cumbersome, and therefore “inelegant.” If the modifications needed to accommodate new observations become too baroque, it signals the need for a new model. ...
[Hawking, Stephen; Mlodinow, Leonard (2010-09-07). The Grand Design. Random House, Inc.. Kindle Edition.]
Saturday, June 11, 2016
Macroeconomics, Fantasy, Reality, and Intellectual Utility…: A very nice overview piece this morning from smart young whippersnapper Noah Smith:
Noah Smith: Economics Struggles to Cope With Reality: "Four different activities... go by the name of macroeconomics... But they actually have relatively little to do with each other.... Coffee-house macro.... Finance macro.... Academic macro.... Fed macro....
However, I think he has picked the wrong four. ...
Thursday, June 02, 2016
I am going to have to redo the videos and other materials for my online macroeconomics course that uses this text:
How to Teach Intermediate Macroeconomics after the Crisis?, by Olivier Blanchard: Having just concluded a seven-year run as chief economist of the International Monetary Fund, and having to rewrite the seventh edition of my undergraduate macroeconomics book, I had to confront the issue: How should we teach macroeconomics to undergraduates after the crisis? Here are some of my conclusions (I shall focus here on the short and medium runs; it will take another blog to discuss how we should teach growth theory).
The Investment-Saving (IS) Relation: The IS relation remains the key to understanding short-run movements in output. In the short run, the demand for goods determines the level of output. A desire by people to save more leads to a decrease in demand and, in turn, a decrease in output. Except in exceptional circumstances, the same is true of fiscal consolidation.
I was struck by how many times during the crisis I had to explain the “paradox of saving” and fight the Hoover-German line, “Reduce your budget deficit, keep your house in order, and don’t worry, the economy will be in good shape.” Anybody who argues along these lines must explain how it is consistent with the IS relation.
The demand for goods, in turn, depends on the rate at which people and firms can borrow (not the policy rate set by the central bank, more on this below) and on expectations of the future. John Maynard Keynes rightly insisted on the role of animal spirits. Uncertainty, pessimism, justified or not, decrease demand and can be largely self-fulfilling. Worries about future prospects feed back to decisions today. Such worries are probably the source of our slow recovery.
The Liquidity Preference/Money Supply (LM) Relation: The LM relation, in its traditional formulation, is the relic of a time when central banks focused on the money supply rather than the interest rate. ... The reality is now different. Central banks think of the policy rate as their main instrument and adjust the money supply to achieve it. Thus, the LM equation must be replaced, quite simply, by the choice of the policy rate by the central bank, subject to the zero lower bound. ... This change had already taken place in the new Keynesian models; it should make its way into undergraduate texts.
Integrating the Financial System into Macro Models: If anything, the crisis has shown the importance of the financial system for macroeconomics. Traditionally, the financial system was given short shrift in undergraduate macro texts. The same interest rate appeared in the IS and LM equations; in other words, people and firms were assumed to borrow at the policy rate set by the central bank. We have learned, dearly, that this is not the case and that things can go very wrong.
The teaching solution, in my view, is to introduce two interest rates, the policy rate set by the central bank in the LM equation and the rate at which people and firms can borrow, which enters the IS equation, and then to discuss how the financial system determines the spread between the two. I see this as the required extension of the traditional IS-LM model. A simple model of banks showing the role of capital, on the one hand, and the role of liquidity, on the other, can do the trick. Many of the issues that dominated the crisis, from losses and low capital to liquidity runs can be discussed and integrated into the IS-LM model. With this extension, one can show both the effects of shocks on the financial system and the way in which the financial system modifies the effects of other shocks on the economy.
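This two-rate extension is simple enough to write down directly. The sketch below (illustrative coefficients, not a calibrated model) puts the borrowing rate, policy rate plus spread, into a linear IS relation:

```python
# Two-rate IS sketch: households and firms borrow at the policy rate plus
# a spread determined by the health of the financial system. A financial
# shock that widens the spread lowers output even with the policy rate
# unchanged. Coefficient values are illustrative assumptions.

def output(policy_rate, spread, A=100.0, b=2.0):
    """IS relation evaluated at the borrowing rate, not the policy rate."""
    borrowing_rate = policy_rate + spread
    return A - b * borrowing_rate

Y_normal = output(policy_rate=2.0, spread=1.0)   # 94.0
Y_crisis = output(policy_rate=2.0, spread=5.0)   # widened spread: 86.0
# The central bank can offset part of the spread by cutting the policy
# rate, but only down to its lower bound:
Y_offset = output(policy_rate=0.0, spread=5.0)
```

A financial shock thus acts like a contractionary shift in the IS relation, and the model also shows how the financial system modifies the transmission of the policy rate to the economy.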
(Getting Rid of) Aggregate Demand–Aggregate Supply Turning to the supply side, the contraption known as the aggregate demand–aggregate supply model should be eliminated. It is clunky and, for good reasons, undergraduates find it difficult to understand. Its main point is to show how output naturally returns to potential with no change in policy, through a mechanism that appears marginally relevant in practice..
These difficulties are avoided if one simply uses a Phillips Curve (PC) relation to characterize the supply side. ...
Again, this way of discussing the supply side is already standard in more advanced presentations and the new Keynesian model (although the Calvo specification used in that model, as elegant as it is, is arbitrarily constraining and does not do justice to the facts). It is time to integrate it into the undergraduate model.
The IS-LM-Phillips Curve Model Put together, these modified IS, LM, and PC relations can do a good job of explaining recent and current events. For example, financial dislocations lead to a large spread between the borrowing and policy rates. The zero lower bound (or as we have learned, the slightly negative lower bound) prevents the central bank from decreasing the policy rate by enough to maintain demand. Output falls. Inflation decreases, potentially to the point where it turns into deflation, increasing real interest rates, and making it even harder to return to potential output.
One can go much further. ...
Macroeconomics is a tremendously exciting subject. Most of what we taught before the crisis remains highly relevant. But it needs some dusting and updating. My hope is that a model along the lines above can contribute to it.
Rethinking Macro Policy: Progress or Confusion?: On April 15 and 16, 2015, the IMF hosted the third conference on “Rethinking Macroeconomic Policy.” I had initially chosen as the title and subtitle “Rethinking Macroeconomic Policy III. Down in the Trenches.” I thought of the first conference in 2011 as having identified the main failings of previous policies, the second conference in 2013 as having identified general directions, and this conference as a progress report. My subtitle was rejected by one of the co-organizers, Larry Summers. He argued that I was far too optimistic, that we were nowhere close to knowing where were going. Arguing with Larry is tough, so I chose an agnostic title and shifted to “Rethinking Macro Policy III: Progress or Confusion?” Where do I think we are today? I think both Larry and I are right. I do not say this for diplomatic reasons. We are indeed proceeding in the trenches. But where the trenches will eventually lead remains unclear. This is the theme I shall develop in these concluding remarks, focusing on macroprudential tools, monetary policy, and fiscal policy. ...
Brad DeLong responds: On this one--views of fiscal policy--put me down not for progress but for "confusion for $2000", Alex, for on this one I think the very sharp Olivier Blanchard has got it wrong. ...
Wednesday, April 06, 2016
“Networks and the Macroeconomy: An Empirical Exploration,” by Daron Acemoglu, Ufuk Akcigit, and William Kerr (this will be published in the NBER Macroeconomics Annual):
How Network Effects Hurt Economies, by Peter Dizikes, MIT News Office: When large-scale economic struggles hit a region, a country, or even a continent, the explanations tend to be big in nature as well.
Macroeconomists — who study large economic phenomena — often look for sweeping explanations of what has gone wrong, such as declines in productivity, consumer demand, or investor confidence, or significant changes in monetary policy.
But what if large-scale economic slumps can be traced to declines in relatively narrow industrial sectors? A newly published study co-authored by an MIT economist provides evidence that economic problems may often have smaller points of origin and then spread as part of a network effect.
“Relatively small shocks can become magnified and then become shocks you have to contend with [on a large scale],” says MIT economist Daron Acemoglu, one of the authors of a paper detailing the research.
The findings run counter to “real business cycle theory,” which became popular in the 1970s and holds that smaller, industry-specific effects tend to get swamped by larger, economy-wide trends.
More precisely, Acemoglu and his colleagues have found cases where industry-specific problems lead to six-fold declines in production across the U.S. economy as a whole. For example, for every dollar of value-added growth lost in the manufacturing industries because of competition from China, six dollars of value-added growth were lost in the U.S. economy as a whole.
The researchers also examined four different types of economic shocks to the U.S. economy that occurred over the years 1991-2009, and quantified the extent to which those problems spread “upstream” or “downstream” of the central industry in question — that is, whether the network effects more strongly hurt industrial suppliers or businesses that sell products and provide services to consumers.
All told, the researchers state in the paper, “Our results suggest that the transmission of various different types of shocks through economic networks and industry interlinkages could have first-order implications for the macroeconomy.” ...
Upstream or downstream
Acemoglu, Afcigit, and Kerr used manufacturing data from the National Bureau of Economic Analysis, and industry-specific data from the Bureau of Economic Analysis, to examine four economic shocks hitting the U.S. economy during that 1991-2009 period. Those were: the impact of export competition on U.S. manufacturing; changes in federal government spending, which affect areas such as defense manufacturing; changes in Total Factor Productivity; and variation in levels of patents coming from foreign industry.
As noted, the network effect of manufacturing competition with China made the overall economic shock about six times as great as it was to manufacturing alone. (This research built on previously published work by economists David Autor of MIT, David Dorn of the University of Zurich, and Gordon Hanson of the University of California at San Diego, sometimes in collaboration with Acemoglu and MIT graduate student Brendan Price.)
In studying changes in the levels of federal spending after 1992, the researchers found a network effect about three to five times as large as that on directly-affected firms alone.
The decline in Total Factor Productivity constituted a smaller economic shock but one with a larger network effect, of more than 15 times the initial impact. In the case of increased foreign patenting (another way of looking at corporate productivity), the researchers found a network effect similar to that of Total Factor Productivity.
The first two of these areas constitute demand-side shocks, affecting consumer demand for the products in question. The last two are supply-side shocks, affecting firms’ ability to be good at what they do.
One of the key findings of the study, which confirms and builds on existing theory, is that demand-side shocks spread almost exclusively “upstream” in economic networks, and supply-side shocks spread almost exclusively “downstream.” To see why, Acemoglu suggests, consider an auto manufacturer, which has parts suppliers upstream and is linked with auto dealers, repair shops, and other businesses downstream.
When auto demand drops, “It’s the suppliers [upstream] that get affected,” Acemoglu explains. “You’re going to cut the production of autos, and you buy less of each of the inputs,” or supplies.
Now suppose the supply changes, perhaps due to an increase in manufacturing efficiency, which makes cars cheaper. In that case, Acemoglu adds, “People who use auto as inputs will buy more of them” — picture a delivery company — “so that shock will get transmitted to the downstream industries.”
To be sure, it is widely understood that the auto industry, like almost every other industry, is situated within a larger economic network. Yet estimating the spillover effects of struggles within any given industry, in the quantitative form of the current study, is rarely done.
“Given the importance of this, it’s surprising how scant the evidence is,” Acemoglu says. ...
This could have policy implications: Proponents of government investment, such as the so-called stimulus bill of 2009, the American Recovery and Reinvestment Act, have contended that government spending creates a “multiplier effect” in terms of growth. Opponents of such legislation sometimes assert that government spending crowds out private investment and thus does not generate more growth than would otherwise occur. In theory, a more granular understanding of these network effects could help describe and define what a multiplier effect is, and in which industrial areas it may be the most pronounced. ...
Saturday, March 26, 2016
Reflections on Macroeconomics Then and Now: I am grateful to the National Association for Business Economics (NABE) for conferring the fourth annual NABE Paul A. Volcker Lifetime Achievement Award for Economic Policy on me, thereby allowing me the honor of following in the footsteps of Paul Volcker, Jean-Claude Trichet, and Alice Rivlin.1 The honor of receiving the award is enhanced by its bearing the name of Paul Volcker, a model citizen and public servant, and a giant in every sense among central bankers.
One thinks of many things on an occasion such as this one. My mind goes back first to growing up in a very small town in Zambia, then Northern Rhodesia, and to the surprise and delight my parents would have felt at seeing me standing where I am now. They would have been even more delighted that my girlfriend, Rhoda, whom I met when my parents moved to a bigger town in Zimbabwe, and I have been happily married for 50 years. But that is not the story I will tell today. Rather, I want to talk about our field, macroeconomics, and some of the lessons we have learned in the course of the last 55 years--and I say 55 years, because in 1961, at the end of my school years, on the advice of a friend, I read Keynes's General Theory for the first time.
Did I understand it? Certainly not. Was I captivated by it? Certainly, though "captured" is a more appropriate word than "captivated." Does it remain relevant? Certainly. Just a week ago I took it off the bookshelf to read parts of chapter 23, "Notes on Mercantilism, the Usury Laws, Stamped Money and Theories of Under-Consumption." Today that chapter would be headed "Protectionism, the Zero Lower Bound, and Secular Stagnation," with the importance of usury laws having diminished since 1936.
There is an old joke about our field--not the one about the one-handed economist, nor the one about "assume you have a can opener," nor the one that ends, "If I were you, I wouldn't start from here." Rather it's the one about the Ph.D. economist who returns to his university for his class's 50th reunion. He asks if he can see the most recent Ph.D. generals exam. After a while it is brought to him. He reads it carefully, looking perplexed, and then says, "But this is exactly the same as the exam I wrote over 50 years ago." "Ah yes," says the professor. "It is the same, but all the answers are different."
Is that really the case? Not really, though it is true to some extent in the realm of policy. To discuss the question of whether the answers to the questions of how to deal with macroeconomic policy problems have changed markedly over the past half-century or so, I will start by briefly sketching the structure of a basic macro model. The building blocks of this model are similar to those used in many macro models, including FRB/US, the Fed staff's large-scale model, and a variety of DSGE (dynamic stochastic general equilibrium) models used at the Fed and other central banks and by academic researchers.
The structure of the model starts with the standard textbook equation for aggregate demand for domestically produced goods, namely:2
- AD = C + I + G + NX;
- Next is the wage-price block, which is based on a wage or price Phillips curve. Okun's law is included to make the transition between output and employment;
- Monetary policy is described by a money supply or interest rate rule;
- The credit markets and financial intermediation are built off links between the policy interest rate and the rates of return on, and/or demand and supply functions for, other assets;
- The balance of payments and the exchange rate enter through the balance of payments identity, namely that the current account surplus must be equal to the capital account deficit, corrected for official intervention;
- Dynamics of stocks: There are dynamic equations for the capital stock, the stock of government debt, and the external debt.
When I was an undergraduate at the London School of Economics (LSE) between 1962 and 1965, we learned the IS-LM model, which combined the aggregate demand equation (1) with the money market equilibrium condition set out in (3). That was the basic understanding of the Keynesian model as crystallized by John Hicks, Franco Modigliani, and others, in which it was easy to add detail to the demand functions for private-sector consumption, C; for investment, I; for government spending, G; and for net exports. The Keynesian emphasis on aggregate demand and its determinants is one of the basic innovations of the Keynesian revolution, and one that makes it far easier to understand and explain what factors are determining output and employment.
Continuing down the list, on price and wage dynamics, the Phillips curve has flattened somewhat since the 1950s and 1960s.3 Further, the role of expectations of inflation in the Phillips curve has been developed far beyond what was understood when A.W. Phillips--who was a New Zealander, an LSE faculty member, and a statistician and former engineer--discovered what later became the Phillips curve. The difference between the short- and long-run Phillips curves, which is now a staple of textbooks, was developed by Milton Friedman and Edmund Phelps, and the effect of making expectations rational or model consistent was emphasized by Robert Lucas, whose islands model provided an imperfect information reason for a nonvertical short-run Phillips curve. In Okun's law, the Okun coefficient--the coefficient specifying how much a change in the unemployment rate affects output--appears to have declined over time. So has the trend rate of productivity growth, which is a critical determinant of future levels of per capita income.
In (3), the monetary equilibrium condition, the monetary policy decision was typically represented by the money stock at the LSE and perhaps also at the Massachusetts Institute of Technology (MIT) after the Keynesian revolution (after all, "L" represents the liquidity preference function and "M" the supply of money); now the money supply rule is replaced by an interest-rate setting rule, for instance a reaction function of some form, or by a calculated "optimal" policy based on a loss function.
The development of the flexible inflation-targeting approach to monetary policy is one of the major achievements of modern macroeconomics. Flexible inflation targeting allows for flexibility in the speed with which the monetary authority plans on returning to the target inflation rate, and is thereby close to the dual mandate that the law assigns to the Fed.
A great deal of progress has been made in developing the credit and financial intermediation block. As early as the 1960s, each of James Tobin, Milton Friedman, and Karl Brunner and Alan Meltzer wrote out models with more fully explicated financial sectors, based on demand functions for assets other than money. Later the demand functions were often replaced by pricing equations derived from the capital asset pricing model. Researchers at the Fed have been bold enough to add estimated term and risk premiums to the determination of the returns on some assets.4 They have concluded, inter alia, that the arguments we used to make about how easy it would be to measure expected inflation if the government would introduce inflation-indexed bonds failed to take into account that returns on bonds are affected by liquidity and risk premiums. This means that one of the major benefits that were expected from the introduction of inflation-indexed bonds (Treasury Inflation-Protected Securities, generally called TIPS), namely that they would provide a quick and reliable measure of inflation expectations, has not been borne out, and that we still have to struggle to get reasonable estimates of expected inflation.
As students, we included NX, net exports, in the aggregate demand equation, but we did not generally solve for the exchange rate, possibly because the exchange rate was typically fixed. Later, in 1976, Rudi Dornbusch inaugurated modern international macroeconomics--and here I'm quoting from a speech by Ken Rogoff--in his famous overshooting model.5 As globalization of both goods and asset markets intensified over the next 40 years, the international aspects of trade in goods and assets occupied an increasingly important role in the economies of virtually all countries, not least the United States, and in macroeconomics.
At the LSE, we took a course on the British economy from Frank Paish, whose lectures consisted of a series of charts, accompanied by narrative from the professor. He made a strong impression on me in a lecture in 1963, in which he said, "You see, it (the balance of payments deficit) goes up and it goes down, and it is clear that we are moving toward a balance of payments crisis in 1964." I waited and I watched, and the crisis appeared on schedule, as predicted. But Paish also warned us that forecasting was difficult, and gave us the advice "Never look back at your forecasts--you may lose your nerve." I pass that wisdom on to those of you who need it.
I remember also my excitement at being told by a friend in a more senior class about the existence of econometric models of the entire economy. It was a wonderful moment. I understood that economic policy would from then on be easy: All that was necessary was to feed the data into the model and work out at what level to set the policy parameters. Unfortunately, it hasn't worked out that way. On the use of econometric models, I think often of something Paul Samuelson once said: "I'd rather have Bob Solow's views than the predictions of a model. But I'd rather have Solow with a model than without one."
We learned a lot at the LSE. But wonderful as it was to be in London, and to meet people from all over the world for the first time, and to be able to travel to Europe and even to the Soviet Union with a student group, and to ski for the first time in my life in Austria, it gradually became clear to me that the center of the academic economics profession was not in London or Oxford or Cambridge, but in the United States.
There was then the delicate business of applying to graduate school. There was a strong Chicago tendency among many of the lecturers at the LSE, but I wanted to go to MIT. When asked why, I gave a simple answer: "Samuelson and Solow." Fortunately, I got into MIT and had the opportunity of getting to know Samuelson and Solow and other great professors. And I also met the many outstanding students who were there at the time, among them Robert Merton. I took courses from Samuelson and Solow and other MIT stars, and I wrote my thesis under the guidance of Paul Samuelson and Frank Fisher. From there, my first job was at the University of Chicago--and I understood that I was very lucky to have been able to learn from the great economists at both MIT and Chicago. Among the many things I learned at Chicago was a Milton Friedman saying: "Man may not be rational, but he's a great rationalizer," which is a quote that often comes to mind when listening to stock market analysts.
After four years at Chicago, I returned to the MIT Department of Economics, and thought that I would never leave--even more so when MIT succeeded in persuading Rudi Dornbusch, whom I had met when he was a student at Chicago, to move to MIT--thus giving him too the benefit of having learned his economics at both Chicago and MIT, and giving MIT the pleasure and benefit of having added a superb economist and human being to the collection of such people already present.
MIT was still heavily involved in developing growth theory at the time I was a Ph.D. student there, from 1966 to 1969. We students were made aware of Kaldor's stylized facts about the process of growth, presented in his 1957 article "A Model of Economic Growth." They were:
- The shares of national income received by labor and capital are roughly constant over long periods of time.
- The rate of growth of the capital stock per worker is roughly constant over long periods of time.
- The rate of growth of output per worker is roughly constant over long periods of time.
- The capital/output ratio is roughly constant over long periods of time.
- The rate of return on investment is roughly constant over long periods of time.
- The real wage grows over time.
Well, that was then, and many of the problems we face in our economy now relate to the changes in the stylized facts about the behavior of the economy: Every one of Kaldor's stylized facts is no longer true, and unfortunately the changes are mostly in a direction that complicates the formulation of economic policy.6
While the basic approach outlined so far remains valid, and can be used to address many macroeconomic policy issues, I would like briefly to take up several topics in more detail. Some of them are issues that have remained central to the macroeconomic agenda over the past 50 years, some have to my regret fallen off the agenda, and others are new to the agenda.
- Inflation and unemployment: Estimated Phillips curves appear to be flatter than they were estimated to be many years ago--in terms of the textbooks, Phillips curves appear to be closer to what used to be called the Keynesian case (flat Phillips curve) than to the classical case (vertical Phillips curve). Since the U.S. economy is now below our 2 percent inflation target, and since unemployment is in the vicinity of full employment, it is sometimes argued that the link between unemployment and inflation must have been broken. I don't believe that. Rather the link has never been very strong, but it exists, and we may well at present be seeing the first stirrings of an increase in the inflation rate--something that we would like to happen.
- Productivity and growth: The rate of productivity growth in the United States and in much of the world has fallen dramatically in the past 20 years. The table shows calculated rates of annual productivity growth for the United States over three periods: 1952 to 1973; 1974 to 2007; and the most recent period, 2008 to 2015. After having been 3 percent and 2.1 percent in the first two periods, the annual rate of productivity growth has fallen to 1.2 percent in the period since the start of the global financial crisis.
The right guide to thinking in this case is given by a famous Herbert Stein line: "The difference between a growth rate of 1 percent and 2 percent is 100 percent." Why? Productivity growth is a major determinant of long-term growth. At a 1 percent growth rate, it takes income 70 years to double. At a 2 percent growth rate, it takes 35 years to double. That is to say, that with a growth rate of 1 percent per capita, it takes two generations for per capita income to double; at a 2 percent per capita growth rate, it takes one generation for per capita income to double. That is a massive difference, one that would very likely have severe consequences for the national mood, and possibly for economic policy. That is to say, there are few issues more important for the future of our economy, and those of every other country, than the rate of productivity growth.
At this stage, we simply do not know what will happen to productivity growth. Robert Gordon of Northwestern University has just published an extremely interesting and pessimistic book that argues we will have to accept the fact that productivity will not grow in future at anything like the rates of the period before 1973. Others look around and see impressive changes in technology and cannot believe that productivity growth will not move back closer to the higher levels of yesteryear.7 A great deal of work is taking place to evaluate the data, but so far there is little evidence that data difficulties account for a significant part of the decline in productivity growth as calculated by the Bureau of Labor Statistics.8
- The ZLB and the effectiveness of monetary policy: From December 2008 to December 2015, the federal funds rate target set by the Fed was a range of 0 to 1/4 percent, a range of rates that was described as the ZLB (zero lower bound).9 Between December 2008 and December 2014, the Fed engaged in QE--quantitative easing--through a variety of programs. Empirical work done at the Fed and elsewhere suggests that QE worked in the sense that it reduced interest rates other than the federal funds rate, and particularly seems to have succeeded in driving down longer-term rates, which are the rates most relevant to spending decisions.
Critics have argued that QE has gradually become less effective over the years, and should no longer be used. It is extremely difficult to appraise the effectiveness of a program all of whose parameters have been announced at the beginning of the program. But I regard it as significant with respect to the effectiveness of QE that the taper tantrum in 2013, apparently caused by a belief that the Fed was going to wind down its purchases sooner than expected, had a major effect on interest rates.
More recently, critics have argued that QE, together with negative interest rates, is no longer effective in either Japan or in the euro zone. That case has not yet been empirically established, and I believe that central banks still have the capacity through QE and other measures to run expansionary monetary policies, even at the zero lower bound.
- The monetary-fiscal policy mix: There was once a great deal of work on the optimal monetary-fiscal policy mix. The topic was interesting and the analysis persuasive. Nonetheless the subject seems to be disappearing from the public dialogue; perhaps in ascendance is the notion that--except in extremis, as in 2009--activist fiscal policy should not be used at all. Certainly, it is easier for a central bank to change its policies than for a Treasury or Finance Ministry to do so, but it remains a pity that the fiscal lever seems to have been disabled.
- The financial sector: Carmen Reinhart and Ken Rogoff's book, This Time Is Different, must have been written largely before the start of the great financial crisis. I find their evidence that a recession accompanied by a financial crisis is likely to be much more serious than an ordinary recession persuasive, but the point remains contentious. Even in the case of the Great Recession, it is possible that the U.S. recession got a second wind when the euro-zone crisis worsened in 2011. But no one should forget the immensity of the financial crisis that the U.S. economy and the world went through following the bankruptcy of Lehman Brothers--and no one should forget that such things could happen again.
The subsequent tightening of the financial regulatory system under the Dodd-Frank Act was essential, and the complaints about excessive regulation and excessive demands for banks to hold capital betray at best a very short memory. We, the official sector and particularly the regulatory authorities, do have an obligation to try to minimize the regulatory and other burdens placed on the private sector by the official sector--but we have a no less important obligation to try to prevent another financial crisis. And we should also remember that the shadow banking system played an important role in the propagation of the financial crisis, and endeavor to reduce the riskiness of that system.
- The economy and the price of oil: For some time, at least since the United States became an oil importer, it has been believed that a low price of oil is good for the economy. So when the price of oil began its descent below $100 a barrel, we kept looking for an oil-price-cut dividend. But that dividend has been hard to discern in the macroeconomic data. Part of the reason is that as a result of the rapid expansion of the production of oil from shale, total U.S. oil production had risen rapidly, and so a larger part of the economy was adversely affected by the decline in the price of oil. Another part is that investment in the equipment and structures needed for shale oil production had become an important component of aggregate U.S. investment, and that component began a rapid decline. For these reasons, although the United States has remained an oil importer, the decrease in the world price of oil had a mixed effect on U.S. gross domestic product. There is reason to believe that when the price of oil stabilizes, and U.S. shale oil production reaches its new equilibrium, the overall effect of the decline in the price of oil will be seen to have had a positive effect on aggregate demand in the United States, since lower energy prices are providing a noticeable boost to the real incomes of households.
- Secular stagnation: During World War II in the United States, many economists feared that at the end of the war, the economy would return to high pre-war levels of unemployment--because with the end of the war, demobilization, and the massive reduction that would take place in the defense budget, there would not be enough demand to maintain full employment.
Thus was born or renewed the concept of secular stagnation--the view that the economy could find itself permanently in a situation of low demand, less than full employment, and low growth.10 That is not what happened after World War II, and the thought of secular stagnation was correspondingly laid aside, in part because of the growing confidence that intelligent economic policies--fiscal and monetary--could be relied on to help keep the economy at full employment with a reasonable growth rate.
Recently, Larry Summers has forcefully restated the secular stagnation hypothesis, and argued that it accounts for the current slowness of economic growth in the United States and the rest of the industrialized world. The theoretical case for secular stagnation in the sense of a shortage of demand is tied to the question of the level of the interest rate that would be needed to generate a situation of full employment. If the equilibrium interest rate is negative, or very small, the economy is likely to find itself growing slowly, and frequently encountering the zero lower bound on the interest rate.
Research has shown a declining trend in estimates of the equilibrium interest rate. That finding has become more firmly established since the start of the Great Recession and the global financial crisis.11 Moreover, the level of the equilibrium interest rate seems likely to rise only gradually to a longer-run level that would still be quite low by historical standards.
What factors determine the equilibrium interest rate? Fundamentally, the balance of saving and investment demands. Several trends have been cited as possible factors contributing to a decline in the long-run equilibrium real rate. One likely factor is persistent weakness in aggregate demand. Among the many reasons for that, as Larry Summers has noted, is that the amount of physical capital that the revolutionary information technology firms with high stock market valuations have needed is remarkably small. The slowdown of productivity growth, which as already mentioned has been a prominent and deeply concerning feature of the past six years, is another important factor.12 Others have pointed to demographic trends resulting in there being a larger share of the population in age cohorts with high saving rates.13 Some have also pointed to high saving rates in many emerging market countries, coupled with a lack of suitable domestic investment opportunities in those countries, as putting downward pressure on rates in advanced economies--the global savings glut hypothesis advanced by Ben Bernanke and others at the Fed about a decade ago.14
Whatever the cause, other things being equal, a lower level of the long-run equilibrium real rate suggests that the frequency and duration of future episodes in which monetary policy is constrained by the ZLB will be higher than in the past. Prior to the crisis, some research suggested that such episodes were likely to be relatively infrequent and generally short lived.15 The past several years certainly require us to reconsider that basic assumption. Moreover, recent experience in the United States and other countries has taught us that conducting monetary policy at the effective lower bound is challenging.16 And while unconventional policy tools such as forward guidance and asset purchases have been extremely helpful and effective, all central banks would prefer a situation with positive interest rates, restoring their ability to use the more traditional interest rate tool of monetary policy.17
The answer to the question "Will the equilibrium interest rate remain at today's low levels permanently?" is also that we do not know. Many of the factors that determine the equilibrium interest rate, particularly productivity growth, are extremely difficult to forecast. At present, it looks likely that the equilibrium interest rate will remain low for the policy-relevant future, but there have in the past been both long swings and short-term changes in what can be thought of as equilibrium real rates.
Eventually, history will give us the answer. But it is critical to emphasize that history's answer will depend also on future policies, monetary and other, notably including fiscal policy.
Well, are the answers all different than they were 50 years ago? No. The basic framework we learned a half-century ago remains extremely useful. But also yes: some of the answers are different, because the problems they deal with were not evident fifty years ago and so were not on previous exams. So the advice to potential policymakers is simple: Learn as much as you can, for most of it will come in useful at some stage of your career; but never forget that identifying what is happening in the economy is essential to your ability to do your job, and for that you need to keep your eyes, your ears, and your mind open, and with regard to your mouth--to use it with caution.
Many thanks again for this award and this opportunity to speak with you.
Bernanke, Ben S. (2005). "The Global Saving Glut and the U.S. Current Account Deficit," speech delivered at the Homer Jones Lecture, St. Louis, April 14.
Blanchard, Olivier (2014). "Where Danger Lurks: The Recent Financial Crisis Has Taught Us to Pay Attention to Dark Corners, Where the Economy Can Malfunction Badly," Finance and Development, vol. 51 (September), pp. 28-31.
-------- (2016). "The U.S. Phillips Curve: Back to the 60s?" Policy Brief 16-1. Washington: Peterson Institute for International Economics, January.
Blanchard, Olivier, Eugenio Cerutti, and Lawrence Summers (2015). "Inflation and Activity--Two Explorations and Their Monetary Policy Implications," IMF Working Paper WP/15/230. Washington: International Monetary Fund, November.
Blanchard, Olivier, and John Simon (2001). "The Long and Large Decline in U.S. Output Volatility," Brookings Papers on Economic Activity, 1, pp. 135-74.
Brunner, Karl, and Allan H. Meltzer (1972). "Money, Debt, and Economic Activity," Journal of Political Economy, vol. 80 (September-October), pp.951-77.
Byrne, David M., John G. Fernald, and Marshall Reinsdorf (forthcoming). "Does the United States Have a Productivity Problem or a Measurement Problem?" Brookings Papers on Economic Activity.
Caballero, Ricardo J., Emmanuel Farhi, and Pierre-Olivier Gourinchas (2008). "An Equilibrium Model of 'Global Imbalances' and Low Interest Rates," American Economic Review, vol. 98 (1), pp. 358-93.
Daly, Mary C., John G. Fernald, Òscar Jordà, and Fernanda Nechio (2014). "Output and Unemployment Dynamics," Working Paper Series 2013-32. San Francisco: Federal Reserve Bank of San Francisco, November.
-------- (2014). "Interpreting Deviations from Okun's Law," FRBSF Economic Letter 2014-12. San Francisco: Federal Reserve Bank of San Francisco.
D'Amico, Stefania, Don H. Kim, and Min Wei (2014). "Tips from TIPS: The Informational Content of Treasury Inflation-Protected Security Prices," Finance and Economics Discussion Series 2014-24. Washington: Board of Governors of the Federal Reserve System, January.
Dornbusch, Rudiger (1976). "Expectations and Exchange Rate Dynamics," Journal of Political Economy, vol. 84 (December), pp. 1161-76.
Dornbusch, Rudiger, Stanley Fischer, and Richard Startz (2014). Macroeconomics, 12th ed. New York: McGraw-Hill Education.
Fischer, Stanley (forthcoming). "Monetary Policy, Financial Stability, and the Zero Lower Bound," American Economic Review (Papers and Proceedings).
Friedman, Milton (1968). "The Role of Monetary Policy," American Economic Review, vol. 58 (March), pp. 1-17.
Gordon, Robert J. (2014). "The Demise of U.S. Economic Growth: Restatement, Rebuttal, and Reflections," NBER Working Paper Series 19895. Cambridge, Mass.: National Bureau of Economic Research, February.
-------- (2016). The Rise and Fall of American Growth: The U.S. Standard of Living since the Civil War. Princeton, N.J.: Princeton University Press.
Hall, Robert E. (2014). "Quantifying the Lasting Harm to the U.S. Economy from the Financial Crisis," in Jonathan Parker and Michael Woodford, eds., NBER Macroeconomics Annual 2014, vol. 29. Chicago: University of Chicago Press.
Hamilton, James D., Ethan S. Harris, Jan Hatzius, and Kenneth D. West (2015). "The Equilibrium Real Funds Rate: Past, Present and Future," NBER Working Paper Series 21476. Cambridge, Mass.: National Bureau of Economic Research, August.
Hicks, John R. (1937). "Mr. Keynes and the 'Classics': A Suggested Interpretation," Econometrica, vol. 5 (April), pp. 147-59.
Johannsen, Benjamin K., and Elmar Mertens (2016). "The Expected Real Interest Rate in the Long Run: Time Series Evidence with the Effective Lower Bound," FEDS Notes. Washington: Board of Governors of the Federal Reserve System, February 9.
Jones, Charles I., and Paul M. Romer (2010). "The New Kaldor Facts: Ideas, Institutions, Population, and Human Capital," American Economic Journal: Macroeconomics, vol. 2 (January), pp. 224-45.
Kaldor, Nicholas (1957). "A Model of Economic Growth," Economic Journal, vol. 67 (December), pp. 591-624.
Keynes, John Maynard (1936). The General Theory of Employment, Interest and Money. London: Macmillan.
Kiley, Michael T. (2015). "What Can the Data Tell Us about the Equilibrium Real Interest Rate?" Finance and Economics Discussion Series 2015-077. Washington: Board of Governors of the Federal Reserve System, August.
Knotek, Edward S., II (2007). "How Useful Is Okun's Law?" Federal Reserve Bank of Kansas City, Economic Review, Fourth Quarter, pp. 73-103.
Laubach, Thomas, and John C. Williams (2003). "Measuring the Natural Rate of Interest," Review of Economics and Statistics, vol. 85 (November), pp. 1063-70.
Lucas, Robert E., Jr. (1972). "Expectations and the Neutrality of Money," Journal of Economic Theory, vol. 4 (April), pp. 103-24.
Mendoza, Enrique G., Vincenzo Quadrini, and José-Víctor Ríos-Rull (2009). "Financial Integration, Financial Development, and Global Imbalances," Journal of Political Economy, vol. 117 (3), pp. 371-416.
Modigliani, Franco (1944). "Liquidity Preference and the Theory of Interest and Money," Econometrica, vol. 12 (January), pp. 45-88.
Mokyr, Joel, Chris Vickers, and Nicolas L. Ziebarth (2015). "The History of Technological Anxiety and the Future of Economic Growth: Is This Time Different?" Journal of Economic Perspectives, vol. 29 (Summer), pp. 31-50.
Obstfeld, Maurice, and Kenneth Rogoff (1996). Foundations of International Macroeconomics. Cambridge, Mass.: MIT Press.
Okun, Arthur M. (1962). "Potential GNP: Its Measurement and Significance," Proceedings of the Business and Economics Statistics Section of the American Statistical Association, pp. 98-104.
Phelps, Edmund S. (1967). "Phillips Curves, Expectations of Inflation and Optimal Unemployment over Time," Economica, vol. 34 (August), pp. 254-81.
Reifschneider, David, and John C. Williams (2000). "Three Lessons for Monetary Policy in a Low-Inflation Era," Journal of Money, Credit, and Banking, vol. 32 (November), pp. 936-66.
Reinhart, Carmen M., and Kenneth S. Rogoff (2009). This Time Is Different: Eight Centuries of Financial Folly. Princeton, N.J.: Princeton University Press.
Rogoff, Kenneth (2001). "Dornbusch's Overshooting Model after Twenty-Five Years," speech delivered at the Mundell-Fleming Lecture, Second Annual Research Conference, International Monetary Fund, Washington, November 30 (revised January 22, 2002).
Solow, Robert M. (2004). "Introduction: The Tobin Approach to Monetary Economics," Journal of Money, Credit, and Banking, vol. 36 (August), pp. 657-63.
Stock, James H., and Mark W. Watson (2003). "Has the Business Cycle Changed and Why?" NBER Macroeconomics Annual 2002, vol. 17 (January).
Tobin, James (1969). "A General Equilibrium Approach to Monetary Theory," Journal of Money, Credit, and Banking, vol. 1 (February), pp. 15-29.
U.S. Executive Office of the President, Council of Economic Advisers (2015). Long-Term Interest Rates: A Survey. Washington: EOP.
Williams, John C. (2013). "A Defense of Moderation in Monetary Policy," Working Paper Series 2013-15. San Francisco: Federal Reserve Bank of San Francisco, July.
Woodford, Michael (2010). "Financial Intermediation and Macroeconomic Analysis," Journal of Economic Perspectives, vol. 24 (Fall), pp. 21-44.
1. I am grateful to David Lopez-Salido, Andrea Ajello, Elmar Mertens, Stacey Tevlin, and Bill English of the Federal Reserve Board for their assistance. Views expressed are mine, and are not necessarily those of the Federal Reserve Board or the Federal Open Market Committee.
2. A fuller description of the equations is contained in the appendix.
3. See Blanchard (2016).
4. See D'Amico, Kim, and Wei (2014).
5. See Dornbusch (1976) and Rogoff (2001).
6. See Jones and Romer (2010).
7. See, for instance, Mokyr, Vickers, and Ziebarth (2015).
8. See Byrne, Fernald, and Reinsdorf (forthcoming).
9. Inside the Fed, the range of 0 to 1/4 percent is generally called the ELB, the effective lower bound.
10. I am distinguishing in this section between secular stagnation as being caused by a deficiency of aggregate demand and another view, that output growth will be very slow in future because productivity growth will be very low. The view that future productivity growth will be very low has already been discussed, with the conclusion that we do not have a good basis for predictions of its future level, and that we simply do not know whether future productivity growth will be extremely low or higher than it has been recently. There is no shortage of views on this issue among economists, but the views to some extent appear to depend on whether the economist making the prediction is an optimist or a pessimist.
11. This research includes recent work by Johannsen and Mertens (2016) and Kiley (2015) that uses extensions of the original Laubach and Williams (2003) framework. An international perspective on medium-to-long-run real interest rates is provided by U.S. Executive Office of the President (2015). Reinhart and Rogoff (2009) and Hall (2014) discuss the long-lived effects of financial crises on economic performance. See also Hamilton and others (2015). I have, in addition, drawn on Fischer (forthcoming).
12. It is also a major factor explaining the phenomenon of the economy's impressive performance on the jobs front during a period of historically slow growth.
13. See, for instance, Gordon (2014, 2016).
14. See Bernanke (2005). See also the recent work by Caballero, Farhi, and Gourinchas (2008); and Mendoza, Quadrini, and Ríos-Rull (2009).
15. See, for instance, Reifschneider and Williams (2000), Blanchard and Simon (2001), and Stock and Watson (2003).
16. For a discussion of various issues reviewed by the Federal Open Market Committee in late 2008 and 2009 regarding the complications of unconventional monetary policy at the ZLB, see the set of staff memos on the Board's website.
17. See Williams (2013).
Tuesday, March 22, 2016
MMT and mainstream macro: There were a lot of interesting and useful comments on my last post on MMT, plus helpful (for me) follow-up conversations. Many thanks to everyone concerned for taking the time. Before I say anything more let me make it clear where I am coming from. I’m on the same page as far as policy’s current obsession with debt is concerned. Where I seem to differ from some who comment on my blog, people who say they are following MMT, is whether you need to be concerned about debt when monetary policy is not constrained by the Zero Lower Bound. I say yes, they say no, but for reasons I could not easily understand.
This was the point of the ‘nothing new’ comment. It was not meant to be a put down. It was meant to suggest that a mainstream economist like myself could come to some of the same conclusions as MMT writers, and more to the point, just because I was a mainstream economist does not mean I misunderstood how government financing works. It was because I was getting comments from MMT followers that seemed nonsensical to me, but which should not have been nonsensical because the basics of MMT are understandable using mainstream theory. ...
What mainstream theory says is that some combination of monetary and fiscal policy can always end a recession caused by demand deficiency. Full stop: no ifs or buts. That is why we had fiscal expansion in 2009 in the US, UK, Germany, China and elsewhere. The contribution of some influential mainstream economists to this switch from fiscal stimulus to austerity in 2010 was minor at most, and to imagine otherwise does nobody any favours. The fact that policymakers went against basic macro theory tells us important things about the transmission mechanism of economic knowledge, which all economists have to address.
Yes, Expansionary Fiscal Policy in the North Atlantic Would Solve Many of Our Problems. Why Do You Ask?: ... In my view, the economics of Abba Lerner—what is now called MMT—is not always right: It is not always possible for the government to spend freely to attain full employment, use monetary policy to keep the debt under control, and rely on rising inflation as the only signal needed of whether and when policy needs to be tightened. Why not? Because it is possible that the bond market can get itself into an unsustainable position, in which underlying inflationary pressures are masked until it is too late to rebalance government finances without a financial crisis.
But, in my view, right now the economics of Abba Lerner is 100% correct. The U.S. (and Europe!) should use expansionary fiscal policy to rebalance the economy at full employment and potential output. And interest rates are so low that doing so does not require any additional monetary policy steps to keep the debt under control.
Japan, alas, confronts us with a difficult and much more devilish program of economic policy. Partial and nearly painless debt repudiation via inflation and financial repression seems to me to be the best way forward—if that can be attained. But more on that anon.
Tuesday, February 02, 2016
Not much out there to excerpt and blog, so I threw down a few thoughts for you to tear apart:
I hear frequently that economics needs to change, and it has, at least in the questions we ask. Twenty years ago, the dominant conversation in economics was about the wonder of markets. We needed to free the banking system from regulations so it could do its important job of turning saving into productive investment unfettered by government interference. Trade barriers needed to come down to make everyone better off. There was little need to worry about monopoly power: markets were contestable, and the problem would take care of itself. Unions simply got in the way of our innovative, dynamic economy and needed to be broken so the market could do its thing and make everyone better off. Inequality was a good thing: it created the right incentives for people to work hard and try to get ahead, and markets would ensure that everyone, from CEOs on down, would be paid according to their contribution to society. The problem wasn't that markets somehow distributed goods unfairly, or at least in a way at odds with marginal productivity theory; it was that some workers lacked the training to reap higher rewards. We simply needed to prepare people better to compete in modern, global markets; there was nothing fundamentally wrong with the markets themselves. The move toward market fundamentalism wasn't limited to Republicans; Democrats joined in too.
That view is changing. Inequality has burst onto the economics research scene. Is rising inequality an inevitable feature of capitalism? Does the system reward people fairly? Can inequality actually inhibit economic growth? Not so long ago, the profession ignored these questions. Similarly for the financial sector. The profession has moved from singing the praises of the financial system and its ability to channel savings into the most productive investments to asking whether the financial sector produces as much value for society as was claimed in the past. We now ask whether banks are too big and powerful, whereas in the past that size was praised as a sign of how super-sized banks can do super-sized things for the economy, and compete with banks around the world. We have gone from saying that the shadow banking system can self regulate as it provides important financial services to homeowners and businesses to asking what types of regulation would be best. Economists used to pretty much ignore the financial sector altogether. It was a black box that simply turned S (saving) into I (investment), and did so efficiently, and there was no need to get into the details. Our modern financial system couldn't crash like those antiquated systems that were around during and before the Great Depression. There was no need to include it in our macro models, at least not in any detail, or even ask questions about what might happen if there was a financial crisis.
There are other changes too. Economists now question whether markets reward labor according to changes in productivity. Why is it that wages have stagnated even as worker productivity has gone up? Is it because bargaining power is asymmetric in labor markets, with firms having the advantage? What's the best way to elevate the working class? In the past, an argument was made that the best way to help everyone is to cut taxes for the wealthy, and all the great things they would do with the extra money and the incentives that tax cuts bring would trickle down and help the working class. That didn't happen and although there are still echoes of this argument on the political right, the questions have certainly changed. Much of the current research agenda in economics is devoted to understanding why wage income has stagnated for most people, and how to fix it. We've moved beyond "technology is the problem and better education is the answer" to asking whether the market system itself, and the market failures that come with it (including political influence over policy), has something to do with this outcome.
Fiscal policy is another example of change within the profession. Twenty years ago, nobody, well hardly anyone, was doing research on the impact of fiscal policy and its use as a countercyclical policy instrument. All of the focus was on monetary policy. Fiscal policy would only be needed in a severe recession, and that wouldn't happen in our modern economy, and in any case it wouldn't work (not everyone believed fiscal policy was ineffective, but many did). That has changed. Fiscal policy is now an integral component of many modern DSGE models, and -- surprise -- the models do not tell us fiscal policy is ineffective. Quite the opposite, it works well in deep recessions (though near full employment its effectiveness wanes).
Monetary policy has also come under scrutiny. In the past, the Taylor rule was praised as responsible for the Great Moderation. We had discovered the key to a stable economy. But the Great Recession changed that. We now wonder if other policy rules might serve as a better guidepost (e.g. nominal GDP targeting), we ask about negative interest rates, unconventional policy, all sorts of questions that were hardly asked or even imagined not so long ago. We wonder about regulation of the financial sector, and how to do it correctly (in the past, it was about how to remove regulations correctly).
I don't mean to suggest that economics is now on the right track. The old guard is still there, and still influential. But it's hard to deny that the questions we are asking have gone through a considerable evolution since the onset of the recession, and when questions change, new models and new tools are developed to answer them. The models do not come first -- models aren't built in search of questions, models are built to answer questions -- and the fact that we are asking new (and in my view much better) questions is a sign of further change to come.
Saturday, January 30, 2016
Daron Acemoglu, Ufuk Akcigit, and William Kerr:
Networks and macroeconomic shocks, VoxEU: How shocks reverberate throughout the economy has been a central question in macroeconomics. This column suggests that input-output linkages can play an important role in this issue. Supply-side (productivity) shocks impact the industry itself and those consuming its goods, while a demand-side shock affects the industry and its suppliers. The authors also find that the initial impact of an industry shock can be substantially amplified due to input-output linkages.
How shocks propagate through the economy and contribute to fluctuations has been one of the central questions of macroeconomics. We argue that a major mechanism for such propagation is input-output linkages. Through input-output chains, shocks to one industry can influence ‘downstream’ industries that buy inputs from the affected industry, as well as ‘upstream’ industries that produce inputs for the affected industry. These interlinkages can propagate and potentially amplify the initial shock to further firms and industries not directly affected, influencing the macro economy to a much greater extent than the original shock could do on its own.
The significance of the idea that a shock to one firm or disaggregated industry could be a major contributor to economic fluctuations was downplayed in Lucas’ (1977) famous essay on business cycles. Lucas suggested that due to the law of large numbers, idiosyncratic shocks to individual firms should cancel each other out when considering the economy in the aggregate, and therefore the broader impact should not be substantial. Recent research, however, has questioned this perspective. For example, Gabaix (2011) shows that when the firm size distribution has very fat tails, the power of the law of large numbers is diminished and shocks to large firms can overwhelm parallel shocks to small firms, allowing such shocks to have a substantial impact on the economy at large. Acemoglu et al. (2012) show how microeconomic shocks can be at the root of macroeconomic fluctuations when the input-output structure of an economy exhibits sufficient asymmetry in the role of some disaggregated industries as (major) suppliers to others.
In Acemoglu et al. (2016), we empirically document the role of input-output linkages as a mechanism for the transmission of industry-level shocks to the rest of the economy. Our approach differs from previous research in two primary ways.
- First, whereas much prior work has focused on the medium-term implications of such network effects (e.g. over more than a decade), we emphasise the influence of these networks on short-term business cycles (e.g. over 1-3 years).
- Second, we begin to separate types of shocks to the economy and the differences in how they propagate.
We build a model that predicts that supply-side (e.g. productivity, innovation) shocks primarily propagate downstream, whereas demand-side shocks (e.g. trade, government spending) propagate upstream. For example, a productivity shock to the tire industry will tend to strongly affect the downstream automobile industry, while a shock to government spending in the car industry will reverberate upstream to the tire industry. We then demonstrate these findings empirically using four historical examples of industry-level shocks, two on the demand side and two on the supply side, and confirm the predictions of the model.
Model and prediction
We model an economy building on Long and Plosser (1983) and Acemoglu et al. (2012), in which each firm produces goods that are either consumed by other firms as inputs or sold in the final goods sector. The model predicts that supply-side (productivity) shocks impact the industry itself and those consuming its goods, while a demand-side shock affects the industry and its suppliers. The total impact of these shocks – taking into account that customers of customers will be also affected in response to supply-side shocks, and suppliers of suppliers will also be affected in response to demand-side shocks – is conveniently summarised by the Leontief inverse that played a central role in traditional input-output analysis.
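The Leontief inverse the authors mention is straightforward to compute. A minimal sketch with a made-up three-industry input-output matrix (the matrix entries and industry labels are illustrative, not from the paper):

```python
import numpy as np

# A hypothetical 3-industry input-output matrix:
# A[i, j] = dollars of industry i's output used per dollar of industry j's
# output, so column j lists industry j's input requirements.
A = np.array([
    [0.10, 0.30, 0.00],   # e.g. "tires" feed heavily into "autos"
    [0.05, 0.10, 0.20],   # "autos"
    [0.00, 0.10, 0.10],   # "steel"
])

# The Leontief inverse sums the direct effect plus every round of indirect
# input demand: L = I + A + A^2 + ... = (I - A)^{-1}.
L = np.linalg.inv(np.eye(3) - A)

# A demand-side shock propagates upstream: one extra dollar of final demand
# for industry 1 ("autos") requires the direct dollar plus input production
# by its suppliers, and their suppliers, and so on.
demand_shock = np.array([0.0, 1.0, 0.0])
total_output = L @ demand_shock
print(total_output)   # entry 1 exceeds 1; entries 0 and 2 are upstream effects
```

The own entry exceeds one dollar (the industry partly supplies itself through the chain), and the supplier industries move even though the shock never hit them directly, which is the amplification channel the column describes.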
The intuition behind the asymmetry in propagation for supply versus demand shocks relates to the Cobb-Douglas form of the production function and preferences. If productivity in a given industry is lowered by a shock, firms in that industry will produce fewer goods and the price of their goods will rise. Due to the Cobb-Douglas structure, these effects cancel each other out for upstream firms, leaving them unaffected, while downstream firms feel the increase in prices and consequently lower their overall production. On the other hand, if demand in a certain industry increases, firms in that industry increase production, necessitating a corresponding increase in input production by upstream firms. Because of constant returns to scale, however, the increased demand does not affect prices, and so downstream firms are not changed.
We also incorporate into the model geographic spillovers, showing that shocks in a particular industry will also influence industries that tend to be concentrated in the same area, as shown empirically by Autor et al. (2013) and Mian and Sufi (2014). The idea is that a shock to the first industry will influence local demand generally, and therefore will change demand, output, and employment for other local producers.
We test the model’s prediction by examining the implications of four shocks: changes in imports from China; changes in federal government spending; total factor productivity (TFP) shocks; and productivity shocks coming from foreign industry patents. The first two are demand-side shocks; the latter two affect the supply side. For each of these shocks, we show the effects on directly impacted industries as well as upstream and downstream effects. Our core industry-level data is taken from the NBER-CES Manufacturing Industry Database for the years 1991-2009, while input-output linkages were drawn from the Bureau of Economic Analysis’ 1992 Input-Output Matrix and the 1991 County Business Patterns Database.
For brevity we focus here on the first example, where changes in imports from China influence the demand in affected industries. Of course, rising import penetration in the US for a given industry could be endogenous and connected to other factors, such as sagging US productivity growth. We therefore instrument import penetration from China to the US with rising trade from China to eight non-US countries relative to the industry’s market size in the US, following Autor et al. (2013) and Acemoglu et al. (2015). Chinese imports to other countries can be taken as exogenous metrics of the rise of China in trade over the last two decades.
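The instrumenting step can be sketched as textbook two-stage least squares on synthetic data. Everything below (the data-generating process, the coefficient values, the variable names) is hypothetical, meant only to show why an instrument correlated with the regressor but not the confounder recovers the causal effect when ordinary least squares would not:

```python
import numpy as np

# Schematic 2SLS on synthetic data; an illustration of the identification
# strategy, not the paper's actual estimation.
rng = np.random.default_rng(0)
n = 500

# Instrument z: stands in for Chinese import penetration in non-US markets.
z = rng.normal(size=n)
# Endogenous regressor x: US import penetration, correlated with both the
# instrument and an unobserved industry shock u.
u = rng.normal(size=n)
x = 0.8 * z + 0.5 * u + rng.normal(size=n)
# Outcome y: industry value-added growth. The true causal effect of x is
# -0.5, but u also moves y, so OLS of y on x would be biased.
y = -0.5 * x + 0.7 * u + rng.normal(size=n)

def with_const(v):
    return np.column_stack([np.ones(len(v)), v])

# Stage 1: project the endogenous regressor on the instrument.
b1, *_ = np.linalg.lstsq(with_const(z), x, rcond=None)
x_hat = with_const(z) @ b1

# Stage 2: regress the outcome on the fitted values from stage 1.
b2, *_ = np.linalg.lstsq(with_const(x_hat), y, rcond=None)
print(b2[1])   # should be close to the true effect of -0.5
```

The fitted values from stage 1 contain only the instrument-driven variation in import penetration, so the stage 2 slope is purged of the confounding from the unobserved shock.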
The empirics confirm the predictions of our model. A one standard-deviation increase in imports from China reduces the affected industry’s value added growth by 3.4%, while a similar shock to consumers of that industry’s products leads to a 7.6% decline.
- In other words, the upstream effect is more than twice as large as the effect on the directly hit industry in a basic regression.
- Downstream effects, on the other hand, are of opposite sign and do not change in a statistically significant manner, confirming the model’s prediction.
Figure 1 shows the impulse response function when our framework is adjusted to allow for lags and measure multipliers. Again, a one standard-deviation shock to value added through trade produces network effects that are much greater than the own effects on the industry.
- We calculate that the effect of a shock to one industry on the entire economy is over six times as large as the effect on the industry itself, due to input-output linkages.
Similar effects are found for employment, and the findings are shown to be robust under many different specification checks.
Figure 1. Response to one SD value-add shock from Chinese imports
The other three shocks – changes in government spending, TFP shocks and foreign patenting shocks – also broadly support the model’s predictions, with the first leading to upstream effects and the latter two leading to downstream effects. Similarly, extensions quantify that geographical proximity facilitates the propagation of the shocks, particularly those on the demand side.
Shocks to particular industries can reverberate throughout the economy through networks of firms or industries that supply each other with inputs. Our work shows that these shocks are indeed powerfully transmitted through the input-output chain of the economy, and their initial impact can be substantially amplified. These findings open the way to a systematic investigation of the role of input-output linkages in underpinning rapid expansions and deep recessions, especially once we move away from simple, fully competitive models of the macro economy.
Acemoglu, D, U Akcigit, and W Kerr (2016), “Networks and the Macroeconomy: An Empirical Exploration”, NBER Macroeconomics Annual, forthcoming. NBER Working Paper 21344.
Acemoglu, D, V Carvalho, A Ozdaglar, and A Tahbaz-Salehi (2012), “The Network Origins of Aggregate Fluctuations”, Econometrica, 80:5, 1977-2016.
Acemoglu, D, D Autor, D Dorn, G Hanson, and B Price (2015), “Import Competition and the Great U.S. Employment Sag of the 2000s”, Journal of Labor Economics, 34(S1), S141-S198.
Autor, D, D Dorn, and G Hanson (2013), “The China Syndrome: Local Labor Market Effects of Import Competition in the United States”, American Economic Review, 103:6, 2121-2168.
Gabaix, X (2011), “The Granular Origins of Aggregate Fluctuations”, Econometrica, 79, 733-772.
Long, J and C Plosser (1983), “Real Business Cycles”, Journal of Political Economy, 91:1, 39-69.
Lucas, R (1977), “Understanding Business Cycles”, Carnegie Rochester Conference Series on Public Policy, 5, 7-29.
Mian, A and A Sufi (2014), “What Explains the 2007-2009 Drop in Employment?”, Econometrica, 82:6, 2197-2223.
Wednesday, January 13, 2016
Is mainstream academic macroeconomics eclectic?: For economists, and those interested in macroeconomics as a discipline
Eric Lonergan has a short little post that is well worth reading...; it makes an important point in a clear and simple way that cuts through a lot of the nonsense written on macroeconomics nowadays. The big models/schools of thought are not right or wrong, they are just more or less applicable to different situations. You need New Keynesian models in recessions, but Real Business Cycle models may describe some inflation-free booms. You need Minsky in a financial crisis, and in order to prevent the next one. As Dani Rodrik says, there are many models, and the key questions are about their applicability.
If we take that as given, the question I want to ask is whether current mainstream academic macroeconomics is also eclectic. ... My answer is yes and no.
Let’s take the five ‘schools’ that Eric talks about. ... Indeed the variety of models that academic macro currently uses is far wider than this.
Does this mean academic macroeconomics is fragmented into lots of cliques, some big and some small? Not really... This is because these models (unlike those of 40+ years ago) use a common language. ...
It means that the range of assumptions that models (DSGE models if you like) can make is huge. There is nothing formally that says every model must contain perfectly competitive labour markets where the simple marginal product theory of distribution holds, or even where there is no involuntary unemployment, as some heterodox economists sometimes assert. Most of the time individuals in these models are optimising, but I know of papers in the top journals that incorporate some non-optimising agents into DSGE models. So there is no reason in principle why behavioural economics could not be incorporated. If too many academic models do appear otherwise, I think this reflects the sociology of macroeconomics and the history of macroeconomic thought more than anything (see below).
It also means that the range of issues that models (DSGE models) can address is also huge. ...
The common theme of the work I have talked about so far is that it is microfounded. Models are built up from individual behaviour.
You may have noted that I have so far missed out one of Eric’s schools: Marxian theory. What Eric wants to point out here is clear in his first sentence. “Although economists are notorious for modelling individuals as self-interested, most macroeconomists ignore the likelihood that groups also act in their self-interest.” Here I think we do have to say that mainstream macro is not eclectic. Microfoundations is all about grounding macro behaviour in the aggregate of individual behaviour.
I have many posts where I argue that this non-eclecticism in terms of excluding non-microfounded work is deeply problematic. Not so much for an inability to handle Marxian theory (I plead agnosticism on that), but in excluding the investigation of other parts of the real macroeconomic world. ...
The confusion goes right back, as I will argue in a forthcoming paper, to the New Classical Counter Revolution of the 1970s and 1980s. That revolution, like most revolutions, was not eclectic! It was primarily a revolution about methodology, about arguing that all models should be microfounded, and in terms of mainstream macro it was completely successful. It also tried to link this to a revolution about policy, about overthrowing Keynesian economics, and this ultimately failed. But perhaps as a result, methodology and policy get confused. Mainstream academic macro is very eclectic in the range of policy questions it can address, and conclusions it can arrive at, but in terms of methodology it is quite the opposite.
Validity of the Neo-Fisherian Hypothesis: Warning: Super-Technical Material Follows
The neo-Fisherian hypothesis is as follows: If the central bank commits to peg the nominal interest rate at R, then the long-run level of inflation in the economy is increasing in R. Using finite horizon models, I show that the neo-Fisherian hypothesis is only valid if long-run inflation expectations rise at least one for one with the peg R. However, in an infinite horizon model, the neo-Fisherian hypothesis is always true. I argue that this result indicates why macroeconomists should use finite horizon models, not infinite horizon models. See this linked note and my recent NBER working paper for technical details.
In any finite horizon economy, the validity of the neo-Fisherian hypothesis depends on how sensitive long-run inflation expectations are to the specification of the interest rate peg.
- If long-run inflation expectations rise less than one-for-one (or fall) with the interest rate peg, then the neo-Fisherian hypothesis is false.
- If long-run inflation expectations rise at least one-for-one with the interest rate peg, then the neo-Fisherian hypothesis is true.
Intuitively, when the peg R is high, people anticipate tight future monetary policy. The future tightness of monetary policy pushes down on current inflation. The only way to offset this effect is for long-run inflation expectations to rise sufficiently in response to the peg.
In contrast, in an infinite horizon model, the neo-Fisherian hypothesis is valid - but only because of an odd discontinuity. As the horizon length converges to infinity, the level of inflation becomes infinitely sensitive to long-run inflation expectations. This means that, for almost all specifications of long-run inflation expectations, inflation converges to infinity or negative infinity as the horizon converges to infinity. Users of infinite horizon models typically discard all of these limiting “infinity” equilibria by setting the long-run expected inflation rate to be equal to the difference between R and r*. In this way, the use of an infinite horizon - as opposed to a long but finite horizon - creates a tight implicit restriction on the dependence of long-run inflation expectations on the interest rate peg.
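The growing sensitivity of inflation to long-run expectations can be illustrated numerically. The following is a minimal sketch using a textbook New Keynesian IS curve and Phillips curve under a peg; the function names and parameter values are illustrative choices, not taken from the linked note:

```python
# Backward iteration of a stylized New Keynesian model under a peg R:
#   IS:  x_t  = x_{t+1} - sigma * (R - pi_{t+1} - rstar)
#   PC:  pi_t = beta * pi_{t+1} + kappa * x_t
# All parameter values are illustrative, not taken from the linked note.

def inflation_at_date_zero(T, peg, pi_terminal, x_terminal=0.0,
                           beta=0.99, kappa=0.1, sigma=1.0, rstar=0.01):
    """Start from terminal (long-run) expectations at horizon T, iterate back to t=0."""
    x, pi = x_terminal, pi_terminal
    for _ in range(T):
        x = x - sigma * (peg - pi - rstar)
        pi = beta * pi + kappa * x
    return pi

def sensitivity(T, peg=0.03, eps=1e-6):
    """Response of period-0 inflation to a small change in long-run expectations."""
    return (inflation_at_date_zero(T, peg, 0.02 + eps)
            - inflation_at_date_zero(T, peg, 0.02)) / eps
```

Because the backward map has an eigenvalue greater than one, `sensitivity(T)` grows geometrically in `T`; in the infinite-horizon limit only a knife-edge choice of long-run expectations keeps inflation bounded, which is the discontinuity described above.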
To summarize: The validity of the neo-Fisherian hypothesis depends on an empirical question: how do long-run inflation expectations depend on the central bank's peg? This empirical question is eliminated when we use infinite horizon models - but this is a reason not to use infinite horizon models.
In case you missed this from George Evans and Bruce McGough over the holidays (on learning models and the validity of the Neo-Fisherian Hypothesis, also "super-technical"):
I've been surprised that none of the Neo-Fisherians have responded.
Thursday, January 07, 2016
Confidence as a political device: This is a contribution to the discussion about models started by Krugman, DeLong and Summers, and in particular to the use of confidence. (Martin Sandbu has an excellent summary, although as you will see I think he is missing something.) The idea that confidence can on occasion be important, and that it can be modeled, is not (in my view) in dispute. For example the very existence of banks depends on confidence (that depositors can withdraw their money when they wish), and when that confidence disappears you get a bank run.
But the leap from the statement that ‘in some circumstances confidence matters’ to ‘we should worry about bond market confidence in an economy with its own central bank in the middle of a depression’ is a huge one...
When people invoke the idea of confidence, other people (particularly economists) should be automatically suspicious. The reason is that it frequently allows those who represent the group whose confidence is being invoked to further their own self interest. The financial markets are represented by City or Wall Street economists, and you invariably see market confidence being invoked to support a policy position they have some economic or political interest in. Bond market economists never saw a fiscal consolidation they did not like, so the saying goes, so of course market confidence is used to argue against fiscal expansion. Employers drum up the importance of maintaining their confidence whenever taxes on profits (or high incomes) are involved. As I argue in this paper, there is a generic reason why financial market economists play up the importance of market confidence, so they can act as high priests. (Did these same economists go on about the dangers of rising leverage when confidence really mattered, before the global financial crisis?)
The general lesson I would draw is this. If the economics point towards a conclusion, and people argue against it based on ‘confidence’, you should be very, very suspicious. You should ask where is the model (or at least a mutually consistent set of arguments), and where is the evidence that this model or set of arguments is applicable to this case? Policy makers who go with confidence based arguments that fail these tests because it accords with their instincts are, perhaps knowingly, following the political agenda of someone else.
Sunday, January 03, 2016
Musings on Whether We Consciously Know More or Less than What Is in Our Models…: Larry Summers presents, as an example of his contention that we know more than is in our models–that our models are more a filing system, and more a way of efficiently conveying part of what we know, than they are an idea-generating mechanism–Paul Krugman’s Mundell-Fleming lecture, and its contention that floating exchange-rate countries that can borrow in their own currency should not fear capital flight in a liquidity trap. He points to Olivier Blanchard et al.’s empirical finding that capital outflows do indeed appear to be not expansionary but contractionary ...
[There's quite a bit more in Brad's post.]
Wednesday, December 30, 2015
I asked my colleagues George Evans and Bruce McGough if they would like to respond to a recent post by Simon Wren-Lewis, "Woodford’s reflexive equilibrium" approach to learning:
The neo-Fisherian view and the macro learning approach
George W. Evans and Bruce McGough
Economics Department, University of Oregon
December 30, 2015
Cochrane (2015) argues that low interest rates are deflationary — a view that is sometimes called neo-Fisherian. In this paper John Cochrane argues that raising the interest rate and pegging it at a higher level will raise the inflation rate in accordance with the Fisher equation, and works through the details of this in a New Keynesian model.
Garcia-Schmidt and Woodford (2015) argue that the neo-Fisherian claim is incorrect and that low interest rates are both expansionary and inflationary. In making this argument Mariana Garcia-Schmidt and Michael Woodford use an approach that has a lot of common ground with the macro learning literature, which focuses on how economic agents might come to form expectations, and in particular whether coordination on a particular rational expectations equilibrium (REE) is plausible. This literature examines the stability of an REE under learning and has found that interest-rate pegs of the type discussed by Cochrane lead to REE that are not stable under learning. Garcia-Schmidt and Woodford (2015) obtain an analogous instability result using a new bounded-rationality approach that provides specific predictions for monetary policy. There are novel methodological and policy results in the Garcia-Schmidt and Woodford (2015) paper. However, we will here focus on the common ground with other papers in the learning literature that also argue against the neo-Fisherian claim.
The macro learning literature posits that agents start with boundedly rational expectations e.g. based on possibly non-RE forecasting rules. These expectations are incorporated into a “temporary equilibrium” (TE) environment that yields the model’s endogenous outcomes. The TE environment has two essential components: a decision-theoretic framework which specifies the decisions made by agents (households, firms etc.) given their states (values of exogenous and pre-determined endogenous state variables) and expectations;1 and a market-clearing framework that coordinates the agents’ decisions and determines the values of the model’s endogenous variables. It is useful to observe that, taken together, the two components of the TE environment yield the “TE-map” that takes expectations and (aggregate and idiosyncratic) states to outcomes.
The adaptive learning framework, which is the most popular formulation of learning in macro, proceeds recursively. Agents revise their forecast rules in light of the data realized in the previous period, e.g. by updating their forecast rules econometrically. The exogenous shocks are then realized, expectations are formed, and a new temporary equilibrium results. The equilibrium path under learning is defined recursively. One can then study whether the economy under adaptive learning converges over time to the REE of interest.2
The essential point of the learning literature is that an REE, to be credible, needs an explanation for how economic agents come to coordinate on it. This point is acute in models in which there are multiple RE solutions, as can arise in a wide range of dynamic macro models. This has been an issue in particular in the New Keynesian model, but it also arises, for example, in overlapping generations models and in RBC models with distortions. The macro learning literature provides a theory for how agents might learn over time to forecast rationally, i.e. to come to have RE (rational expectations). The adaptive learning approach found that agents will over time come to have rational expectations (RE) by updating their econometric forecasting models provided the REE satisfies “expectational stability” (E-stability) conditions. If these conditions are not satisfied then convergence to the REE will not occur and hence it is implausible that agents would be able to coordinate on the REE. E-stability then also acts as a selection device in cases in which there are multiple REE.
The adaptive learning approach has the attractive feature that the degree of rationality of the agents is natural: though agents are boundedly rational, they are still fairly sophisticated, estimating and updating their forecasting models using statistical learning schemes. For a wide range of models this gives plausible results. For example, in the basic Muth cobweb model, the REE is learnable if supply and demand have their usual slopes; however, the REE, though still unique, is not learnable if the demand curve is upward sloping and steeper than the supply curve. In an overlapping generations model, Lucas (1986) used an adaptive learning scheme to show that though the overlapping generations model of money has multiple REE, learning dynamics converge to the monetary steady state, not to the autarky solution. Early analytical adaptive learning results were obtained in Bray and Savin (1986) and the formal framework was greatly extended in Marcet and Sargent (1989). The book by Evans and Honkapohja (2001) develops the E-stability principle and includes many applications. Many more applications of adaptive learning have been published over the last fifteen years.
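The Muth cobweb example mentioned above can be sketched in a few lines of code. This is a deterministic illustration with decreasing-gain (sample-mean) learning; the parameterization is my own, with `alpha` standing in for the ratio of slopes that determines E-stability:

```python
# Muth cobweb model under adaptive learning: the temporary-equilibrium
# price is p_t = mu + alpha * p_t^e, and agents forecast with an
# estimate a_t updated by decreasing-gain (sample-mean) learning.
# Shocks are omitted for clarity; alpha and mu are illustrative.

def cobweb_learning(alpha, mu=1.0, a0=0.0, periods=2000):
    a = a0
    for t in range(1, periods + 1):
        p = mu + alpha * a       # temporary equilibrium given expectations
        a = a + (p - a) / t      # revise forecast toward the realized price
    return a

# E-stability: learning converges to the REE mu/(1 - alpha) iff alpha < 1.
stable = cobweb_learning(-0.5)   # usual slopes: converges toward 2/3
unstable = cobweb_learning(1.5)  # steep upward-sloping demand: diverges
```

With `alpha = -0.5` the estimate settles on the unique REE; with `alpha = 1.5` the REE still exists but learning drives the estimate away from it, which is the sense in which the equilibrium is "not learnable."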
There are other approaches to learning in macro that have a related theoretical motivation, e.g. the “eductive” approach of Guesnerie asks whether mental reasoning by hyper-rational agents, with common knowledge of the structure and of the rationality of other agents, will lead to coordination on an REE. A fair amount is known about the connections between the stability conditions of the alternative adaptive and eductive learning approaches.3 The Garcia-Schmidt and Woodford (2015) “reflective equilibrium” concept provides a new approach that draws on both the adaptive and eductive strands as well as on the “calculation equilibrium” learning model of Evans and Ramey (1992, 1995, 1998). These connections are outlined in Section 2 of Garcia-Schmidt and Woodford (2015).4
The key insight of these various learning approaches is that one cannot simply take RE (which in the nonstochastic case reduces to PF, i.e. perfect foresight) as given. An REE is an equilibrium that demands an explanation of how it can be attained. The various learning approaches rely on a temporary equilibrium framework, outlined above, which goes back to Hicks (1946). A big advantage of the TE framework, when developed at the agent level and aggregated, is that in conjunction with the learning model an explicit causal story can be developed for how the economy evolves over time.
The lack of a TE or learning framework in Cochrane (2011, 2015) is a critical omission. Cochrane (2009) criticized the Taylor principle in NK models as requiring implausible assumptions on what the Fed would do to enforce its desired equilibrium path; however, this view simply reflects the lack of a learning perspective. McCallum (2009) argued that for a monetary rule satisfying the Taylor principle the usual RE solution used by NK modelers is stable under adaptive learning, while the non-fundamental bubble solution is not. Cochrane (2009, 2011) claimed that these results hinged on the observability of shocks. In our paper “Observability and Equilibrium Selection,” Evans and McGough (2015b), we develop the theory of adaptive learning when fundamental shocks are unobservable, and then, as a central application, we consider the flexible-price NK model used by Cochrane and McCallum in their debate. We carefully develop this application using an agent-level temporary equilibrium approach and closing the model under adaptive learning. We find that if the Taylor principle is satisfied, then the usual solution is robustly stable under learning, while the non-fundamental price-level bubble solution is not. Adaptive learning thus operates as a selection criterion and it singles out the usual RE solution adopted by proponents of the NK model. Furthermore, when monetary policy does not obey the Taylor principle then neither of the solutions is robustly stable under learning; an interest-rate peg is an extreme form of such a policy, and the adaptive learning perspective cautions that this will lead to instability. We discuss this further below.
The agent-level/adaptive learning approach used in Evans and McGough (2015b) allows us to specifically address several points raised by Cochrane. He is concerned that there is no causal mechanism that pins down prices. The TE map provides this, in the usual way, through market clearing given expectations of future variables. Cochrane also states that the lack of a mechanism means that the NK paradigm requires that the policymakers be interpreted as threatening to “blow up” the economy if the standard solution is not selected by agents.5 This is not the case. As we say in our paper (p. 24-5), “inflation is determined in temporary equilibrium, based on expectations that are revised over time in response to observed data. Threats by the Fed are neither made nor needed ... [agents simply] make forecasts the same way that time-series econometricians typically forecast: by estimating least-squares projections of the variables being forecasted on the relevant observables.”
Let us now return to the issue of interest rate pegs and the impact of changing the level of an interest rate peg. The central adaptive learning result is that interest rate pegs give REE that are unstable under learning. This result was first given in Howitt (1992). A complementary result was given in Evans and Honkapohja (2003) for time-varying interest rate pegs designed to optimally respond to fundamental shocks. As discussed above, Evans and McGough (2015b) show that the instability result also obtains when the fundamental shocks are not observable and the Taylor principle is not satisfied. The economic intuition in the NK model is very strong and is essentially as follows. Suppose we are at an REE (or PFE) at a fixed interest rate and with expected inflation at the level dictated by the Fisher equation. Suppose that there is a small increase in expected inflation. With a fixed nominal interest rate this leads to a lower real interest rate, which increases aggregate demand and output. This in turn leads to higher inflation, which under adaptive learning leads to higher expected inflation, destabilizing the system. (The details of the evolution of expectations and the model dynamics depend, of course, on the precise decision rules and econometric forecasting model used by agents). In an analogous way, expected inflation slightly lower than the REE/PFE level leads to cumulatively lower levels of inflation, output and expected inflation.
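The destabilizing feedback loop described in this paragraph can be turned into a tiny simulation. This is a sketch, not any of the cited models: a static IS relation and Phillips curve closed with constant-gain adaptive learning, with made-up parameter values:

```python
# Instability of a fixed interest-rate peg under adaptive learning.
#   IS: a lower real rate (peg minus expected inflation) raises output;
#   PC: higher output pushes realized inflation above expectations;
#   learning: expectations chase realized inflation.
# All parameter values are illustrative.

def expected_inflation_path(pe0, peg=0.03, rstar=0.01, sigma=1.0,
                            kappa=0.2, gain=0.05, periods=300):
    pe, path = pe0, []
    for _ in range(periods):
        x = sigma * (pe + rstar - peg)   # output gap from the real rate
        pi = pe + kappa * x              # realized inflation
        pe = pe + gain * (pi - pe)       # adaptive expectations update
        path.append(pe)
    return path

# The Fisherian REE has expected inflation = peg - rstar = 0.02, but any
# small deviation is amplified rather than corrected:
up = expected_inflation_path(0.021)    # drifts ever higher
down = expected_inflation_path(0.019)  # drifts toward deflation
```

Starting exactly at the REE the path stays put, but a perturbation of a tenth of a percentage point in either direction compounds each period, illustrating why the REE at the peg is unstable under learning.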
Returning to the NK model, additional insight is obtained by considering a nonlinear NK model with a global Taylor rule that leads to two steady states. This model was studied by Benhabib, Schmitt-Grohe and Uribe in a series of papers, e.g. Benhabib, Schmitt-Grohe, and Uribe (2001). They show that when the interest-rate rule obeys the Taylor principle at the target inflation rate, the zero lower bound (ZLB) on interest rates implies the existence of an unintended low-inflation or deflation PFE steady state (and indeed a continuum of PFE paths to it) at which the Taylor principle does not hold; a special case of this is a local interest rate peg at the ZLB. From a PF/RE viewpoint these are all valid solutions. From the adaptive learning perspective, however, they differ in terms of stability. Evans, Guse, and Honkapohja (2008) and Benhabib, Evans, and Honkapohja (2014) show that the targeted steady state is locally stable under learning with a large basin of attraction, while the unintended low-inflation/deflation steady state is not locally stable under learning: small deviations from it lead either back to the targeted steady state or into a deflation trap, in which inflation and output fall over time. From a learning viewpoint this deflation trap should be a major concern for policy.6,7
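The two steady states are easy to exhibit numerically: plot the ZLB-constrained Taylor rule against the steady-state Fisher relation i = r* + π and look for crossings. A hedged sketch with illustrative numbers (r* = π* = 2%, φ = 1.5), not taken from the cited papers:

```python
# Two steady states of a ZLB-constrained Taylor rule. In steady state
# the Fisher relation i = RSTAR + pi must hold, with i set by the
# policy rule. All numbers are illustrative.

RSTAR, PISTAR, PHI = 0.02, 0.02, 1.5

def policy_rate(pi):
    """Taylor rule truncated at the zero lower bound."""
    return max(0.0, RSTAR + PISTAR + PHI * (pi - PISTAR))

def steady_states(lo=-0.05, hi=0.05, n=10000):
    """Scan a grid for crossings of the policy rule and the Fisher line."""
    gap = lambda pi: policy_rate(pi) - (RSTAR + pi)
    roots, prev_sign = [], None
    for k in range(n + 1):
        pi = lo + k * (hi - lo) / n
        g = gap(pi)
        sign = (g > 0) - (g < 0)
        if sign == 0:
            roots.append(pi)          # landed exactly on a crossing
        elif prev_sign not in (None, 0) and sign != prev_sign:
            roots.append(pi)          # sign flip since the last grid point
        prev_sign = sign
    return roots

# Expect two steady states: the target (pi = 0.02) and the unintended
# ZLB steady state (pi = -RSTAR = -0.02).
```

At the target crossing the rule is steeper than the Fisher line (the Taylor principle holds); at the ZLB crossing it is flat, which is why that steady state behaves like a local interest rate peg.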
Finally, let us return to Cochrane (2015). Cochrane points out that at the ZLB peg there has been low but relatively steady (or gently declining) inflation in the US, rather than a serious deflationary spiral. This point echoes Jim Bullard’s concern in Bullard (2010) about the adaptive learning instability result: we effectively have an interest rate peg at the ZLB but we seem to have a fairly stable inflation rate, so does this indicate that the learning literature may here be on the wrong track?
This issue is addressed by Evans, Honkapohja, and Mitra (2015) (EHM2015). They first point out that from a policy viewpoint the major concern at the ZLB has not been low inflation or deflation per se. Instead it is its association with low levels of aggregate output, high levels of unemployment and a more general stagnation. However, the deflation steady state at the ZLB in the NK model has virtually the same level of aggregate output as the targeted steady state. The PFE at the ZLB interest rate peg is not a low level output equilibrium, and if we were in that equilibrium there would not be the concern that policy-makers have shown. (Temporary discount rate or credit market shocks of course can lead to recession at the ZLB but their low output effects vanish as soon as the shocks vanish).
In EHM2015 steady mild deflation is consistent with low output and stagnation at the ZLB.8 They note that many commentators have remarked that the behavior of the NK Phillips relation is different from standard theory at very low output levels. They therefore impose lower bounds on inflation and consumption, which can become relevant when agents become sufficiently pessimistic. If the inflation lower bound is below the unintended low steady state inflation rate, a third “stagnation” steady state is created at the ZLB. The stagnation steady state, like the targeted steady state, is locally stable under learning, and arises under learning if output and inflation expectations are too pessimistic. A large temporary fiscal stimulus can dislodge the economy from the stagnation trap, and a smaller stimulus can be sufficient if applied earlier. Raising interest rates does not help in the stagnation state, and at an early stage it can push the economy into the stagnation trap.
In summary, the learning approach argues forcefully against the neo- Fisherian view.
1With infinitely-lived agents there are several natural implementations of optimizing decision rules, including short-horizon Euler-equation or shadow-price learning approaches (see, e.g., Evans and Honkapohja (2006) and Evans and McGough (2015a)) and the anticipated utility or infinite-horizon approaches of Preston (2005) and Eusepi and Preston (2010).
2An additional advantage of using learning is that learning dynamics give expanded scope for fitting the data as well as explaining experimental findings.
3The TE map is the basis for the map at the core of any specified learning scheme, which in turn determines the associated stability conditions.
4There are also connections to both the infinite-horizon learning approach to anticipated policy developed in Evans, Honkapohja, and Mitra (2009) and the eductive stability framework in Evans, Guesnerie, and McGough (2015).
5This point is repeated in Section 6.4 of Cochrane (2015): “The main point: such models presume that the Fed induces instability in an otherwise stable economy, a non-credible off-equilibrium threat to hyperinflate the economy for all but one chosen equilibrium.”
6And the risk of sinking into deflation clearly has been a major concern for policymakers in the US, during and following both the 2001 recession and the 2007 - 2009 recession. It has remained a concern in Europe and Japan, as it was in Japan during the 1990s.
7Experimental work with stylized NK economies has found that entering deflation traps is a real possibility. See Hommes and Salle (2015).
8See also Evans (2013) for a partial and less general version of this argument.
Benhabib, J., G. W. Evans, and S. Honkapohja (2014): “Liquidity Traps and Expectation Dynamics: Fiscal Stimulus or Fiscal Austerity?,” Journal of Economic Dynamics and Control, 45, 220—238.
Benhabib, J., S. Schmitt-Grohe, and M. Uribe (2001): “The Perils of Taylor Rules,” Journal of Economic Theory, 96, 40—69.
Bray, M., and N. Savin (1986): “Rational Expectations Equilibria, Learning, and Model Specification,” Econometrica, 54, 1129—1160.
Bullard, J. (2010): “Seven Faces of The Peril,” Federal Reserve Bank of St. Louis Review, 92, 339—352.
Cochrane, J. H. (2009): “Can Learnability Save New Keynesian Models?,” Journal of Monetary Economics, 56, 1109—1113.
_______ (2015): “Do Higher Interest Rates Raise or Lower Inflation?,” Working paper, University of Chicago Booth School of Business.
Dixon, H., and N. Rankin (eds.) (1995): The New Macroeconomics: Imperfect Markets and Policy Effectiveness. Cambridge University Press, Cambridge UK.
Eusepi, S., and B. Preston (2010): “Central Bank Communication and Expectations Stabilization,” American Economic Journal: Macroeconomics, 2, 235—271.
Evans, G. W. (2013): “The Stagnation Regime of the New Keynesian Model and Recent US Policy,” in Sargent and Vilmunen (2013), chap. 4.
Evans, G. W., R. Guesnerie, and B. McGough (2015): “Eductive Stability in Real Business Cycle Models,” mimeo.
Evans, G. W., E. Guse, and S. Honkapohja (2008): “Liquidity Traps, Learning and Stagnation,” European Economic Review, 52, 1438—1463.
Evans, G. W., and S. Honkapohja (2001): Learning and Expectations in Macroeconomics. Princeton University Press, Princeton, New Jersey.
_______ (2003): “Expectations and the Stability Problem for Optimal Monetary Policies,” Review of Economic Studies, 70, 807—824.
_______ (2006): “Monetary Policy, Expectations and Commitment,” Scandinavian Journal of Economics, 108, 15—38.
Evans, G. W., S. Honkapohja, and K. Mitra (2009): “Anticipated Fiscal Policy and Learning,” Journal of Monetary Economics, 56, 930—953.
_______ (2015): “Expectations, Stagnation and Fiscal Policy,” Working paper, University of Oregon.
Evans, G. W., and B. McGough (2015a): “Learning to Optimize,” mimeo, University of Oregon.
_______ (2015b): “Observability and Equilibrium Selection,” mimeo, University of Oregon.
Evans, G. W., and G. Ramey (1992): “Expectation Calculation and Macroeconomic Dynamics,” American Economic Review, 82, 207—224.
_______ (1995): “Expectation Calculation, Hyperinflation and Currency Collapse,” in Dixon and Rankin (1995), chap. 15, pp. 307—336.
_______ (1998): “Calculation, Adaptation and Rational Expectations,” Macroeconomic Dynamics, 2, 156—182.
Garcia-Schmidt, M., and M. Woodford (2015): “Are Low Interest Rates Deflationary? A Paradox of Perfect Foresight Analysis,” Working paper, Columbia University.
Hicks, J. R. (1946): Value and Capital, Second edition. Oxford University Press, Oxford UK.
Hommes, C. H., and I. Salle (2015): “Monetary and Fiscal Policy Design at the Zero Lower Bound: Evidence from the Lab,” mimeo, CeNDEF, University of Amsterdam.
Howitt, P. (1992): “Interest Rate Control and Nonconvergence to Rational Expectations,” Journal of Political Economy, 100, 776—800.
Lucas, Jr., R. E. (1986): “Adaptive Behavior and Economic Theory,” Journal of Business, Supplement, 59, S401—S426.
Marcet, A., and T. J. Sargent (1989): “Convergence of Least-Squares Learning Mechanisms in Self-Referential Linear Stochastic Models,” Journal of Economic Theory, 48, 337—368.
McCallum, B. T. (2009): “Inflation Determination with Taylor Rules: Is New-Keynesian Analysis Critically Flawed?,” Journal of Monetary Economics, 56, 1101—1108.
Preston, B. (2005): “Learning about Monetary Policy Rules when Long- Horizon Expectations Matter,” International Journal of Central Banking, 1, 81—126.
Sargent, T. J., and J. Vilmunen (eds.) (2013): Macroeconomics at the Service of Public Policy. Oxford University Press.
Sunday, December 20, 2015
I've never paid much attention to the fiscal theory of the price level:
The FTPL version of the Neo-Fisherian proposition: The Neo-Fisherian doctrine is the idea that a permanent increase in a flat nominal interest rate path will (eventually) raise the inflation rate. It is then suggested that current below-target inflation is a consequence of fixing rates at their lower bound, and rates should be raised to increase inflation. David Andolfatto says there are two versions of this doctrine. The first he associates with the work of Stephanie Schmitt-Grohe and Martin Uribe, which I discussed here. He, like me, is not sold on this interpretation, and I think for much the same reason. ... But he favours a different interpretation, based on the Fiscal Theory of the Price Level (FTPL).
Let me first briefly outline my own interpretation of the FTPL. This looks at the possibility of a fiscal regime where there is no attempt to stabilize debt. Government spending and taxes are set independently of the level or sustainability of government debt. The conventional and quite natural response to the possibility of that regime is to say it is unstable. But there is another possibility, which is that monetary policy stabilizes debt. Again a natural response would be to say that such a monetary policy regime is bound to be inconsistent with hitting an inflation target in the long run, but that is incorrect. ...
A constant nominal interest rate policy is normally thought to be indeterminate because the price level is not pinned down, even though the expected level of inflation is. In the FTPL, the price level is pinned down by the need for the government budget to balance at arbitrary and constant levels for taxes and spending. ...
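The way the FTPL pins down the price level can be illustrated with a toy valuation equation: the price level adjusts so that the real value of outstanding nominal debt equals the present value of (fixed) surpluses. A back-of-the-envelope sketch with made-up numbers; `beta` is an assumed real discount factor:

```python
# FTPL valuation sketch: B / P = sum_t beta^t * s_t, with a constant
# real surplus s fixed by assumption. The price level P is whatever
# makes the government budget balance. All numbers are made up.

def ftpl_price_level(nominal_debt, surplus, beta=0.96, horizon=500):
    pv_surpluses = sum(surplus * beta**t for t in range(horizon))
    return nominal_debt / pv_surpluses

# With surpluses held fixed, a larger stock of nominal debt implies a
# proportionally higher price level:
p1 = ftpl_price_level(1000.0, 40.0)
p2 = ftpl_price_level(2000.0, 40.0)
```

The point of the sketch is only the comparative static: because taxes and spending do not respond to debt, doubling nominal debt doubles the price level rather than triggering fiscal adjustment.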
I have a ... serious problem with this FTPL interpretation in the current environment. The belief that people would need to have for the FTPL to be relevant - that the government would not react to higher deficits by reducing government spending or raising taxes - does not seem to be credible, given that austerity is all about them doing exactly this despite being in a recession. As a result, I still find the Neo-Fisherian proposition, with either interpretation, somewhat unrealistic.
Thursday, December 17, 2015
Are prices sticky?:
“Sticky” sales, by Phil Davies, The Region, FRB Minneapolis: Sales are ubiquitous in the U.S. economy. Black Friday, President’s Day, Mother’s Day, the Fourth of July; almost any occasion is cause for price cutting, accompanied by prominent signage, balloons and ads in traditional and social media to make the savings known far and wide. Retailers also put on sales ostensibly to clear out inventory, celebrate being on the sidewalk and go out of business.
Economists are interested in sales, not because they want cheap stuff (well, maybe they’re as partial to a deal as anyone), but because the role of sales has a bearing on a question central to macroeconomics: How flexible are prices? Price flexibility—how quickly prices adjust to changes in costs or demand—is crucial to understanding how shocks of any kind, including fiscal and monetary policy, affect economic performance.
Retail prices rise and fall frequently as merchants put items on sale and then restore the regular, or shelf, price. Indeed, the bulk of weekly and monthly variance in individual prices is due to sales promotions, not changes in regular prices. But there’s a lively debate in economics about the true flexibility of sale prices, from a macro perspective; for all their seeming fluidity, how readily do sales respond to changes in underlying costs and unexpected events that alter economic conditions?
How sale prices respond to wholesale cost shocks and broader macroeconomic shocks such as an increase in government spending or monetary policy stimulus, or a decrease in global aggregate demand, affects the flexibility of aggregate retail prices, with profound implications for monetary policy and the accuracy of macroeconomic models that guide policymaking.
Monetary policy as a tool for influencing the economy depends on sticky prices—the idea that prices don’t adjust instantly to shifts in demand caused by changes in money supply. If they did, an increase in demand for goods and services due to monetary easing would trigger an immediate price rise, suppressing demand and leaving economic output and employment unchanged. Thus, the stickier are prices, the more effective is monetary policy in modulating economic growth in the short and medium run. (Economists generally agree that money is neutral in the long run; that is, over a long enough period of time, prices are actually quite flexible, so monetary policy has no long-run effect on the real economy.)
Recent work by Ben Malin, a senior research economist at the Minneapolis Fed, provides insight into the import of temporary sales for price stickiness and thus monetary policy. In “Informational Rigidities and the Stickiness of Temporary Sales” (Minneapolis Fed Staff Report 513), Malin uses a rich data set of prices from a U.S. retail chain to investigate how retail prices adjust in response to wholesale price increases and other economic shocks. Joining Malin in the research are economists Emi Nakamura and Jón Steinsson of Columbia University, and marketing professors Eric Anderson and Duncan Simester of Northwestern University and MIT, respectively.
Surprisingly, the authors find no change in the frequency and depth of price cuts in response to shocks. Their analysis, which also taps micro price data underlying the consumer price index to look at how sales at a representative sample of U.S. retailers respond to booms and downturns, shows that merchants rely exclusively on regular prices to adapt to cost changes and evolving economic conditions. The research “supports the view that the behavior of regular prices is what matters for aggregate price flexibility,” Malin said in an interview. ...
Wednesday, December 16, 2015
Must-Read: Kevin Hoover: The Methodology of Empirical Macroeconomics: The combination of representative-agent modeling and utility-based “microfoundations” was always a game of intellectual Three-Card Monte. Why do you ask? Why don’t we fund sociologists to investigate for what reasons–other than being almost guaranteed to produce conclusions ideologically-pleasing to some–it has flourished for a generation in spite of having no empirical support and no theoretical coherence?
Kevin Hoover: The Methodology of Empirical Macroeconomics: “Given what we know about representative-agent models…
…there is not the slightest reason for us to think that the conditions under which they should work are fulfilled. The claim that representative-agent models provide microfoundations succeeds only when we steadfastly avoid the fact that representative-agent models are just as aggregative as old-fashioned Keynesian macroeconometric models. They do not solve the problem of aggregation; rather they assume that it can be ignored. ...
Tuesday, December 01, 2015
The centrality of policy to how long recessions last: Paul Krugman reminds us that one of the most misguided questions in macroeconomics is ‘are business cycles self-correcting’. ... That answer ... only holds for a particular set of monetary policy rules (plus assumptions about fiscal policy).
It is very easy to see this. Suppose monetary policy is so astute that it knows perfectly all the shocks that hit the economy, and how interest rates influence that economy. In that case, absent the Zero Lower Bound, the business cycle would disappear, whatever the speed of price adjustment. Or... As Nick Rowe points out, if you had a really bad monetary policy, recessions could last forever.
A better answer to both questions (self-correction and how long business cycles last) is it all depends on monetary policy. Actually even that answer makes an implicit assumption, which is that there is no fiscal (de)stabilisation. The correct answer to both questions is that it depends first and foremost on policy. The speed of price adjustment only becomes central for particular policy rules.
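The point can be seen in the textbook New Keynesian IS curve (a standard sketch of my own, not taken from the post):

```latex
% Output gap today depends on the expected gap and the real-rate gap
y_t = \mathbb{E}_t\, y_{t+1} - \sigma\left(i_t - \mathbb{E}_t \pi_{t+1} - r^n_t\right)
```

If the central bank could set the nominal rate i_t equal to the natural rate plus expected inflation, i_t = r^n_t + E_t π_{t+1}, in every period, then y_t = E_t y_{t+1}, and ruling out explosive paths the output gap is zero forever, whatever the slope of the Phillips curve. In this sketch the speed of price adjustment determines how costly deviations from that rule are, not whether the gap closes.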
So why do many economists (including occasionally some macroeconomists) get this wrong? ... It could be just an unfortunate accident. We are so used to teaching about fixed money supply rules (or in my case Taylor rules), that we can take those rules for granted. But there is also a more interesting answer. To some economists with a particular point of view, the idea that getting policy right might be essential to whether the economy self-corrects from shocks is troubling. ...
Focusing on this logic alone can lead to big mistakes. I have heard a number of times good economists say that in 2015 we can no longer be in a demand deficient recession, because price adjustment cannot be that slow. This mistake happens because they take good policy for granted...; with sub-optimal policy, the length of recessions has much more to do with that bad policy than with the speed of price adjustment.
Just how misleading a focus on the speed of price adjustment can be becomes evident at the Zero Lower Bound. With nominal interest rates stuck at zero, rapid price adjustment will make the recession worse, not better. Price rigidity may be a condition for the existence of business cycles, but it can have very little to do with their duration.
And as I noted in my last column, the evidence is mounting that poor policy can do more than slow a recovery: it can also permanently reduce our productive capacity.
Saturday, November 28, 2015
Paul Krugman on macroeconomic models:
Demand, Supply, and Macroeconomic Models: I’m supposed to do a presentation next week about “shifts in economic models,” which has me trying to systematize my thought about what the crisis and aftermath have and haven’t changed my understanding of macroeconomics. And it seems to me that there is an important theme here: it’s the supply side, stupid. ...
Friday, November 20, 2015
Some Big Changes in Macroeconomic Thinking from Lawrence Summers: ...At a truly fascinating and intense conference on the global productivity slowdown we hosted earlier this week, Lawrence Summers put forward some newly and forcefully formulated challenges to the macroeconomic status quo in his keynote speech. [pdf] ...
The first point Summers raised ... pointed out that a major global trend over the last few decades has been the substantial disemployment—or withdrawal from the workforce—of relatively unskilled workers. ... In other words, it is a real puzzle to observe simultaneously multi-year trends of rising non-employment of low-skilled workers and declining measured productivity growth. ...
Another related major challenge to standard macroeconomics Summers put forward ... came in response to a question about whether he exaggerated the displacement of workers by technology. ... Summers bravely noted that if we suppose the “simple” non-economists who thought technology could destroy jobs without creating replacements in fact were right after all, then the world in some aspects would look a lot like it actually does today...
The third challenge ... Summers raised is perhaps the most profound... In a working paper the Institute just released, Olivier Blanchard, Eugenio Cerutti, and Summers examine essentially all of the recessions in the OECD economies since the 1960s, and find strong evidence that in most cases the level of GDP is lower five to ten years afterward than any prerecession forecast or trend would have predicted. In other words, to quote Summers’ speech..., “the classic model of cyclical fluctuations, that assume that they take place around the given trend is not the right model to begin the study of the business cycle. And [therefore]…the preoccupation of macroeconomics should be on lower frequency fluctuations that have consequences over long periods of time [that is, recessions and their aftermath].”
I have a lot of sympathy for this view. ... The very language we use to speak of business cycles, of trend growth rates, of recoveries to those perhaps non-stationary trends, and so on—which reflects the underlying mental framework of most macroeconomists—would have to be rethought.
Productivity-based growth requires disruption in economic thinking just as it does in the real world.
The full text explains these points in more detail (I left out one point on the measurement of productivity).
Thursday, November 05, 2015
[Running very late today, so three quick posts to get something up besides links -- I probably chose this one because my name was mentioned. See the sidebar for more new links.]
Public investment: has George started listening to economists?: I have in the past wondered just how large the majority among academic economists would be for additional public investment right now. The economic case for investing when the cost of borrowing is so cheap (particularly when the government can issue 30 year fixed interest debt) is overwhelming. I had guessed the majority would be pretty large just by personal observation. Economists who are not known for their anti-austerity views, like Ken Rogoff, tend to support additional public investment.
Thanks to a piece by Mark Thoma I now have some evidence. His article is actually about ideological bias in economics, and is well worth reading on that account, but it uses results from the Chicago Booth survey of leading US economists. I have used this survey’s results on the impact of fiscal policy before, but they have also asked a similar question about public investment. It is:
“Because the US has underspent on new projects, maintenance, or both, the federal government has an opportunity to increase average incomes by spending more on roads, railways, bridges and airports.”
Not one of the nearly 50 economists surveyed disagreed with this statement. What was interesting was that the economists were under no illusions that the political process in the US would be such that some bad projects would be undertaken as a result (see the follow-up question). Despite this, they still thought increasing investment would raise incomes.
The case for additional public investment is as strong in the UK (and Germany) as it is in the US. Yet since 2010 it appeared the government thought otherwise. ...
However since the election George Osborne seems to have had a change of heart. ...
Business cycle theory vs growth theory: Macroeconomics is divided into (short run) business cycle theory and (long run) growth theory.
Those of us who do business cycle theory have a bit of an inferiority complex (though you might not know it from listening to us argue). Because growth theory seems to be so much more important. Where would you rather live: in a rich country during a recession; or in a poor country during a boom? (Watch the flows of people voting or attempting to vote with their feet if you are not sure how most people would answer.) In the long run, productivity is about the only thing that matters.
We would feel better about ourselves, and what we are studying and teaching, if we could argue that taming the business cycle would improve long run growth.
Notice that I have deliberately personalised this question to make you aware of my personal bias. Macroeconomists like me, who do short run business cycle theory, want to think that what we are doing is important. We want to argue that taming the business cycle would improve long run growth.
(The Great Recession was great for my sort of macro; we haven't had so much fun since the 1970s. The Great Moderation was a boring time for macroeconomists like me, when we seemed to be victims of our own success; all the growth theorists were stealing our limelight.)
Why might business cycles lower the long run growth rate? ...
Tuesday, November 03, 2015
Advanced economies are so sick we need a new way to think about them: ...Hysteresis Effects: Blanchard, Cerutti, and I look at a sample of over 100 recessions from industrial countries over the last 50 years and examine their impact on long run output levels in an effort to understand what Blanchard and I had earlier called hysteresis effects. We find that in the vast majority of cases output never returns to previous trends. Indeed there appear to be more cases where recessions reduce the subsequent growth of output than where output returns to trend. In other words “super hysteresis,” to use Larry Ball’s term, is more frequent than “no hysteresis.” ...
In subsequent work Antonio Fatas and I have looked at the impact of fiscal policy surprises on long run output and long run output forecasts using a methodology pioneered by Blanchard and Leigh. ... We find that fiscal policy changes have large continuing effects on levels of output suggesting the importance of hysteresis. ...
Towards a New Macroeconomics: My separate comments in the volume develop an idea I have pushed with little success for a long time. Standard new Keynesian macroeconomics essentially abstracts away from most of what is important in macroeconomics. To an even greater extent this is true of the DSGE (dynamic stochastic general equilibrium) models that are the workhorse of central bank staffs and much practically oriented academic work.
Why? New Keynesian models imply that stabilization policies cannot affect the average level of output over time and that the only effect policy can have is on the amplitude of economic fluctuations, not on the level of output. This assumption is problematic...
As macroeconomics was transformed in response to the Depression of the 1930s and the inflation of the 1970s, another 40 years later it should again be transformed in response to stagnation in the industrial world. Maybe we can call it the Keynesian New Economics.
Friday, October 30, 2015
The missing lowflation revolution: It will soon be eight years since the US Federal Reserve decided to bring its interest rate down to 0%. Other central banks have spent a similar number of years (or much longer in the case of Japan) stuck at the zero lower bound. In these eight years central banks have used all their available tools to increase inflation closer to their target and boost growth, with limited success. GDP growth has been weak or anemic, and there is very little hope that economies will ever go back to their pre-crisis trends.
Some of these trends have challenged the traditional view of academic economists and policy makers about how an economy works. ...
My own sense is that the view among academics and policy makers is not changing fast enough and some are just assuming that this would be a one-time event that will not be repeated in the future (even if we are still not out of the current event!).
The comparison with the 70s, when stagflation produced a large change in the way academics and policy makers thought about their models and about the framework for monetary policy, is striking. During those years, a high-inflation and low-growth environment created a revolution among academics (moving away from the simple Phillips Curve) and policy makers (switching to anti-inflationary and independent central banks). How many more years of zero interest rates will it take to witness a similar change in our economic analysis?
Saturday, September 26, 2015
From an interview with Olivier Blanchard:
...IMF Survey: In pushing the envelope, you also hosted three major Rethinking Macroeconomics conferences. What were the key insights and what are the key concerns on the macroeconomic front?
Blanchard: Let me start with the obvious answer: That mainstream macroeconomics had taken the financial system for granted. The typical macro treatment of finance was a set of arbitrage equations, under the assumption that we did not need to look at who was doing what on Wall Street. That turned out to be badly wrong.
But let me give you a few less obvious answers:
The financial crisis raises a potentially existential crisis for macroeconomics. Practical macro is based on the assumption that there are fairly stable aggregate relations, so we do not need to keep track of each individual, firm, or financial institution—that we do not need to understand the details of the micro plumbing. We have learned that the plumbing, especially the financial plumbing, matters: the same aggregates can hide serious macro problems. How do we do macro then?
As a result of the crisis, a hundred intellectual flowers are blooming. Some are very old flowers: Hyman Minsky’s financial instability hypothesis. Kaldorian models of growth and inequality. Some propositions that would have been considered anathema in the past are being proposed by "serious" economists: For example, monetary financing of the fiscal deficit. Some fundamental assumptions are being challenged, for example the clean separation between cycles and trends: Hysteresis is making a comeback. Some of the econometric tools, based on a vision of the world as being stationary around a trend, are being challenged. This is all for the best.
Finally, there is a clear swing of the pendulum away from markets towards government intervention, be it macro prudential tools, capital controls, etc. Most macroeconomists are now solidly in a second best world. But this shift is happening with a twist—that is, with much skepticism about the efficiency of government intervention. ...
Paul Krugman returns to a familiar theme:
Economics: What Went Right: ...I’m at EconEd; here are my slides for later today. The theme of my talk is something I’ve emphasized a lot over the past few years: basic macroeconomics has actually worked remarkably well in the post-crisis world, with those of us who took our Hicks seriously calling the big stuff — the effects of monetary and fiscal policy — right, and those who went with their gut getting it all wrong. ...
One thing I do try is to concede that one piece of the conventional story hasn’t worked that well, namely the Phillips curve, where the “clockwise spirals” of previous protracted large output gaps haven’t materialized. Maybe it’s about what happens at very low inflation rates.
What’s notable about the Fed’s urge to raise rates, however, is that Fed officials, including Janet Yellen, are acting as if they have high confidence in their models of inflation dynamics – which is the one thing we really haven’t done well at recently. I really fear that we’re looking at incestuous amplification here.
Agree about the uncertainty about inflation dynamics, but fear Fed officials will interpret it as risks on the upside that must be nullified through interest rate hikes. As for the Phillips curve, here's a graph from his talk:
As Krugman says, "Maybe it’s about what happens at very low inflation rates." I would add that the combination of the zero bound, low inflation, and downward wage rigidity may be able to explain the change in the Phillips curve -- I'm not quite ready to give up yet.
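A minimal sketch of that last point (my own illustration, not from the talk, with made-up parameter values): if nominal wage inflation cannot fall below zero, then at high trend inflation the floor never binds and the fitted Phillips curve has its true slope, while at low trend inflation the floor binds for high-unemployment observations and the fitted slope flattens toward zero.

```python
# A sketch (hypothetical parameters) of how downward nominal wage rigidity
# can flatten the measured Phillips curve when trend inflation is low:
# inflation cannot fall below zero, so high unemployment stops pushing
# inflation down once the zero floor binds.
import random

def phillips(u, pi_trend, slope=0.5, u_star=5.0):
    """Wage/price inflation with a zero floor (downward rigidity)."""
    return max(0.0, pi_trend - slope * (u - u_star))

random.seed(0)
unemployment = [random.uniform(4.0, 10.0) for _ in range(500)]

def fitted_slope(pi_trend):
    """OLS slope of inflation on unemployment across the simulated sample."""
    pi = [phillips(u, pi_trend) for u in unemployment]
    n = len(unemployment)
    mu, mp = sum(unemployment) / n, sum(pi) / n
    cov = sum((u - mu) * (p - mp) for u, p in zip(unemployment, pi))
    var = sum((u - mu) ** 2 for u in unemployment)
    return cov / var

high = fitted_slope(pi_trend=6.0)  # floor never binds: slope is the true -0.5
low = fitted_slope(pi_trend=1.0)   # floor binds for u >= 7: slope much flatter
print(high, low)
```

With trend inflation of 6 percent the minimum simulated inflation rate is still positive, so the floor is irrelevant; with trend inflation of 1 percent roughly half the observations sit on the floor, and the estimated slope shrinks even though the underlying structural slope never changed.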
More generally, estimating inflation dynamics has been far from successful. For example, in many VAR models (a widely used empirical specification for establishing relationships among macroeconomic series), a shock to the federal funds rate often causes prices to go up (theory says they should go down). This can be overcome somewhat by including commodity prices in the model. The idea is that when the Fed expects inflation to go up it raises the federal funds rate, and since the policy does not completely eliminate the inflation, the data will show a positive correlation between the federal funds rate and inflation. Commodity prices are thought to embody and be sensitive to future expected inflation, so including this variable helps to solve the "price puzzle," as it is known. Even so, the results are highly sensitive to specification, and when you work with these models regularly you come away believing that the estimated price dynamics are not very good at all.
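The mechanism behind the price puzzle can be shown in a few lines of simulation (my own toy example with hypothetical parameters, not an estimated model): when the Fed raises rates in response to expected inflation and policy only partly offsets that inflation, rates and subsequent inflation are positively correlated in the data, even though an exogenous rate hike would push inflation down.

```python
# Toy simulation of the "price puzzle" correlation: the Fed reacts to
# forecast inflation pressure with a Taylor-type rule, and since policy
# only dampens (does not eliminate) that pressure, higher rates and
# higher inflation show up together -- a correlation, not a causal effect.
import random

random.seed(1)
T = 2000
rates, inflation = [], []
for _ in range(T):
    pressure = random.gauss(2.0, 1.0)             # inflation pressure the Fed foresees
    rate = 1.0 + 1.5 * pressure                   # Taylor-type reaction to the forecast
    pi = 0.6 * pressure + random.gauss(0.0, 0.5)  # policy dampens, not eliminates
    rates.append(rate)
    inflation.append(pi)

def corr(x, y):
    """Sample correlation coefficient."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    vx = sum((a - mx) ** 2 for a in x)
    vy = sum((b - my) ** 2 for b in y)
    return cov / (vx * vy) ** 0.5

print(corr(rates, inflation))  # strongly positive despite no causal rate->inflation channel
```

A naive regression on such data would "find" that rate hikes raise inflation; controlling for the Fed's information set (the role commodity prices play in the VAR literature) removes the spurious correlation.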
But the Fed must forecast in order to do policy. There are lags (though I've argued they are likely shorter than common wisdom suggests), and the Fed must act before a clear picture emerges. The question is how the Fed should react to such uncertainty about its inflation forecasts. To me -- given the corresponding uncertainties about the state of the labor market, the asymmetric nature of the costs of mistakes about inflation and unemployment, and the distributional issues (who gets hurt by each mistake?) -- it counsels patience rather than urgency on the inflation front.