Category Archive for: Academic Papers

Friday, December 21, 2012

'A Pitfall with DSGE-Based, Estimated, Government Spending Multipliers'

This paper, which I obviously think is worth noting, is forthcoming in AEJ Macroeconomics:

A Pitfall with DSGE-Based, Estimated, Government Spending Multipliers, by Patrick Fève,  Julien Matheron, Jean-Guillaume Sahuc, December 5, 2012: 1 Introduction Standard practice in estimation of dynamic stochastic general equilibrium (DSGE) models, e.g. the well-known work by Smets and Wouters (2007), is to assume that government consumption expenditures are described by an exogenous stochastic process and are separable in the households’ period utility function. This standard practice has been adopted in the most recent analyses of fiscal policy (e.g. Christiano, Eichenbaum and Rebelo, 2011, Coenen et al., 2012, Cogan et al., 2010, Drautzburg and Uhlig, 2011, Eggertsson, 2011, Erceg and Lindé, 2010, Fernández-Villaverde, 2010, Uhlig, 2010).
In this paper, we argue that both short-run and long-run government spending multipliers (GSM) obtained in this literature may be downward biased. This is so because the standard approach does not typically allow for the possibility that private consumption and government spending are Edgeworth complements in the utility function[1] and that government spending has an endogenous countercyclical component (automatic stabilizer)... Since, as we show, the GSM increases with the degree of Edgeworth complementarity,... the standard empirical approach may ... result in a downward-biased estimate of the GSM.
In our benchmark empirical specification with Edgeworth complementarity and a countercyclical component of policy, the estimated long-run multiplier amounts to 1.31. Using the same model..., when both Edgeworth complementarity and the countercyclical component of policy are omitted,... the estimated multiplier is approximately equal to 0.5. Such a difference is clearly not neutral if the model is used to assess recovery plans of the same size as those recently enacted in the US. To illustrate this more concretely, we feed the American Recovery and Reinvestment Act (ARRA) fiscal stimulus package into our model. We obtain that omitting the endogenous policy rule at the estimation stage would lead an analyst to underestimate the short-run GSM by slightly more than 0.25 points. Clearly, these are not negligible figures. ...
_____
1 We say that private consumption and government spending are Edgeworth complements/substitutes when an increase in government spending raises/diminishes the marginal utility of private consumption. Such a specification has now become standard, following the seminal work by Aschauer (1985), Bailey (1971), Barro (1981), Braun (1994), Christiano and Eichenbaum (1992), Finn (1998), McGrattan (1994).
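To make footnote 1 concrete, here is the condition in symbols. The second display is only a generic illustration of the kind of specification this literature uses (utility defined over a bundle of private and public consumption); it is not the specification estimated in the paper.

\[
u = u(c, g), \qquad \frac{\partial^2 u}{\partial c\,\partial g} > 0 \;\Rightarrow\; \text{Edgeworth complements}, \qquad \frac{\partial^2 u}{\partial c\,\partial g} < 0 \;\Rightarrow\; \text{Edgeworth substitutes}.
\]

For example, with utility defined over "effective consumption" c + \alpha g,

\[
u(c, g) = \log(c + \alpha g) \quad\Longrightarrow\quad \frac{\partial^2 u}{\partial c\,\partial g} = -\frac{\alpha}{(c + \alpha g)^2},
\]

so \alpha < 0 delivers Edgeworth complementarity (higher government spending raises the marginal utility of private consumption) and \alpha > 0 delivers substitutability.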

Let me also add these qualifications from the conclusion:

In our framework, we have deliberately abstracted from relevant details... However, the recent literature insists on other modeling issues that might potentially affect our results. We mention two of them. First, as put forth by Leeper, Plante and Traum (2010), a more general specification of government spending rule, lump-sum transfers, and distortionary taxation is needed to properly fit US data. This richer specification includes in addition to the automatic stabilizer component, a response to government debt and co-movement between tax rates. An important quantitative issue may be to assess which type of stabilization (automatic stabilization and/or debt stabilization) interacts with the estimated degree of Edgeworth complementarity. Second, Fiorito and Kollintzas (2004) have suggested that the degree of complementarity/substitutability between government and private consumptions is not homogeneous over types of public expenditures. This suggests to disaggregate government spending and inspect how feedback rules affect the estimated degree of Edgeworth complementarity in this more general setup. These issues will constitute the object of future researches.

Tuesday, December 11, 2012

'What Does the New CRA Paper Tell Us?'

Mike Konczal:

What Does the New Community Reinvestment Act (CRA) Paper Tell Us?, by Mike Konczal: There are two major, critical questions that show up in the literature surrounding the 1977 Community Reinvestment Act (CRA).
The first question is how much compliance with the CRA changes the portfolio of lending institutions. Do they lend more often and to riskier people, or do they lend the same but put more effort into finding candidates? The second question is how much did the CRA lead to the expansion of subprime lending during the housing bubble. Did the CRA have a significant role in the financial crisis?   There's a new paper on the CRA, Did the Community Reinvestment Act (CRA) Lead to Risky Lending?, by Agarwal, Benmelech, Bergman and Seru, h/t Tyler Cowen, with smart commentary already from Noah Smith. (This blog post will use the ungated October 2012 paper for quotes and analysis.) This is already being used as the basis for an "I told you so!" by the conservative press, which has tried to argue that the second question is most relevant. However, it is important to understand that this paper answers the first question, while, if anything, providing evidence against the conservative case for the second. ...
"the very small share of all higher-priced loan originations that can reasonably be attributed to the CRA makes it hard to imagine how this law could have contributed in any meaningful way to the current subprime crisis." ...

Monday, November 05, 2012

'Managing a Liquidity Trap: Monetary and Fiscal Policy'

I like Stephen Williamson a lot better when he puts on his academic cap. I learned something from this:

Managing a Liquidity Trap: Monetary and Fiscal Policy

I disagree with him about the value of forward guidance (though I wouldn't bet the recovery on that one mechanism alone), but it's a nice discussion of the underlying issues.

I was surprised to see this reference to fiscal policy:

I've come to think of the standard New Keynesian framework as a model of fiscal policy. The basic sticky price (or sticky wage) inefficiency comes from relative price distortions. Particularly given the zero lower bound on the nominal interest rate, monetary policy is the wrong vehicle for addressing the problem. Indeed, in Werning's model we can always get an efficient allocation with appropriately-set consumption taxes (see Correia et al., for example). I don't think the New Keynesians have captured what monetary policy is about.

For some reason, I thought he was adamantly opposed to fiscal policy interventions. But I think I'm missing something here -- perhaps he is discussing what this particular model says, or what NK models say more generally, rather than what he believes and endorses. After all, he's not a fan of the NK framework. In any case, in addition to whatever help monetary policy can provide, as just noted in the previous post I agree that fiscal policy has an important role to play in helping the economy recover.

Maurizio Bovi: Are You a Good Econometrician? No, I am British (With a Response from George Evans)

Via email, Maurizio Bovi describes a paper of his on adaptive learning (M. Bovi (2012). "Are the Representative Agent’s Beliefs Based on Efficient Econometric Models?" Journal of Economic Dynamics and Control). A colleague of mine, George Evans -- a leader in this area -- responds:

Are you a good econometrician? No, I am British!, by Maurizio Bovi*: A typical assumption of mainstream strands of research is that agents’ expectations are grounded in efficient econometric models. Muthian agents are all equally rational and know the true model. The adaptive learning literature assumes that agents are boundedly rational in the sense that they are as smart as econometricians and that they are able to learn the correct model. The predictor choice approach argues that individuals are boundedly rational in the sense that agents switch to the forecasting rule that has the highest fitness. Preferences could generate enduring inertia in the dynamic switching process and a stationary environment for a sufficiently long period is necessary to learn the correct model. Having said this, all the cited approaches typically argue that there is a general tendency to forecast via optimal forecasting models because of the costs stemming from inefficient predictions.
To the extent that the representative agent’s beliefs i) are based on efficient (in terms of minimum mean squared forecasting error, MSE) econometric models, and ii) can be captured by ad hoc surveys, two basic facts emerge, stimulating my curiosity. First, in economic systems where the same simple model turns out to be the best predictor for a sufficient span of time, survey expectations should tend to converge: more and more individuals should learn or select it. Second, the forecasting fitness of this enduring minimum-MSE econometric model should not be further enhanced by the use of information provided by survey expectations. If agents act as if they were statisticians in the sense that they use efficient forecasting rules, then survey-based beliefs must reflect this and cannot contain any statistically significant information that helps reduce the MSE relative to the best econometric predictor. In sum, there could be some value in analyzing hard data and survey beliefs to understand i) whether the latter derive from optimal econometric models and ii) the time connections between survey-declared and efficient model-grounded expectations. By examining real-time GDP dynamics in the UK, I have found that, over a time-span of two decades, the adaptive expectations (AE) model systematically outperforms other standard predictors which, as argued by the literature recalled above, should be in the tool-box of representative econometricians (Random Walk, ARIMA, VAR). As mentioned, this peculiar environment should eventually lead to increased homogeneity in best-model-based expectations. However, data collected in the surveys managed by the Business Surveys Unit of the European Commission (European Commission, 2007) highlight that great variety in expectations persists. Figure 1 shows that in the UK the numbers of optimists and pessimists tend to be rather similar at least since the inception of data availability (1985).[1]

[Figure 1. Shares of optimists and pessimists in UK survey expectations since 1985]

In addition, evidence points to one-way information flows going from survey data to econometric models. In particular, Granger-causality, variance decomposition and Geweke’s instantaneous feedback tests suggest that the accuracy of the AE forecasting model can be further enhanced by the use of the information provided by the level of disagreement across survey beliefs. That is, as per GDP dynamics in the UK, the expectation feedback system looks like an open loop where possibly non-econometrically based beliefs play a key role with respect to realizations. All this affects the general validity of the widespread assumption that representative agents’ beliefs derive from optimal econometric models.
Results are robust to several methods of quantifications of qualitative survey observations as well as to standard forecasting rules estimated both recursively and via optimal-size rolling windows. They are also in line both with the literature supporting the non-econometrically-based content of the information captured by surveys carried out on laypeople and, interpreting MSE as a measure of volatility, with the stylized fact on the positive correlation between dispersion in beliefs and macroeconomic uncertainty.
All in all, our evidence raises some intriguing questions: Why do representative UK citizens seem to be systematically more boundedly rational than is usually hypothesized in the adaptive learning literature and the predictor choice approach? What persistently hampers them from using the most accurate statistical model? Are there econometric (objective) or psychological (subjective) impediments?
____________________
*Italian National Institute of Statistics (ISTAT), Department of Forecasting and Economic Analysis. The opinions expressed herein are those of the author (E-mail mbovi@istat.it) and do not necessarily reflect the views of ISTAT.
[1] The question is “How do you expect the general economic situation in the country to develop over the next 12 months?” Respondents may reply “it will…: i) get a lot better, ii) get a little better, iii) stay the same, iv) get a little worse, v) get a lot worse, vi) I do not know. See European Commission (1997).
References
European Commission (2007). The Joint Harmonised EU Programme of Business and Consumer Surveys, User Guide, European Commission, Directorate-General for Economic and Financial Affairs, July.
M. Bovi (2012). “Are the Representative Agent’s Beliefs Based on Efficient Econometric Models?” Journal of Economic Dynamics and Control DOI: 10.1016/j.jedc.2012.10.005.
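As an aside before turning to the response: the kind of forecasting horse race Bovi describes is easy to sketch. The snippet below is mine, not his; it uses simulated data rather than his real-time UK GDP and survey series, and it compares only a fixed-gain adaptive expectations rule, a random walk, and a recursively estimated AR(1), so the numbers carry no evidential weight. It only shows the mechanics of a recursive out-of-sample MSE comparison.

import numpy as np

rng = np.random.default_rng(0)

# Simulated "GDP growth" series with a slowly moving mean (purely illustrative).
T = 200
mu = np.cumsum(0.05 * rng.standard_normal(T))        # slow-moving component
y = mu + 0.5 * rng.standard_normal(T)                # observed growth

def ae_forecasts(y, gamma):
    """Adaptive expectations: f_t = f_{t-1} + gamma * (y_{t-1} - f_{t-1})."""
    f = np.zeros_like(y)
    f[0] = y[0]
    for t in range(1, len(y)):
        f[t] = f[t-1] + gamma * (y[t-1] - f[t-1])
    return f

def rw_forecasts(y):
    """Random walk: the forecast of y_t is y_{t-1}."""
    f = np.empty_like(y)
    f[0] = y[0]
    f[1:] = y[:-1]
    return f

def ar1_forecasts(y, burn=20):
    """AR(1) estimated recursively on data through t-1, then used to forecast y_t."""
    f = np.full_like(y, np.nan)
    for t in range(burn, len(y)):
        b = np.polyfit(y[:t-1], y[1:t], 1)           # regress y_s on y_{s-1}
        f[t] = b[0] * y[t-1] + b[1]
    return f

def mse(y, f, start=20):
    return np.nanmean((y[start:] - f[start:]) ** 2)

print("AE (gamma=0.3)  :", round(mse(y, ae_forecasts(y, 0.3)), 4))
print("Random walk     :", round(mse(y, rw_forecasts(y)), 4))
print("AR(1), recursive:", round(mse(y, ar1_forecasts(y)), 4))

In Bovi's exercise the interesting step comes after a comparison like this: asking whether survey disagreement still helps predict the errors of whichever rule wins.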

Here's the response from George Evans:

Comments on Maurizio Bovi, “Are the Representative Agent’s Beliefs Based on Efficient Econometric Models?”, by George Evans, University of Oregon: This is an interesting paper that has a lot of common ground with the adaptive learning literature. The techniques and a number of the arguments will be familiar to those of us who work in adaptive learning. The tenets of the adaptive learning approach can be summarized as follows: (1) Fully “rational expectations” (RE) are implausibly strong and implicitly ignore a coordination issue that arises because economic outcomes are affected by the expectations of firms and households (economic “agents”). (2) A more plausible view is that agents have bounded rationality with a degree of rationality comparable to economists themselves (the “cognitive consistency principle”). For example, agents’ expectations might be based on statistical models that are revised and updated over time. On this approach we avoid assuming that agents are smarter than economists, but we also recognize that agents will not go on forever making systematic errors. (3) We should recognize that economic agents, like economists, do not agree on a single forecasting model. The economy is complex. Therefore, agents are likely to use misspecified models and to have heterogeneous expectations.
The focus of the adaptive learning literature has changed over time. The early focus was on whether agents using statistical learning rules would or would not eventually converge to RE, while the main emphasis now is on the ways in which adaptive learning can generate new dynamics, e.g. through discounting of older data and/or switching between forecasting models over time. I use the term “adaptive learning” broadly, to include, for example, the dynamic predictor selection literature.
Bovi’s paper “Are the Representative Agent’s Beliefs Based on Efficient Econometric Models?” argues that, with respect to GDP growth in the UK, the answer to his question is no because 1) there is a single efficient econometric model, which is a version of AE (adaptive expectations), and 2) agents might therefore be expected to have learned to adopt this optimal forecasting model over time. However, the degree of heterogeneity of expectations has not fallen over time, and thus agents are failing to learn to use the best forecasting model.
From the adaptive learning perspective, Bovi’s first result is intriguing, and merits further investigation, but his approach will look very familiar to those of us who work in adaptive learning. And the second point will surprise few of us: the extent of heterogeneous expectations is well-known, as is the fact that expectations remain persistently heterogeneous, and there is considerable work within adaptive learning that models this heterogeneity.
More specifically:
1) Bovi’s “efficient” model uses AE with the adaptive expectations parameter gamma updated over time in a way that aims to minimize the squared forecast error. This is in fact a simple adaptive learning model, which was proposed and studied in Evans and Ramey, “Adaptive expectations, underparameterization and the Lucas critique”, Journal of Monetary Economics (2006). We there suggested that agents might want to use AE as an optimal choice for a parsimonious (underparameterized) forecasting rule, showed what would determine the optimal choice of gamma, and provided an adaptive learning algorithm that would allow agents to update their choice of gamma over time in order to track unknown structural change. (Our adaptive learning rule exploits the fact that AE can be viewed as the forecast that arises from an IMA(1,1) time-series model, and in our rule the MA parameter is estimated and updated recursively using a constant gain rule.)
2) At the same time, I am skeptical that economists will agree that there is a single best way to forecast GDP growth. For the US there is a lot of work by numerous researchers that strongly indicates that (i) choosing between univariate time-series models is controversial, i.e. there appears to be no single clearly best univariate forecasting model, and (ii) forecasting models for GDP growth should be multivariate and should include both current and lagged unemployment rates and the consumption to GDP ratio. Other forecasters have found a role for nonlinear (Markov-switching) dynamics. Thus I doubt that there will be agreement by economists on a single best forecasting model for GDP growth or other key macro variables. Hence we should expect households and firms also to entertain multiple forecasting models, and for different agents to use different models.
3) Even if there were a single forecasting model that clearly dominated, one would not expect homogeneity of expectations across agents or for heterogeneity to disappear over time. In Evans and Honkapohja, “Learning as a Rational Foundation for Macroeconomics and Finance”, forthcoming 2013 in R Frydman and E Phelps, Rethinking Expectations: The Way Forward for Macroeconomics, we point out that variations across agents in the extent of discounting and the frequency with which agents update parameter estimates, as well as the inclusion of idiosyncratic exogenous expectation shocks, will give rise to persistent heterogeneity. There are costs to forecasting, and some agents will have larger benefits from more accurate forecasts than other agents. For example, for some agents the forecast method advocated by Bovi will be too costly and an even simpler forecast will be adequate (e.g. a RW forecast that the coming year will be like last year, or a forecast based on mean growth over, say, the last five years).
4) When there are multiple models potentially in play, as there always are, the dynamic predictor selection approach initiated by Brock and Hommes means that because of varying costs of forecast methods, and heterogeneous costs across agents, not all agents will want to use what appears to be the best-performing model. We therefore expect heterogeneous expectations at any moment in time. I do not regard this as a violation of the cognitive consistency principle – even economists will find that in some circumstances in their personal decision-making they use more boundedly rational forecast methods than in other situations in which the stakes are high.
In conclusion, here is my two sentence summary for Maurizio Bovi: Your paper will find an interested audience among those of us who work in this area. Welcome to the adaptive learning approach. 
George Evans
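Evans's point 1 refers to updating the adaptive expectations parameter over time so that it tracks structural change. The sketch below is a deliberately simplified stand-in for that idea: it nudges the gain gamma with a small constant-gain gradient step on the squared forecast error. It is not the Evans and Ramey (2006) algorithm, which works through the IMA(1,1) representation and recursive estimation of the MA parameter; the simulated data and the step size are arbitrary choices that only illustrate the flavor of "learning the gain."

import numpy as np

rng = np.random.default_rng(1)

# Simulated series whose persistence shifts halfway through (illustrative only).
T = 400
eps = rng.standard_normal(T)
y = np.zeros(T)
for t in range(1, T):
    rho = 0.3 if t < T // 2 else 0.9                 # structural change in persistence
    y[t] = rho * y[t-1] + eps[t]

gamma, nu = 0.3, 0.02                                # initial gain, learning step
f = y[0]                                             # forecast for period 1
gammas, sq_errors = [], []

for t in range(1, T):
    e = y[t] - f                                     # one-step forecast error for period t
    # Constant-gain gradient step on the squared error; (y[t-1] - f) is a crude
    # stand-in for d f_t / d gamma (same sign, scaled by 1 - gamma).
    gamma = float(np.clip(gamma + nu * e * (y[t-1] - f), 0.01, 0.99))
    gammas.append(gamma)
    sq_errors.append(e ** 2)
    f = f + gamma * (y[t] - f)                       # AE update: next period's forecast

print("average gain, first half :", round(float(np.mean(gammas[:T // 2])), 2))
print("average gain, second half:", round(float(np.mean(gammas[T // 2:])), 2))
print("RMSE of adaptive-gain AE :", round(float(np.sqrt(np.mean(sq_errors))), 2))

The two averages printed at the end show whether the gain has drifted with the change in persistence, which is the kind of tracking behavior a constant-gain scheme is meant to deliver.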

Saturday, October 27, 2012

Inequality of Income and Consumption

Via an email from Lane Kenworthy, here's more research contradicting the claim made by Kevin Hassett and Aparna Mathur in the WSJ that consumption inequality has not increased (here's my response summarizing additional work challenging their claim, which is really an attempt to blunt the call to use taxation to address the growing inequality problem):

Inequality of Income and Consumption: Measuring the Trends in Inequality from 1985-2010 for the Same Individuals, by Jonathan Fisher, David S. Johnson, and Timothy M. Smeeding: I. Introduction: Income and Consumption The 2012 Economic Report of the President stated: “The confluence of rising inequality and low economic mobility over the past three decades poses a real threat to the United States as a land of opportunity.” This view was also repeated in a speech by Council of Economic Advisers Chairman Alan Krueger (2012). President Obama suggested that inequality was “…the defining issue of our time...” As suggested by Isabel Sawhill (2012), 2011 was the year of inequality.

While there has been an increased interest in inequality, and especially the differences in trends for the top 1 percent vs. the other 99 percent, this increase in inequality is not a new issue. Twenty years ago, Sylvia Nasar (1992) highlighted similar differences in referring to a report by the Congressional Budget Office (CBO) and Paul Krugman introduced the “staircase vs. picket fence” analogy (see Krugman (1992)). He showed that the change in income gains between 1973 and 1993 followed a staircase pattern with income growth rates increasing with income quintiles, a pattern that has been highlighted by many recent studies, including the latest CBO (2011) report. He also showed that the income growth rates were similar for all quintiles from 1947-1973, creating a picket fence pattern across the quintiles.

Recent research shows that income inequality has increased over the past three decades (Burkhauser, et al. (2012), Smeeding and Thompson (2011), CBO (2011), Atkinson, Piketty and Saez (2011)). And most research suggests that this increase is mainly due to the larger increase in income at the very top of the distribution (see CBO (2011) and Saez (2012)). Researchers, however, dispute the extent of the increase. The extent of the increase depends on the resource measure used (income or consumption), the definition of the resource measure (e.g., market income or after-tax income), and the population of interest.

This paper examines the distribution of income and consumption in the US using data that obtains measures of both income and consumption from the same set of individuals and this paper develops a set of inequality measures that show the increase in inequality during the past 25 years using the 1984-2010 Consumer Expenditure (CE) Survey.

The dispute over whether income or consumption should be preferred as a measure of economic well-being is discussed in the National Academy of Sciences (NAS) report on poverty measurement (Citro and Michael (1995), p. 36). The NAS report argues:

Conceptually, an income definition is more appropriate to the view that what matters is a family’s ability to attain a living standard above the poverty level by means of its own resources…. In contrast to an income definition, an expenditure (or consumption) definition is more appropriate to the view that what matters is someone’s actual standard of living, regardless of how it is attained. In practice the availability of high-quality data is often a prime determinant of whether an income- or expenditure-based family resource definition is used.

We agree with this statement and we would extend it to inequality measurement.[1] In cases where both measures are available, both income and consumption are important indicators for the level of and trend in economic well-being. As argued by Attanasio, Battistin, and Padula (2010) “...the joint consideration of income and consumption can be particularly informative.” Both resource measures provide useful information by themselves and in combination with one another. When measures of inequality and economic well-being show the same levels and trends using both income and consumption, then the conclusions on inequality are clear. When the levels and/or trends are different, the conclusions are less clear, but useful information and an avenue for future research can be provided.

We examine the trend in the distribution of these measures from 1985 to 2010. We show that while the level of and changes in inequality differ for each measure, inequality increases for all measures over this period and, as expected, consumption inequality is lower than income inequality. Differing from other recent research, we find that the trends in income and consumption inequality are similar between 1985 and 2006, and diverge during the first few years of the Great Recession (between 2006 and 2010). For the entire 25 year period we find that consumption inequality increases about two-thirds as much as income inequality. We show that the quality of the CE survey data is sufficient to examine both income and consumption inequality. Nevertheless, given the differences in the trends in inequality, using measures of both income and consumption provides useful information. In addition, we present the level of and trends in inequality of both the maximum and the minimum of income and consumption. The maximum and minimum are useful to adjust for life-cycle effects of income and consumption and for potential measurement error in income or consumption. The trends in the maximum and minimum are also useful when consumption and income alone provide different results concerning the measurement of economic well-being. ...
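For readers who want to see what "inequality of the maximum and the minimum of income and consumption" means operationally, here is a small sketch. The household data are simulated and the Gini coefficient is just one convenient summary measure; nothing below reproduces the authors' CE-based estimates.

import numpy as np

rng = np.random.default_rng(2)

# Simulated household data: log-normal income, consumption smoother than income.
n = 10_000
income = np.exp(rng.normal(10.0, 0.7, n))
consumption = np.exp(0.6 * np.log(income) + 4.0 + rng.normal(0.0, 0.3, n))

def gini(x):
    """Gini coefficient computed from sorted data (standard rank-based formula)."""
    x = np.sort(np.asarray(x, dtype=float))
    n_obs = len(x)
    ranks = np.arange(1, n_obs + 1)
    return float(2.0 * np.sum(ranks * x) / (n_obs * x.sum()) - (n_obs + 1.0) / n_obs)

# The paper motivates max/min as adjustments for life-cycle effects and measurement error.
measures = {
    "income":      income,
    "consumption": consumption,
    "maximum":     np.maximum(income, consumption),
    "minimum":     np.minimum(income, consumption),
}
for name, x in measures.items():
    print(f"Gini of {name:12s}: {gini(x):.3f}")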

Friday, October 26, 2012

NBER Economic Fluctuations & Growth Research Meeting

I was supposed to be here today, but a long flight delay made that impossible. (I know better than to route through San Francisco in the fall -- it is often fogged in all morning -- but I took a chance and lost the bet.):

National Bureau of Economic Research
Economic Fluctuations & Growth Research Meeting
Paul Beaudry and John Leahy, Organizers
October 26, 2012, Federal Reserve Bank of New York
10th Floor Benjamin Strong Room
33 Liberty Street, New York, NY
Program
Thursday, October 25:
6:30 pm: Reception and Dinner, Federal Reserve Bank of New York (enter at 44 Maiden Lane), 1st Floor Dining Room
Friday, October 26:
8:30 am: Continental Breakfast
9:00 am: Chang-Tai Hsieh (University of Chicago and NBER), Erik Hurst (University of Chicago and NBER), Charles Jones (Stanford University and NBER), and Peter Klenow (Stanford University and NBER), "The Allocation of Talent and U.S. Economic Growth." Discussant: Raquel Fernandez, New York University and NBER
10:00 am: Break
10:30 am: Fatih Guvenen (University of Minnesota and NBER), Serdar Ozkan (Federal Reserve Board), and Jae Song (Social Security Administration), "The Nature of Countercyclical Income Risk." Discussant: Jonathan Heathcote, Federal Reserve Bank of Minneapolis
11:30 am: Loukas Karabarbounis (University of Chicago and NBER) and Brent Neiman (University of Chicago and NBER), "Declining Labor Shares and the Global Rise of Corporate Savings." Discussant: Robert Hall, Stanford University and NBER
12:30 pm: Lunch
1:30 pm: Stephanie Schmitt-Grohe (Columbia University and NBER) and Martin Uribe (Columbia University and NBER), "Prudential Policy for Peggers." Discussant: Gianluca Benigno, London School of Economics
2:30 pm: Break
3:00 pm: Eric Swanson (Federal Reserve Bank of San Francisco) and John Williams (Federal Reserve Bank of San Francisco), "Measuring the Effect of the Zero Lower Bound on Medium- and Longer-Term Interest Rates." Discussant: James Hamilton, University of California at San Diego and NBER
4:00 pm: Alisdair McKay (Boston University) and Ricardo Reis (Columbia University and NBER), "The Role of Automatic Stabilizers in the U.S. Business Cycle." Discussant: Yuriy Gorodnichenko, University of California at Berkeley and NBER
5:00 pm Adjourn

Wednesday, October 24, 2012

The Myth that Growing Consumption Inequality is a Myth

Kevin Hassett and Aparna Mathur argue that consumption inequality has not increased along with income inequality. That's not what recent research says, but before getting to that, here's their argument:

Consumption and the Myths of Inequality, by Kevin Hassett and Aparna Mathur, Commentary, WSJ: In multiple campaign speeches over the past week, President Obama has emphasized a theme central to Democratic campaigns across the country this year: inequality. ... To be sure, there are studies of income inequality—most prominently by Thomas Piketty of the Paris School of Economics and Emmanuel Saez of the University of California at Berkeley—that report that the share of income of the wealthiest Americans has grown over the past few decades while the share of income at the bottom has not. The studies have problems. Some omit worker compensation in the form of benefits. And economist Alan Reynolds has noted that changes to U.S. tax rules cause more income to be reported at the top and less at the bottom. But even if the studies are accepted at face value, as a read on the evolution of inequality, they leave out too much.

Let me break in here. Here's what Piketty and Saez say about Reynolds's work:

In his December 14 article, “The Top 1% … of What?”, Alan Reynolds casts doubts on the interpretation of our results showing that the share of income going to the top 1% families has doubled from 8% in 1980 to 16% in 2004. In this response, we want to outline why his critiques do not invalidate our findings and contain serious misunderstandings on our academic work. ...

Back to Hassett and Mathur:

Another way to look at people's standard of living over time is by their consumption. Consumption is an even more relevant metric of overall welfare than pre-tax cash income, and it will be set by consumers with an eye on their lifetime incomes. Economists, including Dirk Krueger and Fabrizio Perri of the University of Pennsylvania, have begun to explore consumption patterns, which show a different picture than research on income.

Let me break in again and deal with the Krueger and Perri (2006) paper, which followed the related work by Slesnick (2001):

Has Consumption Inequality Mirrored Income Inequality?: This paper by Mark Aguiar and Mark Bils finds that "consumption inequality has closely tracked income inequality over the period 1980-2007":

Has Consumption Inequality Mirrored Income Inequality?, by Mark A. Aguiar and Mark Bils, NBER Working Paper No. 16807, February 2011: Abstract We revisit to what extent the increase in income inequality over the last 30 years has been mirrored by consumption inequality. We do so by constructing two alternative measures of consumption expenditure, using data from the Consumer Expenditure Survey (CE). We first use reports of active savings and after tax income to construct the measure of consumption implied by the budget constraint. We find that the consumption inequality implied by savings behavior largely tracks income inequality between 1980 and 2007. Second, we use a demand system to correct for systematic measurement error in the CE's expenditure data. ...This second exercise indicates that consumption inequality has closely tracked income inequality over the period 1980-2007. Both of our measures show a significantly greater increase in consumption inequality than what is obtained from the CE's total household expenditure data directly.
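Here is a stylized sketch of the first Aguiar-Bils exercise described in the abstract: backing consumption out of the household budget constraint as after-tax income minus active saving, and comparing an inequality measure based on that series with one based on reported expenditure. All of the data below are simulated, and the under-reporting process is only a crude stand-in for the non-classical measurement error they discuss; the point is to show the accounting, not their results.

import numpy as np

rng = np.random.default_rng(3)

# Simulated households: after-tax income, active saving, and a noisy expenditure report.
n = 5_000
log_income = rng.normal(10.5, 0.8, n)
after_tax_income = np.exp(log_income)
active_saving = 0.15 * after_tax_income * np.exp(rng.normal(0.0, 0.5, n))

# Budget-constraint measure: consumption implied by income minus active saving.
consumption_bc = np.maximum(after_tax_income - active_saving, 1.0)

# "Reported" expenditure: consumption observed with under-reporting that worsens
# with income (a crude stand-in for non-classical measurement error).
underreport = 1.0 - 0.25 * (log_income - log_income.min()) / np.ptp(log_income)
consumption_reported = consumption_bc * underreport * np.exp(rng.normal(0.0, 0.2, n))

def var_log(x):
    """Variance of logs, a common inequality measure."""
    return float(np.var(np.log(x)))

print("variance of log income                :", round(var_log(after_tax_income), 3))
print("variance of log consumption (budget)  :", round(var_log(consumption_bc), 3))
print("variance of log consumption (reported):", round(var_log(consumption_reported), 3))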

Why is this important? (see also "Is Consumption the Grail for Inequality Skeptics?"):

An influential paper by Krueger and Perri (2006), building on related work by Slesnick (2001), uses the CE to argue that consumption inequality has not kept pace with income inequality.

And these results have been used by some -- e.g. those who fear corrective action such as an increase in the progressivity of taxes -- to argue that the inequality problem is not as large as figures on income inequality alone suggest. But the bottom line of this paper is that:

The ... increase in consumption inequality has been large and of a similar magnitude as the observed change in income inequality.

So they are citing what is now dated work. They either don't know about the more recent work, or simply chose to ignore it because it doesn't say what they need it to say.

Okay, back to Hassett and Mathur once again. They go on to cite their own work -- more on that below. One thing to note, however, is that the recent research in this area says the data they use must be corrected for measurement error or you are likely to find the (erroneous) results they find. As far as I can tell, the data are not corrected:

Our recent study, "A New Measure of Consumption Inequality," found that the consumption gap across income groups has remained remarkably stable over time. ...
While this stability is something to applaud, surely more important are the real gains in consumption by income groups over the past decade. From 2000 to 2010, consumption has climbed 14% for individuals in the bottom fifth of households, 6% for individuals in the middle fifth, and 14.3% for individuals in the top fifth when we account for changes in U.S. population and the size of households. This despite the dire economy at the end of the decade.

Should we trust this research? First of all, this is Kevin Hassett. How much do you trust the work once you know that? Second, it's on the WSJ editorial page. How much does that reduce your trust? I'd hope the answer is "quite a bit." Third, big red flags go up when researchers cherry-pick start and/or end dates. Fourth, as already noted, recent research shows that the finding of no growth in consumption inequality is due to measurement error in the CES data. When the data are corrected, consumption inequality mirrors income inequality. They don't say a word about correcting the data.

Next, we get the "but they have cell phones!" argument:

Yet the access of low-income Americans—those earning less than $20,000 in real 2009 dollars—to devices that are part of the "good life" has increased. The percentage of low-income households with a computer rose... Appliances? The percentage of low-income homes with air-conditioning equipment..., dishwashers..., a washing machine..., a clothes dryer..., [and] microwave ovens... grew... Fully 75.5% of low-income Americans now have a cell phone, and over a quarter of those have access to the Internet through their phones.

Before turning to their conclusion, let me note more new research in this area from a post earlier this year, But They Have TVs and Cell Phones!, emphasizing the measurement error problem:

Consumption Inequality Has Risen About As Fast As Income Inequality, by Matthew Yglesias: Going back a few years one thing you used to hear about America's high and rising level of income inequality is that it wasn't so bad because there wasn't nearly as much inequality of consumption. This story started to fall apart when it turned out that ever-higher levels of private indebtedness were unsustainable (nobody could have predicted...) but Orazio Attanasio, Erik Hurst, and Luigi Pistaferri report in a new NBER working paper "The Evolution of Income, Consumption, and Leisure Inequality in The US, 1980-2010" that the apparently modest increase in consumption inequality is actually a statistical error.
They say that the Consumer Expenditure Survey data from which the old-school finding is drawn is plagued by non-classical measurement error and adopt four different approaches to measuring consumption inequality that shouldn't be hit by the same problem. All four alternatives point in the same direction: "consumption inequality within the U.S. between 1980 and 2010 has increased by nearly the same amount as income inequality."

Here's Hassett and Mathur's ending:

It is true that the growth of the safety net has contributed to massive government deficits—and a larger government that likely undermines economic growth and job creation. It is an open question whether the nation will be able to reshape the net in order to sustain it, but reshape it we must. ...

After arguing (wrongly) that consumption has kept pace with income, they say it's only because of the deficit-financed safety net -- and that it's not sustainable. So suck it up, middle class America: consumption inequality has increased despite the claims of denialists like Hassett, and if they get their way and the social safety net is cut back, it will only get worse.

Hassett and company denied that income inequality was growing for years (notice their attempt to do just that in the first paragraph by citing discredited research from Alan Reynolds), then when the evidence made it absolutely clear they were wrong (surprise!), they switched to consumption inequality. Recent evidence says they're wrong about that too.

Friday, October 12, 2012

'Some Unpleasant Properties of Log-Linearized Solutions when the Nominal Interest Rate is Zero'

[This one is wonkish. It's (I think) one of the more important papers from the St. Louis Fed conference.]

One thing that doesn't get enough attention in DSGE models, at least in my opinion, is the set of constraints, implicit assumptions, etc. imposed when the theoretical model is log-linearized. This paper by Tony Braun and Yuichiro Waki helps to fill that void by comparing a theoretical true economy to its log-linearized counterpart, and showing that the results of the two models can be quite different when the economy is at the zero bound. For example, multipliers that are greater than two in the log-linearized version are smaller -- usually near one -- in the true model (thus, fiscal policy remains effective, but may need to be more aggressive than the log-linear model would imply). Other results change as well, and there are sign changes in some cases, leading the authors to conclude that "we believe that the safest way to proceed is to entirely avoid the common practice of log-linearizing the model around a stable price level when analyzing liquidity traps."

Here's part of the introduction and the conclusion to the paper:

Some Unpleasant Properties of Log-Linearized Solutions when the Nominal Interest Rate is Zero, by Tony Braun and Yuichiro Waki: Abstract Does fiscal policy have qualitatively different effects on the economy in a liquidity trap? We analyze a nonlinear stochastic New Keynesian model and compare the true and log-linearized equilibria. Using the log-linearized equilibrium conditions the answer to the above question is yes. However, for the true nonlinear model the answer is no. For a broad range of empirically relevant parameterizations labor falls in response to a tax cut in the log-linearized economy but rises in the true economy. While the government purchase multiplier is above two in the log-linearized economy it is about one in the true economy.
1 Introduction The recent experiences of Japan, the United States, and Europe with zero/near-zero nominal interest rates have raised new questions about the conduct of monetary and fiscal policy in a liquidity trap. A large and growing body of new research has emerged that provides answers using New Keynesian (NK) frameworks that explicitly model the zero bound on the nominal interest rate. One conclusion that has emerged is that fiscal policy has different effects on the economy when the nominal interest rate is zero. Eggertsson (2011) finds that hours worked fall in response to a labor tax cut when the nominal interest rate is zero, a property that is referred to as the “paradox of toil,” and Christiano, Eichenbaum, and Rebelo (2011), Woodford (2011) and Erceg and Lindé (2010) find that the size of the government purchase multiplier is substantially larger than one when the nominal interest rate is zero.
These and other results ( see e.g. Del Negro, Eggertsson, Ferrero, and Kiyotaki (2010), Bodenstein, Erceg, and Guerrieri (2009), Eggertsson and Krugman (2010)) have been derived in setups that respect the nonlinearity in the Taylor rule but loglinearize the remaining equilibrium conditions about a steady state with a stable price level. Log-linearized NK models require large shocks to generate a binding zero lower bound for the nominal interest rate and the shocks must be even larger if these models are to reproduce the measured declines in output and inflation that occurred during the Great Depression or the Great Recession of 2007-2009.[1] Log-linearizations are local solutions that only work within a given radius of the point where the approximation is taken. Outside of this radius these solutions break down (See e.g. Den Haan and Rendahl (2009)). The objective of this paper is to document that such a breakdown can occur when analyzing the zero bound.
We study the properties of a nonlinear stochastic NK model when the nominal interest rate is constrained at its zero lower bound. Our tractable framework allows us to provide a partial analytic characterization of equilibrium and to numerically compute all equilibria when the zero interest state is persistent. There are no approximations needed when computing equilibria and our numerical solutions are accurate up to the precision of the computer. A comparison with the log-linearized equilibrium identifies a severe breakdown of the log-linearized approximate solution. This breakdown occurs when using parameterizations of the model that reproduce the U.S. Great Depression and the U.S. Great Recession.
Conditions for existence and uniqueness of equilibrium based on the log-linearized equilibrium conditions are incorrect and offer little or no guidance for existence and uniqueness of equilibrium in the true economy. The characterization of equilibrium is also incorrect.
These three unpleasant properties of the log-linearized solution have the implication that relying on it to make inferences about the properties of fiscal policy in a liquidity trap can be highly misleading. Empirically relevant parameterization/shock combinations that yield the paradox of toil in the log-linearized economy produce orthodox responses of hours worked in the true economy. The same parameterization/shock combinations that yield large government purchases multipliers in excess of two in the log-linearized economy, produce government purchase multipliers as low as 1.09 in the nonlinear economy. Indeed, we find that the most plausible parameterizations of the nonlinear model have the property that there is no paradox of toil and that the government purchase multiplier is close to one.
We make these points using a stochastic NK model that is similar to specifications considered in Eggertsson (2011) and Woodford (2011). The Taylor rule respects the zero lower bound of the nominal interest rate, and a preference discount factor shock that follows a two state Markov chain produces a state where the interest rate is zero. We assume Rotemberg (1996) price adjustment costs, instead of Calvo price setting. When log-linearized, this assumption is innocuous - the equilibrium conditions for our model are identical to those in Eggertsson (2011) and Woodford (2011), with a suitable choice of the price adjustment cost parameter. Moreover, the nonlinear economy doesn’t have any endogenous state variables, and the equilibrium conditions for hours and inflation can be reduced to two nonlinear equations in these two variables when the zero bound is binding.[2]
These two nonlinear equations are easy to solve and are the nonlinear analogues of what Eggertsson (2011) and Eggertsson and Krugman (2010) refer to as “aggregate demand” (AD) and “aggregate supply” (AS) schedules. This makes it possible for us to identify and relate the sources of the approximation errors associated with using log-linearizations to the shapes and slopes of these curves, and to also provide graphical intuition for the qualitative differences between the log-linear and nonlinear economies.
Our analysis proceeds in the following way. We first provide a complete characterization of the set of time invariant Markov zero bound equilibria in the log-linearized economy. Then we go on to characterize equilibrium of the nonlinear economy. Finally, we compare the two economies and document the nature and source of the breakdowns associated with using log-linearized equilibrium conditions. An important distinction between the nonlinear and log-linearized economy relates to the resource cost of price adjustment. This cost changes endogenously as inflation changes in the nonlinear model and modeling this cost has significant consequences for the model’s properties in the zero bound state. In the nonlinear model a labor tax cut can increase hours worked and decrease inflation when the interest rate is zero. No equilibrium of the log-linearized model has this property. We show that this and other differences in the properties of the two models is precisely due to the fact that the resource cost of price adjustment is absent from the resource constraint of the log-linearized model.[3] ...
...
5 Concluding remarks In this paper we have documented that it can be very misleading to rely on the log-linearized economy to make inferences about existence of an equilibrium, uniqueness of equilibrium or to characterize the local dynamics of equilibrium. We have illustrated that these problems arise in empirically relevant parameterizations of the model that have been chosen to match observations from the Great Depression and Great Recession.
We have also documented the response of the economy to fiscal shocks in calibrated versions of our nonlinear model. We found that the paradox of toil is not a robust property of the nonlinear model and that it is quantitatively small even when it occurs. Similarly, the evidence presented here suggests that the government purchase GDP multiplier is not much above one in our nonlinear economy.
Although we encountered situations where the log-linearized solution worked reasonably well and the model exhibited the paradox of toil and a government purchase multiplier above one, the magnitude of these effects was quantitatively small. This result was also very tenuous. There is no simple characterization of when the log-linearization works well. Breakdowns can occur in regions of the parameter space that are very close to ones where the log-linear solution works. In fact, it is hard to draw any conclusions about when one can safely rely on log-linearized solutions in this setting without also solving the nonlinear model. For these reasons we believe that the safest way to proceed is to entirely avoid the common practice of log-linearizing the model around a stable price level when analyzing liquidity traps.
This raises a question. How should one proceed with solution and estimation of medium or large scale NK models with multiple shocks and endogenous state variables when considering episodes with zero nominal interest rates? One way forward is proposed in work by Adjemian and Juillard (2010) and Braun and Körber (2011). These papers solve NK models using extended path algorithms.
We conclude by briefly discussing some extensions of our analysis. In this paper we assumed that the discount factor shock followed a time-homogeneous two state Markov chain with no shock being the absorbing state. In our current work we relax this final assumption and consider general Markov switching stochastic equilibria in which there are repeated swings between episodes with a positive interest rate and zero interest rates. We are also interested in understanding the properties of optimal monetary policy in the nonlinear model. Eggertsson and Woodford (2003), Jung, Teranishi, and Watanabe (2005), Adam and Billi (2006), Nakov (2008), and Werning (2011) consider optimal monetary policy problems subject to a non-negativity constraint on the nominal interest rate, using implementability conditions derived from log-linearized equilibrium conditions. The results documented here suggest that the properties of an optimal monetary policy could be different if one uses the nonlinear implementability conditions instead.
[1] Eggertsson (2011) requires a 5.47% annualized shock to the preference discount factor in order to account for the large output and inflation declines that occurred in the Great Depression. Coenen, Orphanides, and Wieland (2004) estimate a NK model to U.S. data from 1980-1999 and find that only very large shocks produce a binding zero nominal interest rate.
[2] Under Calvo price setting, in the nonlinear economy a particular moment of the price distribution is an endogenous state variable and it is no longer possible to compute an exact solution to the equilibrium.
[3] This distinction between the log-linearized and nonlinear resource constraint is not specific to our model of adjustment costs but also arises under Calvo price adjustment (see e.g. Braun and Waki (2010)).
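A toy calculation helps show why the point in footnote 3 can matter. With Rotemberg-style quadratic price adjustment costs, the resource constraint contains a term proportional to squared inflation, roughly (phi/2)*pi^2 times output, and log-linearizing around zero inflation drops that term entirely. The parameter value below is an arbitrary illustrative choice, not the paper's calibration; the sketch only shows that the dropped term is second order near the approximation point but not negligible for the large inflation movements that zero-bound experiments involve.

# Share of output absorbed by Rotemberg price adjustment costs, (phi/2) * pi^2,
# for various annualized net inflation rates. The log-linear approximation around
# zero inflation sets this share to zero, whatever pi is.
phi = 100.0                      # illustrative adjustment-cost parameter (an assumption)

for annual_infl_pct in [0.5, 1.0, 2.0, 5.0, 10.0]:
    pi = annual_infl_pct / 100.0 / 4.0          # quarterly net inflation
    cost_share = 0.5 * phi * pi ** 2            # fraction of gross output used up
    print(f"annual inflation {annual_infl_pct:4.1f}%  ->  "
          f"adjustment-cost share of output: {100 * cost_share:.3f}%  "
          f"(log-linear model: 0.000%)")

Near zero inflation the dropped term is tiny, but for inflation or deflation on the scale of the Great Depression or Great Recession experiments it amounts to a noticeable fraction of output, which is the channel the authors point to.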

Qualitative Easing: How it Works and Why it Matters

From the Fed conference in St. Louis, Roger Farmer makes what I think is a useful distinction between quantitative easing and qualitative easing (the distinction, first made by Buiter in 2008, is useful independent of his paper). In the paper he argues that it's the composition of the central bank's balance sheet, not its size, that matters: in the model, people cannot participate in financial markets that open before they are born, leading to incomplete participation, and qualitative easing works by completing those markets and having the Fed engage in Pareto-improving trades:

Qualitative Easing: How it Works and Why it Matters, by Roger E.A. Farmer: Abstract This paper is about the effectiveness of qualitative easing; a government policy that is designed to mitigate risk through central bank purchases of privately held risky assets and their replacement by government debt, with a return that is guaranteed by the taxpayer. Policies of this kind have recently been carried out by national central banks, backed by implicit guarantees from national treasuries. I construct a general equilibrium model where agents have rational expectations and there is a complete set of financial securities, but where agents are unable to participate in financial markets that open before they are born. I show that a change in the asset composition of the central bank’s balance sheet will change equilibrium asset prices. Further, I prove that a policy in which the central bank stabilizes fluctuations in the stock market is Pareto improving and is costless to implement.
1 Introduction Central banks throughout the world have recently engaged in two kinds of unconventional monetary policies: quantitative easing (QE), which is “an increase in the size of the balance sheet of the central bank through an increase in its monetary liabilities”, and qualitative easing (QuaE) which is “a shift in the composition of the assets of the central bank towards less liquid and riskier assets, holding constant the size of the balance sheet.”[1]
I have made the case, in a recent series of books and articles, (Farmer, 2006, 2010a,b,c,d, 2012, 2013), that qualitative easing can stabilize economic activity and that a policy of this kind will increase economic welfare. In this paper I provide an economic model that shows how qualitative easing works and why it matters.
Because qualitative easing is conducted by the central bank, it is often classified as a monetary policy. But because it adds risk to the public balance sheet that is ultimately borne by the taxpayer, QuaE is better thought of as a fiscal or quasi-fiscal policy (Buiter, 2010). This distinction is important because, in order to be effective, QuaE necessarily redistributes resources from one group of agents to another.
The misclassification of QuaE as monetary policy has led to considerable confusion over its effectiveness and a misunderstanding of the channel by which it operates. For example, in an influential piece that was presented at the 2012 Jackson Hole Conference, Woodford (2012) made the claim that QuaE is unlikely to be effective and, to the extent that it does stimulate economic activity, that stimulus must come through the impact of QuaE on the expectations of financial market participants of future Fed policy actions.
The claim that QuaE is ineffective is based on the assumption that it has no effect on the distribution of resources, either between borrowers and lenders in the current financial markets, or between current market participants and those yet to be born. I will argue here that that assumption is not a good characterization of the way that QuaE operates, and that QuaE is effective precisely because it alters the distribution of resources by effecting Pareto improving trades that agents are unable to carry out for themselves.
I make the case for qualitative easing by constructing a simple general equilibrium model where agents are rational, expectations are rational and the financial markets are complete. My work differs from most conventional models of financial markets because I make the not unreasonable assumption that agents cannot participate in financial markets that open before they are born. In this environment, I show that qualitative easing changes asset prices and that a policy where the central bank uses QuaE to stabilize the value of the stock market is Pareto improving and is costless to implement.
My argument builds upon an important theoretical insight due to Cass and Shell (1983), who distinguish between intrinsic uncertainty and extrinsic uncertainty. Intrinsic uncertainty is a random variable that influences the fundamentals of the economy; preferences, technologies and endowments. Extrinsic uncertainty is anything that does not. Cass and Shell refer to extrinsic uncertainty as sunspots.[2]
In this paper, I prove four propositions. First, I show that employment, consumption and the real wage are a function of the amount of outstanding private debt. Second, I prove that the existence of complete insurance markets is insufficient to prevent the existence of equilibria where employment, consumption and the real wage differ in different states, even when all uncertainty is extrinsic. Third, I introduce a central bank and I show that a central bank swap of safe for risky assets will change the relative price of debt and equity. Finally, I prove that a policy of stabilizing the value of the stock market is welfare improving and that it does not involve a cost to the taxpayer in any state of the world.
...

10 Conclusion An asset price stabilization policy is now under discussion as a result of the failure of traditional monetary policy to move the economy out of the current recession. Most of the academic literature sees the purchase of risky assets by the central bank as an alternative form of monetary policy. In this view, if a central bank asset policy works at all, it works by signaling the intent of future policy makers to keep interest rates low for a longer period than would normally be warranted, once the economy begins to recover. In my view, that argument is incorrect.

Central bank asset purchases have little if anything to do with traditional monetary policy. In some models, asset swaps by the central banks are effective because the central bank has the monopoly power to print money. Although that channel may play a secondary role when the interest rate is at the zero lower bound (Farmer, 2013), it is not the primary channel through which qualitative easing affects asset prices. Central bank open market operations in risky assets are effective because government has the ability to complete the financial markets by standing in for agents who are unable to transact before they are born and it is a policy that would be effective, even in a world where money was not needed as a medium of exchange.

I have made the case, in a recent series of books and articles (Farmer, 2006, 2010a,b,c,d, 2012, 2013), that qualitative easing matters. In this paper I have provided an economic model that shows why it matters.

[1] The quote is from Willem Buiter (2008) who proposed this very useful taxonomy in a piece on his ‘Maverecon’ Financial Times blog.
[2] This is quite different from the original usage of the term by Jevons (1878) who developed a theory of the business cycle, driven by fluctuations in agricultural conditions that were ultimately caused by physical sunspot activity.
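Buiter's taxonomy quoted in the introduction above is simple enough to state as bookkeeping. The sketch below is only an illustration of the two definitions, with made-up asset categories and numbers: quantitative easing expands the balance sheet by issuing new monetary liabilities, while qualitative easing shifts the asset mix toward riskier assets holding the size fixed.

from dataclasses import dataclass

@dataclass
class CentralBankBalanceSheet:
    safe_assets: float      # e.g. short-term government bills
    risky_assets: float     # e.g. long-term or private-sector securities
    liabilities: float      # reserves plus currency ("monetary liabilities")

    def size(self) -> float:
        return self.safe_assets + self.risky_assets

    def quantitative_easing(self, amount: float) -> None:
        """QE: expand the balance sheet by buying (here, safe) assets with new reserves."""
        self.safe_assets += amount
        self.liabilities += amount

    def qualitative_easing(self, amount: float) -> None:
        """QuaE: swap safe for risky assets, holding the size of the balance sheet fixed."""
        self.safe_assets -= amount
        self.risky_assets += amount

bs = CentralBankBalanceSheet(safe_assets=80.0, risky_assets=20.0, liabilities=100.0)
bs.quantitative_easing(50.0)     # size rises from 100 to 150
bs.qualitative_easing(30.0)      # composition shifts; size stays at 150
print(f"size = {bs.size():.0f}, risky share = {bs.risky_assets / bs.size():.2f}")

Nothing in this bookkeeping speaks to Farmer's welfare results, of course; it only pins down which operation his model is about.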

Thursday, October 11, 2012

'Job Polarization and Jobless Recoveries'

Interesting paper (it's being presented as I type this):

The Trend is the Cycle: Job Polarization and Jobless Recoveries, by Nir Jaimovich and Henry E. Siu, NBER: Abstract Job polarization refers to the recent disappearance of employment in occupations in the middle of the skill distribution. Jobless recoveries refers to the slow rebound in aggregate employment following recent recessions, despite recoveries in aggregate output. We show how these two phenomena are related. First, job polarization is not a gradual process; essentially all of the job loss in middle-skill occupations occurs in economic downturns. Second, jobless recoveries in the aggregate are accounted for by jobless recoveries in the middle-skill occupations that are disappearing.
1 Introduction In the past 30 years, the US labor market has seen the emergence of two new phenomena: "job polarization" and "jobless recoveries." Job polarization refers to the increasing concentration of employment in the highest- and lowest-wage occupations, as job opportunities in middle-skill occupations disappear. Jobless recoveries refer to periods following recessions in which rebounds in aggregate output are accompanied by much slower recoveries in aggregate employment. We argue that these two phenomena are related.
Consider first the phenomenon of job polarization. Acemoglu (1999), Autor et al. (2006), Goos and Manning (2007), and Goos et al. (2009) (among others) document that, since the 1980s, employment is becoming increasingly concentrated at the tails of the occupational skill distribution. This hollowing out of the middle has been linked to the disappearance of jobs focused on "routine" tasks -- those activities that can be performed by following a well-defined set of procedures. Autor et al. (2003) and the subsequent literature demonstrates that job polarization is due to progress in technologies that substitute for labor in routine tasks.1
In this same time period, Gordon and Baily (1993), Groshen and Potter (2003), Bernanke (2003), and Bernanke (2009) (among others) discuss the emergence of jobless recoveries. In the past three recessions, aggregate employment continues to decline for years following the turning point in aggregate income and output. No consensus has yet emerged regarding the source of these jobless recoveries.
In this paper, we demonstrate that the two phenomena are connected to each other. We make two related claims. First, job polarization is not simply a gradual phenomenon: the loss of middle-skill, routine jobs is concentrated in economic downturns. Specifically, 92% of the job loss in these occupations since the mid-1980s occurs within a 12 month window of NBER dated recessions (that have all been characterized by jobless recoveries). In this sense, the job polarization "trend" is a business "cycle" phenomenon. This contrasts with the existing literature, in which job polarization is oftentimes depicted as a gradual phenomenon, though a number of researchers have noted that this process has been accelerated by the Great Recession (see Autor (2010); and Brynjolfsson and McAfee (2011)). Our first point is that routine employment loss happens almost entirely in recessions.
Our second point is that job polarization accounts for jobless recoveries. This argument is based on three facts. First, employment in the routine occupations identified by Autor et al. (2003) and others account for a significant fraction of aggregate employment; averaged over the jobless recovery era, these jobs account for more than 50% of total employment. Second, essentially all of the contraction in aggregate employment during NBER dated recessions can be attributed to recessions in these middle-skill, routine occupations. Third, jobless recoveries are observed only in these disappearing, middle-skill jobs. The high- and low-skill occupations to which employment is polarizing either do not experience contractions, or if they do, rebound soon after the turning point in aggregate output. Hence, jobless recoveries can be traced to the disappearance of routine occupations in recessions. Finally, it is important to note that jobless recoveries were not observed in routine occupations (nor in aggregate employment) prior to the era of job polarization. ...
[1] See also Firpo et al. (2011), Goos et al. (2011), and the references therein regarding the role of outsourcing and offshoring in job polarization.
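To make the first claim concrete, here is a rough sketch, written by me rather than taken from the paper, of how a statistic like "92% of routine job loss occurs within a 12 month window of NBER recessions" can be computed from an employment series and a set of recession dates. The employment levels, recession months, and window used below are all hypothetical:

```python
# Rough sketch (mine, not the authors') of a calculation like "X% of the decline
# in routine employment occurs within a window of NBER recession months."
# Employment levels, recession months, and the window are all hypothetical.

def share_of_losses_near_recessions(employment, recession_months, window):
    """Share of all month-on-month employment declines that fall within
    `window` months of any recession month (months indexed 0, 1, 2, ...)."""
    near = set()
    for m in recession_months:
        near.update(range(m - window, m + window + 1))
    total_loss = loss_near = 0.0
    for t in range(1, len(employment)):
        change = employment[t] - employment[t - 1]
        if change < 0:
            total_loss += -change
            if t in near:
                loss_near += -change
    return loss_near / total_loss if total_loss else 0.0

# Toy routine-employment series (millions) with a recession in months 6-9;
# the paper uses a 12-month window, but this toy series is short, so use 2.
employment = [100, 100, 101, 101, 100, 100, 97, 94, 92, 91, 91, 91, 92, 92, 91]
recessions = [6, 7, 8, 9]
print(share_of_losses_near_recessions(employment, recessions, window=2))  # ~0.91
```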

Here are a few graphs showing routine versus non-routine employment changes. The point is that in the period of increased job polarization, most of the job losses have occurred during recessions (the first graph is non-routine cognitive, the second is non-routine manual, and the third is routine -- the third graph shows this best: after 1990, job losses are not recovered after the recession ends as they were in earlier years).

[Three charts: non-routine cognitive employment, non-routine manual employment, and routine employment]
Notes: Data from the Bureau of Labor Statistics, Current Population Survey. See Appendix A for details. NR COG = non-routine cognitive; NR MAN = non-routine manual; R = routine.

Interestingly, the paper argues the explanation for jobless recoveries is not an education story:

The share of low educated workers in the labor force (i.e., those with high school diplomas or less) has declined in the last three decades, and these workers exhibit greater business cycle sensitivity than those with higher education. It is thus reasonable to conjecture that the terms "routine" and "low education" are interchangeable. In what follows, we show that this is not the case.

And it's not a manufacturing story:

we first demonstrate that job loss in manufacturing accounts for only a fraction of job polarization. Secondly, we show that the jobless recoveries experienced in the past 30 years cannot be explained by jobless recoveries in the manufacturing sector.

So what story is it? The answer is in the conclusion to the paper:

In the last 30 years the US labor market has been characterized by job polarization and jobless recoveries. In this paper we demonstrate how these are related. We first show that the loss of middle-skill, routine jobs is concentrated in economic downturns. In this sense, the job polarization trend is a business cycle phenomenon. Second, we show that job polarization accounts for jobless recoveries. This argument is based on the fact that almost all of the contraction in aggregate employment during recessions can be attributed to job losses in middle-skill, routine occupations (that account for a large fraction of total employment), and that jobless recoveries are observed only in these disappearing routine jobs since job polarization began. We then propose a simple search-and-matching model of the labor market with occupational choice to rationalize these facts. We show how a trend in routine-biased technological change can lead to job polarization that is concentrated in downturns, and recoveries from these recessions that are jobless.

That is, in recessions, the job separation rate isn't much different than in the past. But the job finding rate is much lower. Thus, the story is that recessions generate job separations, and "In the recession, job separations were concentrated among the middle-skill, routine workers. The recovery in aggregate employment then depends on the post-recession job finding rate of these workers now searching," and this finding rate is low (and when jobs are found, the outcome is polarizing).

Have Blog, Will Travel: 37th Annual Federal Reserve Bank of St. Louis Fall Conference

I am here (later) today and tomorrow:

37th Annual Federal Reserve Bank of St. Louis Fall Conference, October 11-12, 2012

All conference events will take place at the Federal Reserve Bank of St. Louis Gateway Conference Center, Sixth Floor

Thursday, October 11, 2012

12:00-12:30 P.M. Light lunch

12:30-12:45 P.M. Introductory remarks by Christopher Waller

Session I

12:45-2:45 P.M. "Some Unpleasant Properties of Log-Linearized Solutions when the Nominal Interest Rate is Zero" Presenter: Tony Braun, FRB Atlanta Coauthors:  Yuichiro Waki, University of Queensland and Lena Koerber, London School of Economics

"The Trend is the Cycle: Job Polarization and Jobless Recoveries" Presenter: Nir Jaimovich, Duke University Coauthor: Henry Siu, University of British Columbia

2:45-3:00 P.M. Coffee Break

Session II

3:00-5:00 P.M. "Liquidity, Assets and Business Cycles" Presenter: Shouyong Shi, University of Toronto

"Spatial Equilibrium with Unemployment and Wage Bargaining: Theory and Evidence" Presenter: Paul Beaudry, University of British Columbia Coauthors: David Green, University of British Columbia and Benjamin Sand, York University

5:00-5:15 P.M. Coffee Break

Session III

5:15-6:15 P.M. "On the Social Usefulness of Fractional Reserve Banking" Presenter:  Chris Phelan, University of Minnesota and FRB Minneapolis Coauthor:  V.V. Chari, University of Minnesota and FRB Minneapolis

6:15-7:00 P.M. Reception

7:00 P.M. Dinner with speech by James Bullard

Friday, October 12, 2012

8:30-9:00 A.M. Continental Breakfast

Session I

9:00-11:00 A.M. "Crisis and Commitment: Inflation Credibility and the Vulnerability to Sovereign Debt Crises" Presenter: Mark Aguiar, Princeton University Coauthors: Manuel Amador, Stanford, Gita Gopinath, Harvard, and Emmanuel Farhi, Harvard

"The Market for OTC Credit Derivative" Presenter:  Pierre-Olivier Weill, UCLA Coauthors: Andy Atkeson, UCLA and Andrea Eisfeldt, UCLA

11:00-11:15 A.M. Coffee Break

Session II

11:15-12:15 P.M. "Overborrowing, Financial Crises and 'Macro-prudential' Policy" Presenter: Enrique Mendoza, University of Maryland Coauthor: Javier Bianchi, University of Wisconsin

12:15-1:30 P.M. Lunch

Session III

1:30-2:30 P.M. "Qualitative Easing: How it Works and Why it Matters"
Presenter: Roger Farmer, UCLA

2:30-2:45 P.M. Coffee Break

Session IV

2:45-3:45 P.M. "Costly Labor Adjustment: Effects of China's Employment Regulations" Presenter: Russ Cooper, European University Institute Coauthors: Guan Gong, Shanghai University and Ping Yan, Peking University

3:45 P.M. Adjourn

Monday, October 08, 2012

'Trimmed-Mean Inflation Statistics'

Preliminary evidence from Brent Meyer and Guhan Venkatu of the Cleveland Fed shows that the median CPI is a robust measure of underlying inflation trends:

Trimmed-Mean Inflation Statistics: Just Hit the One in the Middle Brent Meyer and Guhan Venkatu: This paper reinvestigates the performance of trimmed-mean inflation measures some 20 years since their inception, asking whether there is a particular trimmed-mean measure that dominates the median CPI. Unlike previous research, we evaluate the performance of symmetric and asymmetric trimmed-means using a well-known equality of prediction test. We find that there is a large swath of trimmed-means that have statistically indistinguishable performance. Also, while the swath of statistically similar trims changes slightly over different sample periods, it always includes the median CPI—an extreme trim that holds conceptual and computational advantages. We conclude with a simple forecasting exercise that highlights the advantage of the median CPI relative to other standard inflation measures.

In the introduction, they add:

In general, we find aggressive trimming (close to the median) that is not too asymmetric appears to deliver the best forecasts over the time periods we examine. However, these “optimal” trims vary slightly across periods and are never statistically superior to the median CPI. Given that the median CPI is conceptually easy for the public to understand and is easier to reproduce, we conclude that it is arguably a more useful measure of underlying inflation for forecasters and policymakers alike.

And they conclude the paper with:

While we originally set out to find a single superior trimmed-mean measure, we could not conclude as such. In fact, it appears that a large swath of candidate trims hold statistically indistinguishable forecasting ability. That said, in general, the best performing trims over a variety of time periods appear to be somewhat aggressive and almost always include symmetric trims. Of this set, the median CPI stands out, not for any superior forecasting performance, but because of its conceptual and computational simplicity—when in doubt, hit the one in the middle.
Interestingly, and contrary to Dolmas (2005) we were unable to find any convincing evidence that would lead us to choose an asymmetric trim. While his results are based on components of the PCE chain-price index, a large part (roughly 75% of the initial release) of the components comprising the PCE price index are directly imported from the CPI. It could be the case that the imputed PCE components are creating the discrepancy. The trimmed-mean PCE series currently produced by the Federal Reserve Bank of Dallas trims 24 percent from the lower tail and 31 percent from the upper tail of the PCE price-change distribution. This particular trim is relatively aggressive and is not overly asymmetric—two features consistent with the best performing trims in our tests.
Finally, even though we failed to best the median CPI in our first set of tests, it remains the case that the median CPI is generally a better forecaster of future inflation over policy-relevant time horizons (i.e. inflation over the next 2-3 years) than the headline and core CPI.

One note. They are not saying that trimmed or median statistics are the best way to measure the cost of living for a household. They are asking what variable has the most predictive power for future (untrimmed, non-core, i.e. headline) inflation ("specifically the annualized percent change in the headline CPI over the next 36 months," though the results for 24 months are similar). That turns out, in general, to be the median CPI.
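For readers who want the mechanics, here is a minimal sketch of how a weighted median and a (possibly asymmetric) weighted trimmed mean are computed from component price changes. This is my own illustration, not the authors' code; the component price changes and weights are made up, and only the 24/31 trim percentages echo the Dallas Fed series mentioned above:

```python
# Illustrative calculation of a weighted median and a weighted trimmed-mean
# inflation statistic from component price changes. Component changes and
# weights are made up; only the 24/31 trim echoes the Dallas Fed series above.

def weighted_median(changes, weights):
    """Price change of the component sitting at the 50th percentile of the
    cumulative expenditure-weight distribution (the 'median CPI' idea)."""
    total = sum(weights)
    cum = 0.0
    for change, w in sorted(zip(changes, weights)):
        cum += w
        if cum >= total / 2.0:
            return change

def trimmed_mean(changes, weights, lower_trim, upper_trim):
    """Weighted trimmed mean: discard `lower_trim` percent of expenditure weight
    from the bottom of the price-change distribution and `upper_trim` percent
    from the top, then take the weighted average of what remains."""
    total = sum(weights)
    lo_cut = total * lower_trim / 100.0
    hi_cut = total * (100.0 - upper_trim) / 100.0
    kept_sum = kept_weight = cum = 0.0
    for change, w in sorted(zip(changes, weights)):
        start, end = cum, cum + w
        cum = end
        keep = max(0.0, min(end, hi_cut) - max(start, lo_cut))  # weight inside the kept band
        kept_sum += change * keep
        kept_weight += keep
    return kept_sum / kept_weight

# Hypothetical one-month component price changes (annualized %) and CPI weights
changes = [12.0, -3.0, 2.1, 1.8, 2.5, 0.4, 6.0, 2.2]
weights = [ 4.0,  7.0, 20.0, 15.0, 25.0, 10.0, 9.0, 10.0]

print(weighted_median(changes, weights))        # median CPI-style statistic
print(trimmed_mean(changes, weights, 16, 16))   # a symmetric 16/16 trim
print(trimmed_mean(changes, weights, 24, 31))   # an asymmetric 24/31 trim
```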

Thursday, October 04, 2012

'Economic Research vs. the Blogosphere'

One more quick one. Acemoglu and Robinson respond to a recent post that appeared here (and posts by others too, their point three responds to my comments):

Economic Research vs. the Blogosphere: Our new working paper, co-authored with Thierry Verdier, received an unexpected amount of attention from the blogosphere — unfortunately, most of it negative. The paper can be found here, and some of the more interesting reactions are here, here and here. A fairly balanced and insightful summary of several of the comments can be found here.
We are surprised and intrigued. This is the first time, to the best of our knowledge, that one of our papers, a theoretical one at that, has become such a hot button issue. Upon reflection, we think this says not so much about the paper but about ideology and lack of understanding by many of what economic research is — or should be — about. So this gives us an opportunity to ruminate on these matters. ...[continue reading]...

Wednesday, October 03, 2012

The Effects of Medicaid Eligibility

This is from the NBER:
Saving Teens: Using a Policy Discontinuity to Estimate the Effects of Medicaid Eligibility, by Bruce D. Meyer, Laura R. Wherry, NBER Working Paper No. 18309, Issued in August 2012: [Open Link to Paper]: This paper uses a policy discontinuity to identify the immediate and long-term effects of public health insurance coverage during childhood. Our identification strategy exploits a unique feature of several early Medicaid expansions that extended eligibility only to children born after September 30, 1983. This feature resulted in a large discontinuity in the lifetime years of Medicaid eligibility of children at this birthdate cutoff. Those with family incomes at or just below the poverty line had close to five more years of eligibility if they were born just after the cutoff than if they were born just before. We use this discontinuity in eligibility to measure the impact of public health insurance on mortality by following cohorts of children born on either side of this cutoff from childhood through early adulthood. We examine changes in rates of mortality by the underlying causes of death, distinguishing between deaths due to internal and external causes. We also examine outcomes separately for black and white children. Our analysis shows that black children were more likely to be affected by the Medicaid expansions and gained twice the amount of eligibility as white children. We find a substantial effect of public eligibility during childhood on the later life mortality of black children at ages 15-18. The estimates indicate a 13-18 percent decrease in the internal mortality rate of black teens born after September 30, 1983. We find no evidence of an improvement in the mortality of white children under the expansions.

I'll let people connect their own dots, if they think it's appropriate, between who is helped and who is not and the current debate over Medicaid funding.

Friday, September 07, 2012

Recent Developments in CEO Compensation

Thursday, August 09, 2012

Monetary Policy and Inequality in the U.S.

I need to read this paper:

Innocent Bystanders? Monetary Policy and Inequality in the U.S., by Olivier Coibion, Yuriy Gorodnichenko, Lorenz Kueng, and John Silvia, NBER Working Paper No. 18170, Issued in June 2012 [open link]: Abstract We study the effects and historical contribution of monetary policy shocks to consumption and income inequality in the United States since 1980. Contractionary monetary policy actions systematically increase inequality in labor earnings, total income, consumption and total expenditures. Furthermore, monetary shocks can account for a significant component of the historical cyclical variation in income and consumption inequality. Using detailed micro-level data on income and consumption, we document the different channels via which monetary policy shocks affect inequality, as well as how these channels depend on the nature of the change in monetary policy.

And, part of the conclusion:

VI Conclusion Recent events have brought both monetary policy and economic inequality to the forefront of policy issues. At odds with the common wisdom of mainstream macroeconomists, a tight link between the two has been suggested by a number of people, ranging widely across the political spectrum from Ron Paul and Austrian economists to Post-Keynesians such as James Galbraith. But while they agree on a causal link running from monetary policy actions to rising inequality in the U.S., the suggested mechanisms vary. Ron Paul and the Austrians emphasize inflationary surprises lowering real wages in the presence of sticky prices and thereby raising profits, leading to a reallocation of income from workers to capitalists. In contrast, post-Keynesians emphasize the disinflationary policies of the Federal Reserve and their disproportionate effects on employment and wages of those at the bottom end of the income distribution.
We shed new light on this question by assessing the effects of monetary policy shocks on consumption and income inequality in the U.S. Contractionary monetary policy shocks appear to have significant long-run effects on inequality, leading to higher levels of income, labor earnings, consumption and total expenditures inequality across households, in direct contrast to the directionality advocated by Ron Paul and Austrian economists. Furthermore, while monetary policy shocks cannot account for the trend increase in income inequality since the early 1980s, they appear to have nonetheless played a significant role in cyclical fluctuations in inequality and some of the longer-run movements around the trends. This is particularly true for consumption inequality, which is likely the most relevant metric from a policy point of view, and expenditure inequality after changes in the target inflation rate. To the extent that distributional considerations may have first-order welfare effects, our results point to a need for models with heterogeneity across households which are suitable for monetary policy analysis. While heterogeneous agent models with incomplete insurance markets have become increasingly common in the macroeconomics literature, little effort has, to the best of our knowledge, yet been devoted to considering their implications for monetary policy. In light of the empirical evidence pointing to non-trivial effects of monetary policy on economic inequality, this seems like an avenue worth developing further in future research. ...
Finally, the sensitivity of inequality measures to monetary policy actions points to even larger costs of the zero-bound on interest rates than is commonly identified in representative agent models. Nominal interest rates hitting the zero-bound in times when the central bank’s systematic response to economic conditions calls for negative rates is conceptually similar to the economy being subject to a prolonged period of contractionary monetary policy shocks. Given that such shocks appear to increase income and consumption inequality, our results suggest that standard representative agent models may significantly understate the welfare costs of zero-bound episodes.

Saturday, July 14, 2012

It was Mostly the Fall in Demand

Watching Amir Sufi give this paper arguing that a fall in aggregate demand rather than uncertainty, structural change, and so forth is the major reason for the fall in employment (with the implication that replacing the lost demand can help the recovery):

What Explains High Unemployment? The Aggregate Demand Channel, by Atif Mian, University of California, Berkeley and NBER, and Amir Sufi, University of Chicago Booth School of Business and NBER, November 2011: Abstract A drop in aggregate demand driven by shocks to household balance sheets is responsible for a large fraction of the decline in U.S. employment from 2007 to 2009. The aggregate demand channel for unemployment predicts that employment losses in the non-tradable sector are higher in high leverage U.S. counties that were most severely impacted by the balance sheet shock, while losses in the tradable sector are distributed uniformly across all counties. We find exactly this pattern from 2007 to 2009. Alternative hypotheses for job losses based on uncertainty shocks or structural unemployment related to construction do not explain our results. Using the relation between non-tradable sector job losses and demand shocks and assuming Cobb-Douglas preferences over tradable and non-tradable goods, we quantify the effect of the aggregate demand channel on total employment. Our estimates suggest that the decline in aggregate demand driven by household balance sheet shocks accounts for almost 4 million of the lost jobs from 2007 to 2009, or 65% of the lost jobs in our data.

And, from the conclusion:

Alternative hypotheses such as business uncertainty and structural adjustment of the labor force related to construction are less consistent with the facts. The argument that businesses are holding back hiring because of regulatory or financial uncertainty is difficult to reconcile with the strong cross-sectional relation between household leverage levels, consumption, and employment in the non-tradable sector. This argument is also difficult to reconcile with survey evidence from small businesses and economists saying that lack of product demand has been the primary worry for businesses throughout the recession (Dennis (2010), Izzo (2011)).
There is certainly validity to the structural adjustment argument given large employment losses associated with the construction sector. However, we show that the leverage ratio of a county is a far more powerful predictor of total employment losses than either the growth in construction employment during the housing boom or the construction share of the labor force as of 2007. Further, using variation across the country in housing supply elasticity, we show that the aggregate demand hypothesis is distinct from the construction collapse view. Finally, structural adjustment theories based on construction do not explain why employment has declined sharply in industries producing tradable goods even in areas that experienced no housing boom.
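As a rough illustration of the cross-sectional comparison the authors describe -- non-tradable employment losses lining up with county leverage while tradable losses do not -- here is a minimal sketch with hypothetical county data (the leverage and employment-growth numbers are invented, not from the paper):

```python
# Sketch of the cross-sectional comparison described above: county employment
# growth in non-tradable vs. tradable industries against household leverage.
# All county numbers are invented for illustration; the authors use county-level
# debt-to-income and 2007-09 employment changes.

def ols_slope(x, y):
    """Slope from a one-regressor OLS regression of y on x."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    var = sum((xi - mx) ** 2 for xi in x)
    return cov / var

leverage           = [1.2, 1.8, 2.5, 3.1, 3.8]       # hypothetical county debt-to-income
nontradable_growth = [-1.0, -2.2, -3.5, -4.8, -6.1]  # falls more where leverage is higher
tradable_growth    = [-4.9, -5.2, -4.8, -5.1, -5.0]  # roughly uniform across counties

print(ols_slope(leverage, nontradable_growth))  # clearly negative: the demand channel
print(ols_slope(leverage, tradable_growth))     # near zero: no cross-county pattern
```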

Have Blog, Will Travel

I am here today.

Wednesday, June 13, 2012

Does Inequality Lead to a Financial Crisis?

Via an email, more on inequality and crises:

Does Inequality Lead to a Financial Crisis?, by Michael D. Bordo and Christopher M. Meissner: Abstract: The recent global crisis has sparked interest in the relationship between income inequality, credit booms, and financial crises. Rajan (2010) and Kumhof and Rancière (2011) propose that rising inequality led to a credit boom and eventually to a financial crisis in the US in the first decade of the 21st century as it did in the 1920s. Data from 14 advanced countries between 1920 and 2000 suggest these are not general relationships. Credit booms heighten the probability of a banking crisis, but we find no evidence that a rise in top income shares leads to credit booms. Instead, low interest rates and economic expansions are the only two robust determinants of credit booms in our data set. Anecdotal evidence from US experience in the 1920s and in the years up to 2007 and from other countries does not support the inequality, credit, crisis nexus. Rather, it points back to a familiar boom-bust pattern of declines in interest rates, strong growth, rising credit, asset price booms and crises.

Monday, May 28, 2012

Liquidity Traps and Expectation Dynamics: Fiscal Stimulus or Fiscal Austerity?

A new paper from a colleague (along with coauthors, Jess Benhabib and Seppo Honkapohja):

Liquidity Traps and Expectation Dynamics: Fiscal Stimulus or Fiscal Austerity?, by Jess Benhabib, George W. Evans, and Seppo Honkapohja, NBER Working Paper No. 18114, Issued in May 2012: We examine global dynamics under infinite-horizon learning in New Keynesian models where the interest-rate rule is subject to the zero lower bound. As in Evans, Guse and Honkapohja (2008), the intended steady state is locally but not globally stable. Unstable deflationary paths emerge after large pessimistic shocks to expectations. For large expectation shocks that push interest rates to the zero bound, a temporary fiscal stimulus or a policy of fiscal austerity, appropriately tailored in magnitude and duration, will insulate the economy from deflation traps. However "fiscal switching rules" that automatically kick in without discretionary fine tuning can be equally effective.

However, for austerity to work "requires the fiscal austerity period to be sufficiently long, and the degree of initial pessimism in expectations to be relatively mild." That is, the policy must be left in place for a considerable period of time, and if there is expected deflation or an expected decline in output of sufficient magnitude, austerity is unlikely to be effective. The conditions for fiscal stimulus to work are not as stringent, so it is more likely to be effective, but even so "One disadvantage of fiscal stimulus and fiscal austerity policies is that both their magnitude and duration have to be tailored to the initial expectations, so they require swift and precise discretionary action."

Because of this, they suggest fiscal switching as the best policy. Under this policy the government keeps government spending (and taxes) constant so long as expected inflation exceeds a predetermined lower bound. But if expected inflation falls below the threshold, then government spending is increased enough to achieve an output level where actual inflation exceeds expected inflation. Thus, a rule that credibly promises strong fiscal action if expectations become pessimistic can avoid the bad equilibrium in these models. As they note:

Two further points should be noted about this form of fiscal policy. First, it is not necessary to decide in advance the magnitude and duration of the fiscal stimulus. Second, in contrast to the preceding section we now do not assume that agents know the future path of government spending.

In summary, our analysis suggests that one policy that might be used to combat stagnation and deflation, in the face of pessimistic expectations, would consist of a fiscal switching rule combined with a Taylor-type rule for monetary policy. The fiscal switching rule applies when inflation expectations fall below a critical value. The rule specifies increased government spending to raise inflation above inflation expectations in order to ensure that inflation is gradually increased until expected inflation exceeds the critical threshold. This part of the policy eliminates the unintended steady state and makes sure that the economy does not get stuck in a regime of deflation and stagnation. Furthermore, unlike the temporary fiscal policies discussed in the previous section, the switching rules do not require fine tuning and are triggered automatically. Remarkably, our simulations indicate that this combination of policies is successful regardless of whether the households are Ricardian or non-Ricardian.
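A minimal sketch of the switching rule described above may help fix ideas. This is my own stylized version, not the authors' model: the inflation threshold, the spending step, and the mapping from spending to inflation are all placeholders.

```python
# Stylized fiscal switching rule: hold spending at its normal level while
# expected inflation is above a threshold; if expectations turn too pessimistic,
# raise spending until actual inflation exceeds expected inflation.
# All parameter values and the spending-to-inflation mapping are placeholders.

def fiscal_switching_rule(expected_inflation, baseline_spending,
                          inflation_given_spending,
                          threshold=0.01, step=0.01, max_boost=0.25):
    if expected_inflation >= threshold:
        return baseline_spending            # normal times: spending held constant
    spending = baseline_spending
    # Pessimistic expectations: increase spending until actual inflation
    # rises above expected inflation (capped to guarantee termination).
    while (inflation_given_spending(spending) <= expected_inflation
           and spending < baseline_spending * (1 + max_boost)):
        spending += step * baseline_spending
    return spending

# Toy mapping from government spending to inflation (purely illustrative)
toy_inflation = lambda g: -0.05 + 0.4 * (g - 1.0)

print(fiscal_switching_rule(0.02, 1.0, toy_inflation))   # expectations fine: spending stays at 1.0
print(fiscal_switching_rule(-0.01, 1.0, toy_inflation))  # deflationary expectations: spending rises
```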

[open link to paper]

Thursday, March 29, 2012

"Macroeconomics with Heterogeneity: A Practical Guide"

This is a bit on the wonkish side, but since I've talked a lot about the difficulties that heterogeneous agents pose in macroeconomics, particularly for aggregation, I thought I should note this review of models with heterogeneous agents:

Macroeconomics with Heterogeneity: A Practical Guide, by Fatih Guvenen, Economic Quarterly, FRB Richmond: This article reviews macroeconomic models with heterogeneous households. A key question for the relevance of these models concerns the degree to which markets are complete. This is because the existence of complete markets imposes restrictions on (i) how much heterogeneity matters for aggregate phenomena and (ii) the types of cross-sectional distributions that can be obtained. The degree of market incompleteness, in turn, depends on two factors: (i) the richness of insurance opportunities provided by the economic environment and (ii) the nature and magnitude of idiosyncratic risks to be insured. First, I review a broad collection of empirical evidence—from econometric tests of "full insurance," to quantitative and empirical analyses of the permanent income ("self-insurance") model that examine how it fits the facts about life-cycle allocations, to studies that try to directly measure where economies place between these two benchmarks ("partial insurance"). The empirical evidence I survey reveals significant uncertainty in the profession regarding the magnitudes of idiosyncratic risks, as well as whether or not these risks have increased since the 1970s. An important difficulty stems from the fact that inequality often arises from a mixture of idiosyncratic risk and fixed (or predictable) heterogeneity, making the two challenging to disentangle. Second, I discuss applications of incomplete markets models to trends in wealth, consumption, and earnings inequality both over the life cycle and over time, where this challenge is evident. Third, I discuss "approximate" aggregation—the finding that some incomplete markets models generate aggregate implications very similar to representative-agent models. What approximate aggregation does and does not imply is illustrated through several examples. Finally, I discuss some computational issues relevant for solving and calibrating such models and I provide a simple yet fully parallelizable global optimization algorithm that can be used to calibrate heterogeneous agent models. View Full Article.

Thursday, March 22, 2012

"The Macroeconomic Effects of FOMC Forward Guidance"

I've been trying to figure out whether the Fed's declaration that it would maintain exceptionally low rates through late 2014 represents a conditional or unconditional statement. That is, if the economy improves faster than expected, will the Fed raise rates prior to that time? Or will it honor this as a firm commitment that is independent of the actual evolution of the economy?

The statement clearly leaves wiggle room -- if the Fed wants out of the commitment the language is there. But I have the impression that the public views it as a firm, unconditional commitment and if the Fed backs away it will be seen as breaking a promise (i.e. lose credibility).

Apparently, I'm not the only one who is unsure about this. This is from Jeffrey R. Campbell, Charles L. Evans, Jonas D.M. Fisher, and Alejandro Justiniano (Charles Evans is the president of the Chicago Fed). They look at the effectiveness and viability of the two types of forward guidance, and conclude that a firm commitment with an escape clause specified as a specific rule (e.g. won't raise rates until unemployment falls below 7% or inflation expectations rise above 3%) can work well:

Macroeconomic Effects of FOMC Forward Guidance, by Jeffrey R. Campbell, Charles L. Evans, Jonas D.M. Fisher, and Alejandro Justiniano, March 14, 2012, Conference Draft: 1 Introduction Since the onset of the financial crisis, Great Recession and modest recovery, the Federal Reserve has employed new language and tools to communicate the likely nature of future monetary policy accommodation. The most prominent developments have manifested themselves in the formal statement that follows each meeting of the Federal Open Market Committee (FOMC). In December 2008 it said "the Committee anticipates that weak economic conditions are likely to warrant exceptionally low levels of the federal funds rate for some time." In March 2009, when the first round of large scale purchases of Treasury securities was announced, "extended period" replaced "some time." In the face of a modest recovery, the August 2011 FOMC statement gave specificity to "extended period" by anticipating exceptionally low rates "at least as long as mid-2013." The January 2012 FOMC statement lengthened the anticipated period of exceptionally low rates even further to "late 2014." These communications are referred to as forward guidance.
The nature of this most recent forward guidance is the subject of substantial debate. Is "late 2014" an unconditional promise to keep the funds rate at the zero lower bound (ZLB) beyond the time policy would normally involve raising the federal funds rate? ... Alternatively, is "late 2014" simply conditional guidance based upon the sluggish economic activity and low inflation expected through this period? ...
Our paper sheds light on these issues and the potential role of forward guidance in the current policy environment. Motivated by the competing interpretations of "late 2014," we distinguish between two kinds of forward guidance. Odyssean forward guidance changes private expectations by publicly committing the FOMC to future deviations from its underlying policy rule. Circumstances will tempt the FOMC to renege on these promises precisely because the policy rule describes its preferred behavior. Hence this kind of forward guidance resembles Odysseus commanding his sailors to tie him to the ship's mast so that he can enjoy the Sirens' music.
All other forward guidance is Delphic in the sense that it merely forecasts the future. Delphic forward guidance encompasses statements that describe only the economic outlook and typical monetary policy stance. Such forward guidance about the economic outlook influences expectations of future policy rates only by changing market participants views about likely outcomes of variables that enter the FOMC's policy rule. ...
The monetary policies elucidated by Krugman (1999), Eggertsson and Woodford (2003) and Werning (2012) rely on Odyssean forward guidance, and these have inspired several policy proposals for providing more accommodation at the ZLB. The more aggressive policy alternatives proposed include Evans's (2012) state-contingent price-level targeting, nominal income-targeting as advocated by Romer (2011), and conditional economic thresholds for exiting the ZLB proposed by Evans (2011). These proposals' benefits depend on the effectiveness of FOMC communications in influencing expectations. Fortunately, there exists historical precedent with which we can assess whether FOMC forward guidance has actually had an impact. The FOMC has been using forward guidance implicitly through speeches or explicitly through formal FOMC statements since at least the mid-1990s. Language of one form or another describing the expected future stance of policy has been a fixture of FOMC statement language since May 1999. The first part of this paper uses data from this period as well as from the crisis period to answer two key questions. Do markets listen? When they do listen, do they hear the oracle of Delphi forecasting the future or Odysseus binding himself to the mast?
Our examination of whether markets are listening to forward guidance builds on prior work... We find results that are similar to, if not even stronger than, those of Gurkaynak et al. (2005). That is, we confirm that during and after the crisis, FOMC statements have had significant effects on long term Treasuries and also corporate bonds and that these effects appear to be driven by forward guidance.
Studying federal funds futures rates during the day FOMC statements are released identifies forward guidance, but does not disentangle its Odyssean and Delphic components. ... To answer our second key question, we develop a framework for measuring forward guidance based on a traditional interest rate rule that identifies only Odyssean forward guidance. ... We highlight here two results. First, the FOMC telegraphs most of its deviations from the interest rate rule at least one quarter in advance. Second, the Odyssean forward guidance successfully signaled that monetary accommodation would be provided much more quickly than usual and taken back more quickly during the 2001 recession and its aftermath. Overall, our empirical work provides evidence that the public has at least some experience with Odyssean forward guidance, so the monetary policies that rely upon it should not appear entirely novel.
The second part of the present paper investigates the consequences of the Odyssean forward guidance put in place with the "late 2014" statement language. On the one hand this language resembles the policy recommendations of Eggertsson and Woodford (2003) and could be the right policy for an economy struggling to emerge from a liquidity trap. On the other hand there are legitimate concerns that this forward guidance places the FOMC's mandated price stability goal at risk. We consider the plausibility of these clashing views by forecasting the path of the economy with the present forward guidance and subjecting that forecast to two upside risks: higher inflation expectations and faster deleveraging. ...
Evans (2011) has proposed conditioning the FOMC's forward guidance on outcomes of unemployment and inflation expectations. His proposal involves the FOMC announcing specific conditions under which it will begin lifting its policy rate above zero: either unemployment falling below 7 percent or expected inflation over the medium term rising above 3 percent. We refer to this as the 7/3 threshold rule. It is designed to maintain low rates even as the economy begins expanding on its own (as prescribed by Eggertsson and Woodford (2003)), while providing safeguards against unexpected developments that may put the FOMC's price stability mandate in jeopardy. Our policy analysis suggests that such conditioning, if credible, could be helpful in limiting the inflationary consequences of a surge in aggregate demand arising from an early end to the post-crisis deleveraging.
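Read literally, the 7/3 threshold rule is a simple conditional statement. The sketch below is my own rendering, not the paper's estimated policy rule; the post-liftoff Taylor-type coefficients are generic placeholders:

```python
# Sketch of the 7/3 threshold rule described above: stay at the zero lower bound
# until unemployment falls below 7 percent or medium-term inflation expectations
# rise above 3 percent. The post-liftoff Taylor-type rule and its coefficients
# are generic placeholders, not the paper's estimated policy rule.

def policy_rate(unemployment, expected_inflation,
                natural_rate=2.0, inflation_target=2.0,
                phi_pi=1.5, phi_u=0.5, nairu=5.5):
    if unemployment >= 7.0 and expected_inflation <= 3.0:
        return 0.0                                   # neither threshold crossed: stay at zero
    rate = (natural_rate + expected_inflation
            + phi_pi * (expected_inflation - inflation_target)
            - phi_u * (unemployment - nairu))
    return max(rate, 0.0)                            # rates cannot go below zero

print(policy_rate(8.2, 2.0))   # high unemployment, contained expectations -> 0.0
print(policy_rate(6.8, 2.0))   # unemployment below 7 percent -> liftoff
print(policy_rate(8.2, 3.4))   # expectations above 3 percent -> liftoff
```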

Friday, March 16, 2012

FRBSF: Structural and Cyclical Elements in Macroeconomics

I am here today:

Structural and Cyclical Elements in Macroeconomics
Federal Reserve Bank of San Francisco
Janet Yellen Conference Center, First Floor
March 16, 2012

AGENDA

Morning Session Chair: John Fernald, Federal Reserve Bank of San Francisco
8:10 A.M. Continental Breakfast
8:50 A.M. Welcoming Remarks: John Williams, Federal Reserve Bank of San Francisco
9:00 A.M. Jinzu Chen, International Monetary Fund, Prakash Kannan, International Monetary Fund, Prakash Loungani, International Monetary Fund   Bharat Trehan, Federal Reserve Bank of San Francisco New Evidence on Cyclical and Structural Sources of Unemployment (PDF - 462KB)  
Discussants: Steven Davis, University of Chicago Booth School of Business, Valerie Ramey, University of California, San Diego
10:20 A.M. Break
10:40 A.M. Robert Hall, Stanford University   Quantifying the Forces Leading to the Collapse of GDP after the Financial Crisis (PDF - 826KB)   Discussants: Antonella Trigari, Università Bocconi, Roger Farmer, University of California, Los Angeles
12:00 P.M. Lunch – Market Street Dining Room, Fourth Floor  
Afternoon Session Chair: Eric Swanson, Federal Reserve Bank of San Francisco
1:15 P.M. Charles Fleischman, Federal Reserve Board, John Roberts, Federal Reserve Board, From Many Series, One Cycle: Improved Estimates of the Business Cycle from a Multivariate Unobserved Components Model (PDF - 302KB)  
Discussants: Carlos Carvalho, Pontificia Universidade Católica, Rio de Janeiro, Ricardo Reis, Columbia University
2:35 P.M. Break
2:50 P.M. Christopher Carroll, Johns Hopkins University, Jiri Slacalek, European Central Bank, Martin Sommer, International Monetary Fund   Dissecting Saving Dynamics: Measuring Credit, Wealth, and Precautionary Effects (PDF - 1.18MB)
Discussants: Karen Dynan, Brookings Institution, Gauti Eggertsson, Federal Reserve Bank of New York
4:10 P.M. Break
4:25 P.M. Andreas Fuster, Harvard University, Benjamin Hebert, Harvard University, David Laibson, Harvard University Natural Expectations, Macroeconomic Dynamics, and Asset Pricing (PDF - 663KB)
Discussants: Yuriy Gorodnichenko, University of California, Berkeley, Stefan Nagel, Stanford Graduate School of Business
5:45 P.M. Reception – West Market Street Lounge, Fourth Floor
6:30 P.M. Dinner – Market Street Dining Room, Fourth Floor, Introduction: John Williams, Federal Reserve Bank of San Francisco, Speaker: Peter Diamond, Massachusetts Institute of Technology Unemployment and Debt

Sunday, February 05, 2012

"Prospects for Nuclear Power"

Via the NBER:

Prospects for Nuclear Power, by Lucas W. Davis, NBER Working Paper No. 17674, December 2011: The prospects for a revival of nuclear power were dim even before the partial reactor meltdowns at the Fukushima nuclear plant. Nuclear power has long been controversial because of concerns about nuclear accidents, proliferation risk, and the storage of spent fuel. These concerns are real and important. In addition, however, a key challenge for nuclear power has been the high cost of construction for nuclear plants. Construction costs are high enough that it becomes difficult to make an economic argument for nuclear, even before incorporating these external costs. This is particularly true in countries like the United States where recent technological advances have dramatically increased the availability of natural gas.

[open link to paper.]

Friday, February 03, 2012

NBER EF&G Research Meeting

I am here today:

NBER EF&G Research Meeting
Nir Jaimovich and Guido Lorenzoni, Organizers
February 3, 2012
Federal Reserve Bank of San Francisco

PROGRAM

8:30 am Continental Breakfast

9:00 am Gary Gorton, Yale University Guillermo Ordonez, Yale University Collateral Crises Discussant:  Veronica Guerrieri, University of Chicago

10:00 am Coffee Break

10:30 am Elias Albagli, University of Southern California Christian Hellwig, Toulouse School of Economics Aleh Tsyvinski, Yale University A Theory of Asset Prices based on Heterogeneous Information Discussant:  Laura Veldkamp, New York University

11:30 am Francisco Buera, Federal Reserve Bank of Minneapolis Joseph Kaboski, University of Notre Dame Yongseok Shin, Washington University in St. Louis The Macroeconomics of Microfinance Discussant:  Abhijit Banerjee, MIT

12:30 pm Lunch

1:30 pm Cosmin Ilut, Duke University Martin Schneider, Stanford University Ambiguous Business Cycles Discussant:  Lars Hansen, University of Chicago

2:30 pm Coffee Break

3:00 pm Ulrike Malmendier, University of California at Berkeley Stefan Nagel, Stanford University Learning from Inflation Experiences Discussant:  Monika Piazzesi, Stanford University

4:00 pm Greg Kaplan, University of Pennsylvania Giovanni Violante, New York University A Model of the Consumption Response to Fiscal Stimulus Payments Discussant:  Ricardo Reis, Columbia University

5:00 pm Adjourn

5:15 pm Reception and Dinner

Tuesday, January 31, 2012

"Changing Inequality in U.S. College Entry and Completion"

From the NBER:

Gains and Gaps: Changing Inequality in U.S. College Entry and Completion, by Martha J. Bailey, Susan M. Dynarski, NBER Working Paper No. 17633, December 2011: [open link] We describe changes over time in inequality in postsecondary education using nearly seventy years of data... We find growing gaps between children from high- and low-income families in college entry, persistence, and graduation. Rates of college completion increased by only four percentage points for low-income cohorts born around 1980 relative to cohorts born in the early 1960s, but by 18 percentage points for corresponding cohorts who grew up in high-income families. Among men, inequality in educational attainment has increased slightly since the early 1980s. But among women, inequality in educational attainment has risen sharply, driven by increases in the education of the daughters of high-income parents. Sex differences in educational attainment, which were small or nonexistent thirty years ago, are now substantial, with women outpacing men in every demographic group. The female advantage in educational attainment is largest in the top quartile of the income distribution. These sex differences present a formidable challenge to standard explanations for rising inequality in educational attainment.

There's a more extended summary of the results in the conclusion to the paper.

Thursday, January 05, 2012

Tax Evasion and Investment, Trade and Greenhouse Gases, Football and Grades

Travel day today, so it seems as good a day as any to have a post featuring research by colleagues. First, Bruce Blonigen and Nick Sly (and a coauthor, Lindsay Oldenski):

The Growing International Campaign Against Tax Evasion

The growing international campaign against tax evasion, by Bruce Blonigen, Lindsay Oldenski, and Nicholas Sly, Vox EU: The most recent G20 summit led to a multilateral agreement to facilitate information sharing between tax agencies, with the US currently negotiating bilateral tax treaties with the tax havens of Switzerland and Luxembourg. But before celebrations begin, this column points out that cracking down on tax evasion comes at a cost. International investment may well suffer.
One of the few solid agreements that came out of the latest G20 summit in Cannes was that governments will increase their cooperative efforts to curb tax evasion. The agreement, called the Convention on Mutual Administrative Assistance in Tax Matters, allows national tax agencies to request greater amounts of information from foreign governments on the activity of multinational enterprises and private citizens that are otherwise outside their authority to monitor. Under the new agreement, countries can choose voluntarily to transmit tax information about foreign parties in bulk to their resident country’s tax agency. There are also provisions of the convention that will require nations to assist in the recovery of foreign tax claims if a business or individual is in noncompliance.
Several leaders of G20 nations cited reports from the OECD that recent efforts to reduce tax evasion have resulted in more than $14 billion of additional tax revenue being collected, with hints that there are much greater amounts of offshore tax liabilities yet to be collected. With mounting government debt in most nations, the incentives for them to reduce tax evasion are clear.  But if we take a closer look, this may be just the next step in the ongoing efforts of developed countries to recapture lost revenues by multinational firms.  In particular, most bilateral tax treaties include similar requirements for cooperation in sharing of tax information between the two governments. The signing and renegotiation of tax treaties has proliferated in recent decades and Easson (2000) reports that there are nearly 2,500 treaties in force worldwide. The current US activity on tax treaties is also telling.  The US Senate has pending agreements with Switzerland and Luxembourg, two countries that are typically on lists of tax havens, and in June of this year the US Treasury Department announced a plan to renegotiate its tax treaty with Japan, where provisions for information sharing are relatively weak. Deterring tax evasion has long been a priority for governments in coordinating the international tax system.
Despite the recent attention, information-sharing provisions of tax treaties are typically not the first attributes touted. The stated goal of the OECD and UN model for tax treaties is to limit the incidence of double taxation and promote efficient flows of capital in the world economy through the coordination of tax rules and definitions.  For this reason, prior studies of the effect of tax treaties on the FDI activity of multinational enterprises have expected to find positive impacts.
Yet finding systematic evidence of such positive effects has proven elusive. Di Giovanni (2005) fails to observe any significant impact of tax treaties on cross-border mergers and acquisitions, while Louie and Rousslang (2008) find no evidence that tax treaties affect US firms’ required rates of return from their foreign affiliates. Likewise, Blonigen and Davies (2004, 2005) do not find any discernible effect of tax treaties on US and OECD FDI activity. There is some evidence provided by Davies et al (2009) that the number of firms entering a foreign country grows once a new treaty is signed. But many more studies support the conclusion that foreign investment flows do not appear to take advantage of the double-taxation relief afforded by tax treaties.
The lack of evidence that tax treaties impact foreign investment flows between treaty partners is surprising because there is a clear relationship between foreign capital flows and tax rates. Papke (2000) finds that the elasticity of reported foreign earnings to differences in withholding tax rates is near -1, indicating a tight relationship between the location of reported income and the tax liabilities across countries. Furthermore, Hines and Rice (1994) find that real aspects of multinational firm operations, such as the location of production, employment, and equipment purchases, also respond to differences in tax rates across countries.
In recent work (Blonigen et al 2011) we provide an answer to the puzzling insignificant effects of tax treaties and find it is rooted in the tax-sharing provisions of tax treaties, which are intended to reduce tax evasion and, thus, can have a negative influence on FDI activity. Our premise is that firms in industries which use relatively homogeneous inputs will be most affected by the information-sharing provisions of tax treaties, since arms-length prices for intermediate goods are easily verified in these industries once tax authorities share information and can verify activity across MNEs’ affiliates. In contrast, firms that use fairly differentiated and specialized inputs will retain a much greater ability to mitigate their tax liabilities across countries by engaging in strategic transfer pricing.
We look across more than two decades of investment activity by US multinational firms, spanning 73 different industries and more than 150 countries. During the time span of our sample, 1987–2007, the US signed several new tax treaties, and renegotiated agreements to increase the degree of information sharing with several existing treaty partners.
The evidence strongly supports that provisions for tax-information sharing, such as those included in recent G20 convention, can alter the pattern of international investment activities. For a firm with average use of homogeneous inputs, increased cooperation between national tax agencies when a tax treaty is put into place is associated with a gross reduction in the firm’s foreign affiliate sales by $26 million per year.  Looking at the US economy as a whole, this equates to an estimated reduction of outbound investment activity of $2.29 billion annually. We also find that tax-sharing provisions of tax treaties lead to gross reductions in the number of firms that choose to invest in countries for the average industry as well. 
We do estimate a positive impact of the other features of the tax-treaty agreements, which counterbalances the negative effects from the information-sharing provisions. For the average firm in our sample the average net impact of tax treaties on FDI activity is positive.  Yet, it is clear from our analysis that the negative effects of the information-sharing provisions of tax treaties are large and a main reason why prior studies have puzzlingly found little evidence for any effect of tax treaties on FDI.
Final thoughts
International policy can obviously pursue many different goals. Our research suggests that governments may be pursuing tax treaties just as much to reduce tax evasion as to promote more efficient international capital allocation. The recent economic troubles seem to have focused governments even more on the short-run goal of capturing tax revenues, apparent from the G20’s recent signing of the multilateral Convention on Mutual Administrative Assistance in Tax Matters. However, agreements that allow for greater information sharing between governments deter multinational enterprises from engaging in foreign investment in the first place. It seems that the longer-run policy goal of facilitating international investment has taken a back seat in recent accords.

This is research by Jason Lindo, Glen Waddell and their student Isaac Swensen:

Are Big-Time Sports a Threat to Student Achievement?

Guys' Grades Suffer When College Football Teams Win, by Rebecca Greenfield, The Atlantic: ...college males' grades tend to go down when their university's football team wins games, new research finds. ... More victories means more celebrating which means less studying. ...
Looking at University of Oregon student transcripts over 8 years and football wins over that same period, researchers Jason M. Lindo, Isaac D. Swensen, Glen R. Waddell calculated that a 25 percent increase in the football team's winning percentage leads males to earn GPAs as if their SAT scores were 27 points lower. ...
In addition to looking at grades, the researchers also collected surveys, asking students if football success decreases study time. "24 percent of males report that athletic success either 'Definitely' or 'Probably' decreases their study time, compared to only 9 percent of females," ... leading them to attribute the grade drop to partying. ...

Finally, research by Anca Cristea (along with two coauthors):

Trade and Greenhouse-Gas Emissions

Trade and greenhouse-gas emissions: How important is international transport?, by Anca Cristea, David Hummels, and Laura Puzzello, Vox EU: It is well known that international trade leads to greenhouse-gas emissions but policymakers often focus their attention on the production of goods and not their shipment. This column presents findings based on a unique database that allows researchers to calculate emissions for every dollar of world trade. It suggests that international transport emissions warrant serious attention in current climate-change negotiations.
As the first commitment period of the Kyoto Protocol comes to an end in 2012, member countries of the UN’s Framework Convention on Climate Change (UNFCCC) are meeting in Durban, South Africa, to decide on future actions to curb worldwide greenhouse-gas emissions.
International transport is absent from existing agreements on climate change, and negotiations to include this sector in carbon balances are progressing slowly. Differences in the willingness to regulate the greenhouse-gas emissions from international transport became apparent just a few weeks ago, when air carriers and officials around the world reacted strongly against the EU’s decision to include the aviation sector in its emission-trading scheme (Krukowska 2011).
One of the main difficulties in regulating emissions from international transport is the paucity of data on their magnitude and incidence. The little we know about these emissions comes from the ‘life-cycle analysis’ of very specific products such as Kenyan cut-flower exports. Unfortunately it is difficult to extrapolate from these highly detailed case studies to a systematic evaluation of transport emissions in trade.
In a recent paper (Cristea et al 2011) we provide such an evaluation. The key to our analysis was building a database on how goods move; for every product and country pair we track the share of trade that goes by air, ocean, rail, or truck. This allows us to calculate the transportation services (kg-km of cargo moved), and associated GHG emissions, for every dollar of trade worldwide. Combined with data on GHG emissions from production we can calculate total emissions embodied in exports.
International transport is a significant share of trade-related emissions
In our baseline year of 2004, international freight transport generated 1,205 million tonnes of CO2-equivalent emissions, or 146 grammes of CO2 per dollar of trade. By comparison, production of those traded goods generated 300 grammes per dollar of trade, meaning that international transport is responsible for one third of trade-related emissions.
The aggregate numbers understate the importance of transport for many products. Figure 1 shows the share of transport in trade-related emissions, and it varies significantly over industries. At the low end are bulk products (agriculture, mining), and at the high end are manufactured goods. For important categories such as transport equipment, electronics, and machinery, transport is responsible for over 75% of trade-related emissions. Relatively rapid growth in these industries means that transport emissions will loom ever larger in trade.
Once we include transport, clean producers look dirty
Table 1 provides calculations of output and transport emissions per dollar of trade and shows large differences between regions in emission intensities. Differences in output emissions are driven largely by the commodity composition of trade, with manufacturing-oriented exporters at the low end. Less known and perhaps more surprising are the large differences in transport emissions. The transportation of US exports is nearly eight times more emissions-intensive than the transportation of Chinese exports, and six times more emissions-intensive than that of European exports.

Table 1. Output and transport emission shares and intensities, by region and country

[Table 1 image omitted]

Note: Total emissions per dollar are calculated as the sum of transport and output emission intensities. *For comparability with transport emissions, output emissions are constructed as a weighted average of sector level output emissions, using trade rather than output weights.

Accounting for transport significantly changes our perspective on which regions have “dirty”, or emissions-intensive, trade. India’s production of traded goods generates 143% more emissions per dollar of trade than US production does, but after incorporating transportation, India’s exports are less emissions-intensive in total.
We also see a strong imbalance in transport emission intensities between imports and exports. This is a critical issue for mechanism design when regulating emissions. Do international transport emissions ‘belong’ to the exporter, or to the importer? Given the imbalance shown here, the US would presumably prefer an import-based allocation while East Asian countries would prefer the opposite.
The value of trade is a poor indicator of associated transport emissions
To understand the differences across products and regions shown above, we must recognise that transport emissions depend on the scale and composition of trade. Intuitively, as countries trade more they employ more transportation services and emit more GHG. However, the partner and product composition of trade critically affect the type and quantity of transportation services (kg-km of cargo) employed. When France imports from Japan rather than Germany, a dollar of trade must travel much longer distances. A dollar of steel weighs vastly more than a dollar of microchips, requiring more fuel (and emissions) to lift. And the choice to use aviation rather than maritime transport involves as much as a factor-of-100 increase in emissions to move the same cargo. This last fact, along with the unusually large reliance on air cargo in US exports, explains why US exports are so emissions-intensive.
Trade can reduce emissions, in some cases
If two countries have similar emissions from output, then increasing trade (ie shifting from domestic production to imports) will require more international transport and higher emissions. However, if a country with high output emissions reduces production in order to import from a low-emissions country, the savings in output emissions could be enough to offset the higher transport emissions from trade. Which of these cases is most likely? We find that trade flows representing 31% of world trade by value are actually net emission reducers. This happens most commonly in the industries on the left side of Figure 1 – where output emissions are both a large fraction of trade-related emissions and very different across producers. It is much less common in manufactured goods, where transport emissions dominate.

Figure 1. The contribution of transport to total trade-related emissions

[Figure 1 image omitted]

Eliminating tariff preferences will shift trade toward aviation and maritime transport
With a better understanding of the emissions associated with both output and trade we can examine how changes in trade patterns will affect trade-related emissions over time. In a final exercise we simulated likely trade growth from 2004–20 resulting from tariff liberalisation and GDP growth using a dynamic version of the GTAP model.
The trend toward preferential trade liberalisation in regional trading blocs such as the EU and NAFTA means that tariffs are lower for more proximate trading partners and especially for land-adjacent partners. Transport by rail and truck dominates these trade flows. Tariff liberalisation that removes current preferences in favour of a uniform MFN structure will shift trade toward more distant partners (higher kg-km per dollar of trade) and increase the use of aviation and maritime transport. This wouldn’t necessarily raise total emissions (maritime has lower emissions than rail and trucking; aviation much higher), but it does mean that a rising share of trade will be outside current monitoring efforts. Getting aviation and maritime transport emissions into the system becomes critical.
Growth in the developing world will cause international transport emissions to skyrocket
We forecast that the changes due to tariff liberalisation will be relatively modest, but likely GDP growth will yield profound changes in output, trade, and GHG emissions. Our projections have the value of output and trade rising at similar rates, accumulating to 75-80% growth by 2020. International transport services will grow twice as fast, accumulating to 173% growth. Why? Simply put, the fastest-growing countries (China, India) are located far from other large markets, and their trade requires greater transportation services.
Some propose that international aviation and maritime transport should be treated as separate entities, essentially countries unto themselves, for purposes of allocating and capping emissions. If this approach is employed, as opposed to including international transport in national allocations or simply taxing the GHG emissions from fuel use, it is difficult to see how future trade growth can be accommodated.
Summary and implications
International transport emissions are a surprisingly large fraction of trade-related emissions that will grow relatively fast as world output increases and trade shifts toward more distant partners. Policymakers must carefully consider how to include international transport emissions in protocols designed to slow emissions growth. Our emission calculations – based on the most accurate trade and transportation data available to date – provide some necessary tools to advance the policy debate.

Wednesday, December 14, 2011

"The Impact of Immigration on Native Poverty"

Immigration is not the cause of poverty:

The Impact of Immigration on Native Poverty through Labor Market Competition, by Giovanni Peri, NBER Working Paper No. 17570, November 2011: In this paper I first analyze the wage effects of immigrants on native workers in the US economy and its top immigrant-receiving states and metropolitan areas. Then I quantify the consequences of these wage effects on the poverty rates of native families. The goal is to establish whether the labor market effects of immigrants have significantly affected the percentage of "poor" families among U.S.-born individuals. I consider the decade 2000-2009 during which poverty rates increased significantly in the U.S. As a reference, I also analyze the decade 1990-2000. To calculate the wage impact of immigrants I adopt a simple general equilibrium model of productive interactions, regulated by the elasticity of substitution across schooling groups, age groups and between US and foreign-born workers. Considering the inflow of immigrants by age, schooling and location I evaluate their impact in local markets (cities and states) assuming no mobility of natives and on the US market as a whole allowing for native internal mobility. Our findings show that for all plausible parameter values there is essentially no effect of immigration on native poverty at the national level. At the local level, only considering the most extreme estimates and only in some localities, we find non-trivial effects of immigration on poverty. In general, however, even the local effects of immigration bear very little correlation with the observed changes in poverty rates and they explain a negligible fraction of them.

Monday, November 07, 2011

The Public Mission of Economics: Overcoming the Great Disconnect

This is an essay I did for the Social Science Research Council's initiative on Academia and the Public Sphere:

New Forms of Communication and the Public Mission of Economics: Overcoming the Great Disconnect

The papers that are part of the initiative examine how the connections between the public and various social science disciplines have changed over time. [Comments from Henry Farrell at Monkey Cage.]

Monday, October 24, 2011

Unemployment Insurance and Job Search in the Great Recession

New research from Jesse Rothstein shows that, contrary to what you may have heard from those who are trying to blame our economic problems on government programs rather than malfeasance on Wall Street, unemployment insurance is not the cause of the slow recovery of employment:

Unemployment Insurance and Job Search in the Great Recession, by Jesse Rothstein, NBER Working Paper No. 17534 [open link]: Nearly two years after the official end of the "Great Recession," the labor market remains historically weak. One candidate explanation is supply-side effects driven by dramatic expansions of Unemployment Insurance (UI) benefit durations, to as many as 99 weeks. This paper investigates the effect of these UI extensions on job search and reemployment. I use the longitudinal structure of the Current Population Survey to construct unemployment exit hazards that vary across states, over time, and between individuals with differing unemployment durations. I then use these hazards to explore a variety of comparisons intended to distinguish the effects of UI extensions from other determinants of employment outcomes.
The various specifications yield quite similar results. UI extensions had significant but small negative effects on the probability that the eligible unemployed would exit unemployment, concentrated among the long-term unemployed. The estimates imply that UI benefit extensions raised the unemployment rate in early 2011 by only about 0.1-0.5 percentage points, much less than is implied by previous analyses, with at least half of this effect attributable to reduced labor force exit among the unemployed rather than to the changes in reemployment rates that are of greater policy concern.

Friday, October 21, 2011

NBER Economic Fluctuations & Growth Research Meeting

I am here today:

NATIONAL BUREAU OF ECONOMIC RESEARCH, INC.

EF&G Research Meeting

October 21, 2011

Federal Reserve Bank of Chicago
230 South LaSalle Street
Chicago, Illinois

George-Marios Angeletos and Martin Schneider, Organizers

PROGRAM

THURSDAY, OCTOBER 20:

6:30 pm

Reception and Dinner - Federal Reserve Bank of Chicago

FRIDAY, OCTOBER 21:

9:00 am

Aysegul Sahin, Federal Reserve Bank of New York
Joseph Song, Columbia University
Giorgio Topa, Federal Reserve Bank of New York
Gianluca Violante, New York University
Measuring Mismatch in the U.S. Labor Market

Discussant: Robert Shimer, University of Chicago and NBER

10:00 am - Coffee Break

10:30 am

Cristina Arellano, University of Minnesota and NBER
Yan Bai, Federal Reserve Bank of Minneapolis
Patrick Kehoe, Federal Reserve Bank of Minneapolis, Princeton University, University of Minnesota and NBER
Financial Markets and Fluctuations in Uncertainty

Discussant: Andrea Eisfeldt, UCLA

11:30 am

Raghuram Rajan, University of Chicago and NBER
Rodney Ramcharan, Federal Reserve Board
The Anatomy of a Credit Crisis: The Boom and Bust in Farm Land Prices in the United States in the 1920s

Discussant: Sydney Ludvigson, New York University and NBER

12:30 pm - Lunch

1:30 pm

Per Krusell, Stockholm University and NBER
Toshihiko Mukoyama, University of Virginia
Richard Rogerson, Princeton University and NBER
Aysegul Sahin, Federal Reserve Bank of New York
Is Labor Supply Important for Business Cycles?

Discussant: Marcelo Veracierto, Federal Reserve Bank of Chicago

2:30 pm - Coffee Break

3:00 pm

Eric Sims, University of Notre Dame and NBER
Permanent and Transitory Technology Shocks and the Behavior of Hours: A Challenge for DSGE Models

Discussant: Jonas Fisher, Federal Reserve Bank of Chicago

4:00 pm

Allen Head, Queen's University
Lucy Qian Liu, IMF
Guido Menzio, University of Pennsylvania and NBER
Randall Wright, University of Wisconsin, Madison and NBER
Sticky Prices: A New Monetarist Approach

Discussant: John Leahy, New York University and NBER

Monday, October 17, 2011

Chow: Usefulness of Adaptive and Rational Expectations in Economics

Gregory Chow of Princeton on rational versus adaptive expectations:

Usefulness of Adaptive and Rational Expectations in Economics, by Gregory C. Chow: ...1. Evidence and statistical reason for supporting the adaptive expectations hypothesis ... Adaptive expectations and rational expectations are hypotheses concerning the formation of expectations which economists can adopt in the study of economic behavior. Since a substantial portion of the economics profession seems to have rejected the adaptive expectations hypothesis without sufficient reason, I will provide strong econometric evidence and a statistical reason for its usefulness...
2. Insufficient evidence supporting the rational expectations hypothesis when it prevailed The popularity of the rational expectations hypothesis began with the critique of Lucas (1976), which claimed that existing macroeconometric models of the time could not be used to evaluate effects of economic policy because the parameters of these econometric models would change when the government decision rule changed. A government decision rule is a part of the environment facing economic agents. When the rule changes, the environment changes and the behavior of economic agents who respond to the environment changes. Economists may disagree on the empirical relevance of this claim, e.g., by how much the parameters will change and to what extent government policies can be assumed to be decision rules rather than exogenous changes of a policy variable. The latter is illustrated by studies of the effects of monetary shocks on aggregate output and the price level using a VAR. Such qualifications aside, I accept the Lucas proposition for the purpose of the present discussion.
Then came the resolution of the Lucas critique. Assuming the Lucas critique to be valid, economists can build structural econometric models with structural parameters unchanged when a policy rule changes. Such a solution can be achieved by assuming rational expectations, together with some other modeling assumptions. I also accept this solution of the Lucas critique.
In the history of economic thought during the late 1970s, the economics profession (1) accepted the Lucas critique, (2) accepted the solution to the Lucas critique in which rational expectations is used and (3) rejected the adaptive expectations hypothesis, possibly because the solution in (2) required the acceptance of the rational expectations hypothesis. Accepting (1) the Lucas critique and (2) a possible response to the Lucas critique by using rational expectations does not imply (3) that rational expectations is a good empirical economic hypothesis. There was insufficient evidence supporting the hypothesis of rational expectations when it was embraced by the economics profession in the late 1970s. This is not to say that the rational expectations hypothesis is empirically incorrect, as it has been shown to be a good hypothesis in many applications. The point is that the economics profession accepted this hypothesis for general application in the late 1970s without sufficient evidence.
3. Conclusions This paper has presented a statistical reason for the economic behavior as stated in the adaptive expectations hypothesis and strong econometric evidence supporting the adaptive expectations hypothesis. ... Secondly, this paper has pointed out that there was insufficient empirical evidence supporting the rational expectations hypothesis when the economics profession embraced it in the late 1970s. The profession accepted the Lucas (1976) critique and its possible resolution by estimating structural models under the assumption of rational expectations. But this does not justify the acceptance of rational expectations in place of adaptive expectations as better proxies for the psychological expectations that one wishes to model in the study of economic behavior. ...
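For reference, the two hypotheses Chow compares are usually written as follows (standard textbook forms, not necessarily Chow's exact specification). Adaptive expectations updates the forecast of a variable \(x\) by a fixed fraction of the most recent forecast error,

\[ x^{e}_{t} = x^{e}_{t-1} + \lambda\,(x_{t-1} - x^{e}_{t-1}), \qquad 0 < \lambda \le 1, \]

which, iterated backward, makes \(x^{e}_{t}\) a geometrically weighted average of past observations, \(x^{e}_{t} = \lambda \sum_{j \ge 1} (1-\lambda)^{j-1} x_{t-j}\). Rational expectations instead sets \(x^{e}_{t} = E[x_{t} \mid \Omega_{t-1}]\), the mathematical expectation of \(x_{t}\) conditional on the full information set \(\Omega_{t-1}\), which includes the structure of the model itself.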

Tuesday, September 06, 2011

NBER Research Summary: The Role of Household Leverage

This NBER Research Summary by Atif Mian and Amir Sufi echoes many of the arguments I've been making about balance sheet recessions. In addition, the authors argue that the trouble in mortgage markets can be traced to a "securitization-driven shift in the supply of mortgage credit," and that "the expansion in mortgage credit was more likely to be a driver of house price growth than a response to it." They also show that "non-GSE securitization primarily targeted zip codes that had a large share of subprime borrowers. In these zip codes, mortgage denial rates dropped dramatically and debt-to-income ratios skyrocketed":

Finance and Macroeconomics: The Role of Household Leverage, by Atif R. Mian and Amir Sufi, NBER Reporter 2011 Number 3, Research Summary: The increase in household leverage prior to the most recent recession was stunning by any historical comparison. From 2001 to 2007, household debt doubled, from $7 trillion to $14 trillion. The household debt-to-income ratio increased by more during these six years than it had in the prior 45 years. In fact, the household debt-to-income ratio in 2007 was higher than at any point since 1929. Our research agenda explores the causes and consequences of this tremendous rise in household debt. Why did U.S. households borrow so much and in such a short span of time? What factors triggered the slowdown and collapse of the real economy? Did household leverage amplify macroeconomic shocks and make a quick recovery less likely? How do politics constrain policy responses to an economic crisis?

While the focus of our research is on the recent U.S. economic downturn, we believe the implications of our work are wider. For example, both the Great Depression and Japan's Great Recession were preceded by sharp increases in leverage.1 We believe that understanding the impact of household debt on the economy is crucial to developing a better understanding of the linkages between finance and macroeconomics. ...[continue reading]...

Tuesday, August 09, 2011

Austerity and Anarchy

Via Kevin O'Rourke at The Irish Economy:

Austerity and Anarchy: Budget Cuts and Social Unrest in Europe, 1919-2009, by Jacopo Ponticelli and Hans-Joachim Voth, Discussion Paper No. 8513, August 2011, Centre for Economic Policy Research: Abstract Does fiscal consolidation lead to social unrest? From the end of the Weimar Republic in Germany in the 1930s to anti-government demonstrations in Greece in 2010-11, austerity has tended to go hand in hand with politically motivated violence and social instability. In this paper, we assemble cross-country evidence for the period 1919 to the present, and examine the extent to which societies become unstable after budget cuts. The results show a clear positive correlation between fiscal retrenchment and instability. We test if the relationship simply reflects economic downturns, and conclude that this is not the key factor. We also analyze interactions with various economic and political variables. While autocracies and democracies show broadly similar responses to budget cuts, countries with more constraints on the executive are less likely to see unrest as a result of austerity measures. Growing media penetration does not lead to a stronger effect of cut-backs on the level of unrest.

Wednesday, August 03, 2011

"Animal Spirits, Rational Bubbles and Unemployment in an Old-Keynesian Model"

Roger Farmer:

Animal Spirits, Rational Bubbles and Unemployment in an Old-Keynesian Model, by Roger Farmer, NBER Working Paper No. 17137, June 2011 [open link?]: Abstract This paper presents a model of the macroeconomy in which any unemployment rate may be a steady-state equilibrium and every equilibrium unemployment rate is associated with a different value for the price of assets. To select an equilibrium, I construct a theory in which asset price bubbles are caused by the self-fulfilling animal spirits of market participants, selected by a belief function. In contrast to my earlier work on this topic, asset prices may be unbounded. All of the actors in my model have rational expectations and the asset price bubbles that occur are individually rational, even though the equilibria of the model are socially inefficient. My work opens the door for a new class of theories in which market psychology, captured by the belief function, plays an independent role in helping us to understand economic crises.

Monday, August 01, 2011

"What Should Be Done About the Private Money Market?"

Morgan Ricks argues that we are still vulnerable to runs on the shadow banking system, and that the "moneyness" of assets traded in these markets will require the use of regulatory approaches similar to those used to stabilize the traditional banking system:

What Should Be Done About the Private Money Market?, by Morgan Ricks: What should be done about the private money market? It is widely recognized that this market was at the center of the recent financial crisis. Indeed, very nearly the entire emergency response to the financial crisis was aimed at stabilizing this market. Yet recent and proposed reform measures have done little to address this market squarely.
It is important to be precise about terminology. The term “private money market” refers to the multi-trillion dollar market for short-term IOUs that are neither issued by nor guaranteed by the federal government. This market includes repurchase agreements (“repo”), asset-backed commercial paper (“ABCP”), uninsured deposit obligations, and so-called Eurodollar obligations of foreign banks. It also includes the “shares” of money market mutual funds. ...
The recent crisis witnessed a massive run on the private money market (also called the “shadow banking system”). And the federal government responded with a massive intervention. But why intervene? What would have been so bad about widespread defaults by issuers of these instruments? In my recent article, Regulating Money Creation After the Crisis, published in the Harvard Business Law Review, I provide one possible answer. Specifically, I argue that the instruments of the private money market have important properties of money. Accordingly, widespread defaults on these instruments should be expected to generate adverse monetary consequences.
This argument echoes Milton Friedman’s and Anna Schwartz’s influential argument about the causes of the Great Depression. In their monumental Monetary History of the United States, they traced the origins of the Great Depression to a massive monetary contraction brought about by the collapse of the banking system. ...
Does the Friedman-Schwartz logic apply to the private money market? The argument is admittedly somewhat counterintuitive. Unlike bank demand deposits, most private money market instruments do not function as a “medium of exchange”—the sine qua non of “money.” Nevertheless, the article offers both theoretical support and empirical evidence for the “moneyness” of money market instruments. It also shows that money market instruments are in fact treated like demand deposit obligations—and differently from ordinary debt instruments—in a variety of legal, accounting, and market contexts. In other words, these instruments are widely acknowledged as having money-like attributes, in a way that ordinary (capital market) debt instruments are not.
This line of reasoning poses a problem for traditional financial regulation. Suppose for the moment that money market instruments do indeed serve an important monetary function. ... Suppose also that defaults on these instruments, like defaults on deposits, amount to a contraction in the money supply, with the attendant macroeconomic consequences that Friedman and Schwartz identified. If these consequences provide a sound economic justification for the extraordinary regulation of depository banks—not to mention the special support facilities to which depository banks have access—does that rationale not apply with equal force to issuers of private money market instruments? In other words, does our special regulatory system for depository firms rest on an arbitrary and formalistic distinction?
My paper argues that it does. More generally, it finds reasons to favor establishing money creation as a sovereign responsibility by means of a public-private partnership system—in effect, recognizing money creation as a public good. (This is just what modern bank regulation has done for decades.) Logically, this approach would entail disallowing access to money market financing by firms not meeting the applicable regulatory criteria—just as firms not licensed as banks are legally prohibited from issuing deposit liabilities.
Against this backdrop, the article reviews the Dodd-Frank Act’s approach to regulating money creation. It finds reasons to doubt that the new law will be conducive to stable conditions in the money market.
The full paper is available for download on the Harvard Business Law Review website here.

Wednesday, July 06, 2011

Are Working Papers Working?

Should we abandon working papers?:

Working Papers are NOT Working, by Berk Özler: ...It is common practice in economics to publish working papers. There are formal working paper series such as NBER, BREAD, IZA, World Bank Policy Research Working Paper Series, etc. With the proliferation of the internet, however, people don’t even need to use these formal working paper series. You can simply post your brand new paper on your website and voilà, you have a working paper: put that into your CV! Journals are giving up double-blind refereeing (AEJ is the latest) because it is too easy to use search engines to find the working paper version (it’s not at all clear that this is good...). But, do the benefits of making these findings public before peer-review outweigh the costs? I recently became very unsure…
In economics, publication lags, even for journals that are fast, can be long: it is not uncommon to see articles that state: Submitted December 2007; accepted August 2010. ... But, research findings are public goods and working papers are a way to get this information out to parties who can benefit from the new information while the paper is under review.
But, that assumes that the findings are ready for public consumption at this preliminary stage. By preliminary, I mean papers that have not yet been seriously reviewed by anyone familiar with the methods and the specific topic. Findings, and particularly interpretations, change between the working paper phase and the published version of a paper: if they didn’t, then we would not need peer-reviewed journals. Sometimes, they change dramatically. ...
When a new working paper comes out, especially one that might be awaited (like the first randomized experiment on microfinance), people rush to read it (or, rather, skim it). It gets downloaded many times, gets blogged about, etc. Then, a year later a new version comes out (maybe it is even the published version). Many iterations of papers simply improve on the original premise, provide more robustness checks, etc. But, interpretations often change; results get qualified; important heterogeneity of impacts is reported. And sometimes, main findings do change. What happens then?
People are busy. Most of them had only read the abstract (and maybe the concluding section) of the first draft working paper to begin with. ... The newer version, other than for a few dedicated followers of the topic or the author, will not be read by many. They will cling to their beliefs based on the first draft: first impressions matter. ...
There is another problem: people who are invested in a particular finding will find it easier to take away a message that confirms their prior beliefs from a working paper. They will happily accept the preliminary findings of the working paper and go on to cite it for a long time (believe me, well past the updated versions of the working paper and even the eventual journal publication). People who don’t buy the findings will also find it easy to dismiss them: the results are not peer-reviewed. At least, the peer-review process brings a degree of credibility to the whole process and makes it harder for people to summarily dismiss findings they don’t want to believe.
I have some firsthand experience with this, as my co-authors and I have a working paper, the findings of which changed significantly over time. In March 2010, we put out a working paper on the role of conditionalities in cash transfer programs, which we also simultaneously submitted to a journal. The paper was reporting one-year effects of an intervention using self-reported data on school participation. ...
What’s the problem? Our findings in the March 2010 version suggested that CCTs that had regular school attendance as a requirement to receive cash transfers did NOT improve school enrollment over and above cash transfers with no strings attached. Our findings in the December 2010 version DID. ...
However, the earlier (and erroneous) finding that conditions did not improve schooling outcomes was news enough that it stuck. Many people, including good researchers, colleagues at the Bank, bloggers, policymakers, think that UCTs are as effective as CCTs in reducing dropout rates – at least in Malawi. And, this is with good reason: it was US who screwed up NOT them! Earlier this year, I had a magazine writer contact me to ask whether there was a new version of the paper because her editor uncovered the updated findings while she was fact-checking the story before clearing it for publication. As recently as yesterday, comments on Duncan Green’s blog suggested that his readers, relying on his earlier blogs and other blogs, are not aware of the more recent findings. Even my research director was misinformed about our findings until he had to cite them in one of his papers and popped into my office.
Many working papers will escape this fate – which is definitely not the norm. But, no one can tell me that working papers don’t improve and change over time as the authors are pushed by reviewers who are doing their best to be skeptical and provide constructive criticism. But, it turns out that those efforts are mainly for the academic crowd or for the few diligent policymakers who are discerning users of evidence. ...
So, what if we chose to not have working papers? There is no doubt that the speed with which journals publish submitted papers would have to change. ...
If we didn't have working papers, we could also go back to double-blind reviews. No, it won’t be perfect, but double-blind was there for a reason. I see serious equity concerns with single-blind reviews ...

Double-blind has its problems as well. If you are active at NBER meetings, at a top university, etc., etc., and you get a paper to referee that you haven't already seen presented somewhere (or at least reviewed as a submission to a conference), often more than once, you will draw conclusions. But single-blind removes all doubt, so it's no better on this score.

On the issue of working papers, sensational results -- the ones most likely to be costly if they change later -- are going to leak in their preliminary form, and they are going to be reported. When that happens, I'd rather that the experts in the field be aware of the paper already or have easy access to it so that they can qualify the results as needed, or at least try to. Without the checks and balances of other researchers to help reporters and policymakers with the interpretation of the results, etc., this could lead to even worse policy errors than before. More generally, I'm not convinced that the costs -- the times when economics working papers have caused changes in policy that are later regretted -- exceed the benefits to other researchers of having this information available sooner rather than later (if only, for example, to know what questions other people are working on, new techniques that are being used, and so on -- the results themselves are not the only way this information is helpful).

Sunday, May 22, 2011

"Using Blackouts to Help Understand the Determinants of Infant Health"

This is based on the work of a new colleague, Alfredo Burlando:

What happens when the power goes out? Using blackouts to help understand the determinants of infant health, by Jed Friedman: Low birth weight, usually defined as less than 2500 grams at birth, is an important determinant of infant mortality. It is also significantly associated with adverse outcomes well into adulthood such as reduced school attainment and lower earnings. Maternal nutrition is a key determinant of low birth weight...

But what about the downside risk of temporary income fluctuations - do short-lived negative income shocks have equally significant effects on low birth weight? Households may be able to prioritize the consumption and care of pregnant mothers during adverse shocks, but of course households must know about the pregnancy in the first place. This knowledge doesn’t usually manifest until after the first 6-8 weeks of pregnancy, and those initial weeks of pregnancy are also critical ones for ensuring the health of the fetus. One recent study by Alfredo Burlando focuses on this critical window of time, when households do not yet have sufficient knowledge and thus do not sufficiently protect themselves against changing economic circumstances.

In May of 2008, the undersea cable that brings power to the Tanzanian island of Zanzibar was ruptured, plunging the island into a blackout that lasted 4 weeks. As a result, households employed in sectors such as manufacturing or tourism that relied on electricity experienced income declines while households in more traditional sectors such as farming did not suffer noticeable shortfalls. Fortunately any income decline was short-lived – the power was only out for 4 weeks – and in a matter of months income in all affected sectors had recovered to previous levels. Despite the brief duration of this income shock, could there have been any long-lasting consequences?

Well it turns out that infants born 7 to 9 months after the blackout were significantly smaller – an average of 75 grams smaller – than infants born within 6 months of the start of the blackout or beyond 9 months after its end. This reduction translates into an 11% increase in the probability of a low-weight birth. Burlando proposes reduced nutritional intake and heightened maternal stress, brought on by the blackout-induced income shock, as the main transmission mechanism for lower birth weights. ...

The findings suggest that women who were known to be pregnant at the time of the blackout, i.e. those who were visibly pregnant, received insurance from the shock, whereas women who did not yet realize they were pregnant (or who had conceived during the blackout) did not receive the same protection.

For me, the takeaway messages from this study are threefold:

  • These findings highlight the importance of behavioral responses and show that people in the face of a crisis can be resilient when they are armed with relevant knowledge – households with women who knew they were pregnant apparently prioritized maternal nutrition. It also underscores the obvious point that any protective program that targets pregnant women faces the challenge of overcoming the informational barriers that prevent early pregnancy awareness.
  • The study also highlights the long-lasting effects of even very brief income shocks if (a) they occur at critical moments in fetal development and (b) households cannot fully smooth consumption or otherwise insure themselves from temporary declines. ...

Friday, May 20, 2011

"There is Something Very Wrong with This Picture"

I've been meaning to highlight this paper by Levy and Kochan, and still hope to do a bit more with it, but for now here's Dani Rodrik:

There is something very wrong with this picture, by Dani Rodrik: This graph is from a new paper by Frank Levy and Tom Kochan, showing trends in labor productivity and compensation since 1980:

Labor productivity increased by 78 percent between 1980 and 2009, but the median compensation (including fringe benefits) of 35-44 year-old males with high school (and no college) education declined by 10 percent in real terms.
Women have in general done better, but two-thirds of women have still seen their pay lag behind productivity.
Levy and Kochan call for a Social Compact to reverse these trends, and outline some of the steps necessary to get there. The paper is very well worth reading.

Policymakers need to focus on job creation much more than they are, but as this graph shows, creating more jobs is only part of the solution to the problems that middle- and lower-class households have been experiencing. We also need to ensure that income is equitably shared, and the paper outlines the steps needed to move in this direction:

The broken link between productivity and wage growth reflects changes in markets, policies, and their enforcement, institutions, and organizational norms and practices that have been evolving for a long time (circa 1980). Given this history, it is clear that the solutions will also need to be multiple and systemic and sustained for a long time. They also will need to match the features of the contemporary economy. The prior Social Compact was well-suited to a production-based economy in which wage increases in manufacturing set the norm for other parts of the economy.

Today, manufacturing can no longer play this catalytic role. Instead, norms and institutions need to support an innovation-knowledge based economy. We outline below a potential combination of actions suited to this task. If the list seems formidable, recall that we are now facing a situation where the economy has stopped working for something between one-half and two-thirds of all American workers.

Many of us have been calling for a New New Deal. I've done so many times over the last several years and I'm far from alone. Unfortunately, there's very little evidence that this is anywhere near the top of the political agenda. So long as those with wealth and power get theirs (and keep filling campaign coffers), it's hard to see that changing.

Tuesday, March 29, 2011

Farmer on Williamson on Farmer and Kocherlakota

I asked Roger Farmer if he'd like to respond to a recent post from Stephen Williamson (it will be helpful to read Williamson's post first):

Farmer on Williamson on Farmer and Kocherlakota: Thanks to Stephen Williamson for publicizing my work and to Mark Thoma for providing a link and invitation to respond. Stephen: in addition to the paper you cited, I just finished an empirical paper on how to explain data without the Phillips Curve, two theoretical papers on why fiscal policy works in the short run (but shouldn’t be used), two papers on rational expectations with Markov switching, and a piece on stochastic overlapping generations models.
The papers you mention in your blog, by Narayana and me, were both presented at a conference in Marseilles last week with not one but two Fed Presidents in attendance: Jim Bullard also gave a paper. Jim presented work that draws on the Benhabib-Schmitt-Grohé-Uribe paper on the perils of Taylor Rules. He sees a real danger of a Japan-style deflation trap happening in the U.S. Narayana gave a paper that combines a liquidity trap model of bubbles in an overlapping generations framework with a labor market based on the idea from my 2010 book, Expectations, Employment and Prices. This book provides a new paradigm that drops the wage bargaining equation from a labor contracting model and replaces it with the assumption that employment is demand determined. This is the same assumption taken up by Narayana in the paper he presented in Marseilles.
The main idea is explained very nicely by one of the anonymous commentators on Stephen’s blog, who said:
“Think of it this way. With a centralized labor market, the real wage is pinned down by the intersection of labor demand and supply. With search, the labor market need not clear: the labor supply FOC is missing, and we need to add something else to close the model. One thing to add is an explicit bargaining model that effectively pins down the wage. An alternative is to say that output is demand-determined, and that the wage is the marginal product of labor at the demand determined level of output. Then firms are on their labor demand curve, but workers are not on their labor supply curve (but the beauty of search - unemployed workers will take a job at any positive wage).”
That’s exactly right. And once there are many possible labor market equilibria, there is room to close the model by bringing back the role of market psychology. That’s what I do in my work, which has room for both involuntary unemployment and animal spirits, the two cornerstones of Keynes’ General Theory that are missing from the macroeconomics that emerged from Samuelson’s interpretation of Keynes.
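A minimal way to formalize the commentator's point, under standard textbook assumptions rather than the specific model in the book: with a production function \(Y = F(L)\), \(F' > 0\), \(F'' < 0\), a demand-determined level of output \(Y^{d}\) pins down employment at \(L^{d} = F^{-1}(Y^{d})\) and the real wage at \(w = F'(L^{d})\). Firms are on their labor demand curve, but with the labor-supply first-order condition dropped, any \(Y^{d}\) consistent with employment below the labor force is an equilibrium, and beliefs about asset values (animal spirits) select among them.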
Stephen professes not to understand the language of aggregate demand and supply. That’s not surprising given how many different ways it’s used. My own preferred interpretation is explained in a piece I wrote for the International Journal of Economic Theory in 2008.
The idea of aggregate demand and supply makes just as much sense as the notion of a microeconomic demand and supply curve as long as one works within a framework where the variables that shift one of the curves do not simultaneously shift the other. That is clearly not true in post-Lucas rational expectations models, which is why the language went out of fashion. It is true in my work.

Monday, March 21, 2011

Imagining a Rejection

This is from Tiago Mata at History of Economics Playground. I don't think he likes Robert Shiller's paper on "Economists as Worldly Philosophers," nor the intrusion on historians' turf:

Bad job, by Tiago: Imagine I write a paper on Behavioral Macroeconomics making off-the-cuff observations about the latest financial products and how my bank manager frames that information, and noting my friends and neighbors’ flight to safety or to risk on the flimsiest of whims. Imagine I make no reference to secondary literature, or to methodology, as I approach the questions.
Were I then to submit this piece for general appreciation – say, get Robert Shiller to referee it – how do you think he would assess my effort?
I am sure he would be fast and dirty in telling me to do something else with my time.
I have not written a paper on Behavioral Macroeconomics and have no intention of doing so. But Shiller has written a working paper, kind of on the history of economics (Cowles Foundation Discussion Paper No. 1788 – Economists as Worldly Philosophers). There is no thread to the argument, no understanding of context, and zero references to the vast body of work by historians on his subject. The working paper, I am sure, will get plenty of readers, downloads and comments. But were I ever to referee it, I would be fast and dirty in telling him to do something else with his time.

Thursday, March 17, 2011

Mankiw and Weinzierl: An Exploration of Optimal Stabilization Policy

I haven't had a chance to read beyond the introduction and conclusion of this paper by Greg Mankiw and Matthew Weinzierl, "An Exploration of Optimal Stabilization Policy," but here are a couple of quick reactions. First, in the paper, in order for there to be a case for fiscal policy at all, the economy must be at the zero bound and the monetary authority must be "unable to commit itself to expansionary future policy." This point about commitment has been made in other papers (I believe Eggertsson, for example, notes this), and I think the credibility of future promises to create inflation is a problem. If so – if the Fed cannot credibly commit to future inflationary policy – then this paper provides a basis for, not against, fiscal policy when the economy is stuck at the zero bound.

Second, they note in the paper that tax policy can do a better job of replicating the flexible price equilibrium in terms of the allocation of resources, and hence tax policy should be used instead of government spending. However, since I think that there is a strong case that we are short on infrastructure, and that public goods problems prevent the private sector from providing optimal quantities of these goods on its own, I don't see the distributional issues as an important objection to government spending at present.

Here's the introduction to the paper:

An Exploration of Optimal Stabilization Policy, by N. Gregory Mankiw and Matthew Weinzierl, March 8, 2011: 1 Introduction What is the optimal response of monetary and fiscal policy to an economy-wide decline in wealth and aggregate demand? This question has been at the forefront of many economists' minds over the past several years. In the aftermath of the 2008-2009 housing bust, financial crisis, and stock market decline, people were feeling poorer than they did a few years earlier and, as a result, were less eager to spend. The decline in the aggregate demand for goods and services led to the most severe recession in a generation or more.
The textbook answer to such a situation is for policymakers to use the tools of monetary and fiscal policy to prop up aggregate demand. And, indeed, during this recent episode, the Federal Reserve reduced the federal funds rate -- its primary policy instrument -- almost all the way to zero. With monetary policy having used up its ammunition of interest rate cuts, economists and policymakers increasingly looked elsewhere for a solution. In particular, they focused on fiscal policy and unconventional instruments of monetary policy.
To traditional Keynesians, the solution is startlingly simple: The government should increase its spending to make up for the shortfall in private spending. Indeed, this was a main motivation for the $800 billion stimulus package proposed by President Obama and passed by Congress in early 2009. The logic behind this policy should be familiar to anyone who has taken a macroeconomics principles course anytime over the past half century.
Yet many Americans (including quite a few congressional Republicans) are skeptical that increased government spending is the right policy response. They are motivated by some basic economic and political questions: If we as individual citizens are feeling poorer and cutting back on our spending, why should our elected representatives in effect reverse these private decisions by increasing spending and going into debt on our behalf? If the goal of government is to express the collective will of the citizenry, shouldn't it follow the lead of those it represents by tightening its own belt?
Traditional Keynesians have a standard answer to this line of thinking. According to the paradox of thrift, increased saving may be individually rational but collectively irrational. As individuals try to save more, they depress aggregate demand and thus national income. In the end, saving might not increase at all. Increased thrift might lead only to depressed economic activity, a malady that can be remedied by an increase in government purchases of goods and services.
The goal of this paper is to address this set of issues in light of modern macroeconomic theory. Unlike traditional Keynesian analysis of fiscal policy, modern macro theory begins with the preferences and constraints facing households and firms and builds from there. This feature of modern theory is not a mere fetish for microeconomic foundations. Instead, it allows policy prescriptions to be founded on the basic principles of welfare economics. This feature seems particularly important for the case at hand, because the Keynesian recommendation is to have the government undo the actions that private citizens are taking on their own behalf. Figuring out whether such a policy can improve the well-being of those citizens is the key issue, a task that seems impossible to address without some reliable measure of welfare.

Continue reading "Mankiw and Weinzierl: An Exploration of Optimal Stabilization Policy" »

Monday, March 14, 2011

"The Internet and Local Wages: A Puzzle"

This is from a description of new research forthcoming in the American Economic Review, “The Internet and Local Wages: A Puzzle,” by Avi Goldfarb, Chris Forman and Shane Greenstein:

What has the Internet Done for the Economy?, Kellogg Insight: ...There is widespread optimism among media commentators and policy makers that the Internet erases geographic and socioeconomic boundaries. The Death of Distance and The World Is Flat, two books that espouse that rosy view, were bestsellers. But in the early days of the Internet, the income gap between the upper and middle classes actually began to grow. “We thought it was just a very natural question to ask: is the Internet responsible?” Greenstein says.
Misplaced Optimism
The researchers studied trends from 1995 to 2000 in several large sets of data, including the Quarterly Census of Employment and Wages—which gives county-level information on average weekly wages and employment—and the Harte Hanks Market Intelligence Computer Intelligence Technology Database, which holds survey information about how firms use the Internet. In total, the researchers included relevant data for nearly 87,000 private companies with more than 100 employees each. Based on their older work, they focused only on advanced Internet technologies.
Out of about 3,000 counties in the U.S., in only 163 did business adoption of Internet technologies correlate with wage and employment growth, the study found. All of these counties had populations above 150,000 and were in the top quarter of income and education levels before 1995. Between 1995 and 2000, they showed a 28 percent average increase in wages, compared with a 20 percent increase in other counties (Figure 1).

Figure 1. Advanced Internet investment and wage growth by county type.

Why did the Internet make such big waves in these few areas? Greenstein believes the reason was that these areas already had sophisticated companies and the communications infrastructure needed to seize on the Internet’s opportunities. But there are other possibilities. The impact could have been due to a well-known phenomenon called “biased technical change,” which means that new technologies can thrive only in places with skilled workers who know how to use them. Or it could have been because cities offered certain advantages over more remote areas—denser labor markets, better communication, tougher competition.
“Each one of those explanations is plausible in our data, and probably explains a piece of it. But none of them by themselves can explain the whole story,” Greenstein says. “It’s really a puzzle.” ...

Wednesday, March 09, 2011

Inequality and the Distribution of Human Capital

Does skill-biased technological change explain inequality?:

The Wage Premium Puzzle and the Quality of Human Capital, by Milton H. Marquis, Bharat Trehan, and Wuttipan Tantivong: Abstract The wage premium for high-skilled workers in the United States, measured as the ratio of the 90th-to-10th percentiles from the wage distribution, increased by 20 percent from the 1970s to the late 1980s. A large literature has emerged to explain this phenomenon. A leading explanation is that skill-biased technological change (SBTC) increased the demand for skilled labor relative to unskilled labor. In a calibrated vintage capital model with heterogeneous labor, this paper examines whether SBTC is likely to have been a major factor in driving up the wage premium. Our results suggest that the contribution of SBTC is very small, accounting for about 1/20th of the observed increase. By contrast, a gradual and very modest shift in the distribution of human capital across workers can easily account for the large observed increase in wage inequality.
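As a small illustration of the 90th-to-10th percentile measure used in the abstract, here is a sketch in Python; the wage data are simulated from a lognormal distribution and are not the authors' data.

# Compute the 90/10 wage ratio on simulated (lognormal) wage data.
import numpy as np

rng = np.random.default_rng(0)
wages = np.exp(rng.normal(loc=3.0, scale=0.6, size=100_000))

p90, p10 = np.percentile(wages, [90, 10])
print(f"90/10 wage ratio: {p90 / p10:.2f}")  # roughly exp(0.6 * 2 * 1.28) ≈ 4.6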

And what might explain the change in the distribution of human capital? From the conclusions:

factors that alter the skill distribution of the workforce appear to be a promising avenue of future research, since relatively small changes in the skill distribution can have large effects on the wage premia. Such factors could include immigration, population growth, or deficiencies in the educational system in failing to provide job-relevant training. At the high end of the skill distribution, endogenous increases in human capital may be taking place in locations such as Silicon Valley.

Wednesday, February 09, 2011

"The Recent Evolution of the Natural Rate of Unemployment"

Research from Mary Daly, Bart Hobijn, and Rob Valletta of the SF Fed says that the increase in the structural rate of unemployment is relatively small and is expected to be transitory (they estimate that only "about 0.5 percentage points or less" of the increase in unemployment is persistent):

Abstract The U.S. economy is recovering from the financial crisis and ensuing deep recession, but the unemployment rate has remained stubbornly high. Some have argued that the persistent elevation of unemployment relative to historical norms reflects the fact that the shocks that hit the economy were especially disruptive to labor markets and likely to have long-lasting effects. If such structural factors are at work, they would result in a higher underlying natural or nonaccelerating inflation rate of unemployment, implying that conventional monetary and fiscal policy should not be used in an attempt to return unemployment to its pre-recession levels. We investigate the hypothesis that the natural rate of unemployment has increased since the recession began, and if so, whether the underlying causes are transitory or persistent. We begin by reviewing a standard search and matching model of unemployment, which shows that two curves—the Beveridge curve (BC) and the Job Creation curve (JCC)—determine equilibrium unemployment. Using this framework, our joint theoretical and empirical exercise suggests that the natural rate of unemployment has in fact risen over the past several years, by an amount ranging from 0.6 to 1.9 percentage points. This increase implies a current natural rate in the range of 5.6 to 6.9 percent, with our preferred estimate at 6.25 percent. After examining evidence regarding the effects of labor market mismatch, extended unemployment benefits, and productivity growth, we conclude that only a small fraction of the recent increase in the natural rate is likely to persist beyond a five-year forecast horizon.
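For readers unfamiliar with the framework, the Beveridge curve in a textbook search-and-matching model comes from the steady-state condition that flows into and out of unemployment balance (a generic sketch, not the authors' full specification):

\[ u = \frac{s}{s + f(\theta)}, \]

where \(s\) is the separation rate, \(\theta = v/u\) is labor-market tightness (vacancies per unemployed worker), and \(f(\theta)\) is the job-finding rate implied by the matching function. The job creation curve comes from firms' free-entry condition for posting vacancies; the intersection of the two curves determines equilibrium unemployment, the object the authors are estimating.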

Saturday, January 08, 2011

"Have We Underestimated the Likelihood and Severity of Zero Lower Bound Events?"

An interesting new paper from Hess Chung, Jean-Philippe Laforte, and David Reifschneider of the Board of Governors, and John Williams of the SF Fed:

Have We Underestimated the Likelihood and Severity of Zero Lower Bound Events?, by Hess Chung, Jean-Philippe Laforte, David Reifschneider, and John C. Williams, January 7, 2011: Abstract Before the recent recession, the consensus among researchers was that the zero lower bound (ZLB) probably would not pose a significant problem for monetary policy as long as a central bank aimed for an inflation rate of about 2 percent; some have even argued that an appreciably lower target inflation rate would pose no problems. This paper reexamines this consensus in the wake of the financial crisis, which has seen policy rates at their effective lower bound for more than two years in the United States and Japan and near zero in many other countries. We conduct our analysis using a set of structural and time series statistical models. We find that the decline in economic activity and interest rates in the United States has generally been well outside forecast confidence bands of many empirical macroeconomic models. In contrast, the decline in inflation has been less surprising. We identify a number of factors that help to account for the degree to which models were surprised by recent events. First, uncertainty about model parameters and latent variables, which were typically ignored in past research, significantly increases the probability of hitting the ZLB. Second, models that are based primarily on the Great Moderation period severely understate the incidence and severity of ZLB events. Third, the propagation mechanisms and shocks embedded in standard DSGE models appear to be insufficient to generate sustained periods of policy being stuck at the ZLB, such as we now observe. We conclude that past estimates of the incidence and effects of the ZLB were too low and suggest a need for a general reexamination of the empirical adequacy of standard models. In addition to this statistical analysis, we show that the ZLB probably had a first-order impact on macroeconomic outcomes in the United States. Finally, we analyze the use of asset purchases as an alternative monetary policy tool when short-term interest rates are constrained by the ZLB, and find that the Federal Reserve’s asset purchases have been effective at mitigating the economic costs of the ZLB. In particular, model simulations indicate that the past and projected expansion of the Federal Reserve's securities holdings since late 2008 will lower the unemployment rate, relative to what it would have been absent the purchases, by 1½ percentage points by 2012. In addition, we find that the asset purchases have probably prevented the U.S. economy from falling into deflation.

And, from the conclusions:

Continue reading ""Have We Underestimated the Likelihood and Severity of Zero Lower Bound Events?"" »
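One way to see the Great Moderation point in the abstract is a back-of-the-envelope Monte Carlo: simulate a toy economy under the original Taylor rule and count how often the prescribed rate goes negative under two assumed levels of shock volatility. This is purely illustrative (the dynamics and parameters below are invented and have nothing to do with the authors' FRB/US, DSGE, or time-series models); it only shows that calibrating shock volatility to a quiet sample makes ZLB episodes look rare.

```python
import numpy as np

rng = np.random.default_rng(0)

def zlb_frequency(shock_sd, n_quarters=100_000,
                  r_star=2.0, pi_star=2.0, rho=0.8):
    """Share of quarters in which the Taylor (1993) prescription
    i_t = r* + pi_t + 0.5*(pi_t - pi*) + 0.5*gap_t falls below zero."""
    pi, gap = pi_star, 0.0
    hits = 0
    for _ in range(n_quarters):
        # toy AR(1) dynamics for the output gap and inflation (assumed, not estimated)
        gap = rho * gap + rng.normal(0.0, shock_sd)
        pi = pi_star + rho * (pi - pi_star) + rng.normal(0.0, 0.5 * shock_sd)
        i = r_star + pi + 0.5 * (pi - pi_star) + 0.5 * gap
        hits += (i < 0)
    return hits / n_quarters

# Low volatility (a Great Moderation-sized calibration) vs. higher volatility
print(zlb_frequency(shock_sd=1.0))  # ZLB looks like a rare event
print(zlb_frequency(shock_sd=2.0))  # ZLB episodes become far more common
```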

Monday, December 20, 2010

Sumner's Reply on Nominal GDP Targeting

Here's a response to my request for more discussion of the merits of nominal GDP targeting (in both levels and growth rates) relative to a Taylor rule:

Reply to Thoma on NGDP targeting, by Scott Sumner: Mark Thoma recently asked the following question:

So, for those of you who are advocates of nominal GDP targeting and have studied nominal GDP targeting in depth, (a) what important results concerning nominal GDP targeting have I left out or gotten wrong? (b) Why should I prefer one rule over the other? In particular, for proponents of nominal GDP targeting, what are the main arguments for this approach? Why is targeting nominal GDP better than a Taylor rule?

...Thoma raises issues that I don’t feel qualified to discuss, such as learnability.  My intuition says that’s not a big problem, but no one should take my intuition seriously.  What people should take seriously is Bennett McCallum’s intuition (in my view the best in the business), and he also thinks it’s an overrated problem.  I think the main advantage of NGDP targeting over the Taylor rule is simplicity, which makes it more politically appealing.  I’m not sure Congress would go along with a complicated formula for monetary policy that looks like it was dreamed up by academics (i.e. the Taylor Rule.)  In practice, the two targets would be close, as Thoma suggested elsewhere in the post.

Instead I’d like to focus on a passage that Thoma links to, which was written by Bernanke and Mishkin in 1997 ...[continue reading]...
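For reference, the two candidate rules can be written in their simplest textbook forms (generic statements for orientation, not either author's preferred specification). The original Taylor (1993) rule sets

\[
i_t = r^* + \pi_t + 0.5\,(\pi_t - \pi^*) + 0.5\,\tilde{y}_t,
\]

with \(\tilde{y}_t\) the output gap and \(r^*\), \(\pi^*\) the equilibrium real rate and inflation target, while a nominal GDP target asks the central bank to adjust its instrument until expected nominal income sits on the target: in growth-rate form, \(E_t[\Delta \log(P_{t+1} Y_{t+1})] = g\); in level form, \(E_t[P_{t+1} Y_{t+1}] = N_0 (1+g)^{t+1}\), so that past misses have to be made up.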

Just one quick note. I'm not sure I agree that McCallum thinks learnability is an "overrated problem." For example, he cites it as an important factor in arguing against using determinacy as a "selection criterion for rational expectations models":

Another Weakness of “Determinacy” as a Selection Criterion for Rational Expectations Models, by Seonghoon Cho and Bennett T. McCallum: ...It is well-known that dynamic linear rational expectations (RE) models often have multiple solutions... It is also well-known that much of the literature, especially in monetary economics, approaches issues concerning such multiplicities by establishing whether a solution is, or is not, “determinate” in the sense of being the only solution that is dynamically stable. Often, cases featuring “indeterminacy,” defined as the existence of more than one stable solution, are regarded as problematic and to be avoided (by means of policy) if possible.[1] On the other hand, several authors, including Bullard (2006), Cho and Moreno (2008), Evans and Honkapohja (2001), and McCallum (2003, 2007) have— implicitly, in some cases—questioned this practice on various grounds. For example, determinate solutions may not be learnable (Bullard (2006), Bullard and Mitra (2002)) whereas cases with indeterminacy may possess only one “plausible” solution (McCallum (2003, 2007)). In the present paper we present another argument against the use of determinacy as a guide ... to interpretation of outcomes implied by a RE model.

Or, probably better, see his rejoinder to Cochrane's "Can Learnability save New-Keynesian models?," one of many papers he has written on this topic, and see if you conclude that McCallum thinks learnability is an unimportant issue.
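To fix ideas about the determinacy language in the quoted passage, the scalar textbook case is enough (a generic example, not the Cho-McCallum model). Consider

\[
x_t = a\,E_t x_{t+1} + u_t, \qquad u_t \ \text{i.i.d. with mean zero}.
\]

The fundamental (minimum state variable) solution is \(x_t = u_t\), but any process obeying \(x_{t+1} = a^{-1}(x_t - u_t) + \eta_{t+1}\), where \(\eta_{t+1}\) is an arbitrary sunspot with \(E_t \eta_{t+1} = 0\), also satisfies the model. If \(|a| < 1\) those alternative paths explode, so the fundamental solution is the only stable one and the model is determinate; if \(|a| > 1\) there is a continuum of stable solutions and the model is indeterminate. Learnability (E-stability) is a separate criterion, and the papers cited in the quote are precisely about the fact that the two need not line up.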

Sunday, December 19, 2010

"Sunshine: at the IMF, of all Places"

A new paper argues that the best solution to a financial crisis like the one we just experienced is to increase the share of income going to labor:

Sunshine: at the IMF, of all places, by Alex Harrowell, A Fist Full of Euros: So, here we are, after a 2010 of economic horrors. There is extensive debate as to whether the standard tools of economics are even valid... But is anyone at least trying to do something original with the standard toolkit? The DSGE model may be one of John Quiggin’s zombies..., but zombies are notoriously resilient. ...

The answer on this occasion is yes, at least as far as Michael Kumhof and Romain Rancière go. In a new paper, they present a DSGE model... Then, they run a simulation of the macro-economy assuming that there is a negative shock to the bargaining power of labor, resulting in a shift in the income distribution.

In the simulation, the financial sector balloons in size, total private debt in the economy expands hugely, and credit acts as a substitute for rising average wages in the short run. Eventually, the model produces a massive financial crisis and a brutal recession, followed by a blow-out of the government budget.

Your keen and agile minds will not have missed that flat real wages, an increased share of national income going to the top 5%, enormous growth in the financial sector, and a credit-financed consumer boom are exactly what happened to the macroeconomy in the last 30 years. ...

So, what should we do about it? Kumhof and Rancière have something to say about that as well. ... They considered a scenario in which the government took the pain, accepting a large government deficit in order to minimize the impact of the crisis on the real economy. This had the advantage of reducing the fall in GDP, and therefore allowing growth to reduce households’ leverage. They also considered the option of just suffering, which actually increased leverage as incomes fell and the stock of debt remained.

Then they considered two more positive responses to the crisis. One was a debt restructuring, or to be brutal about it, widespread default and bankruptcy. This had the advantage that it does, indeed, reduce the leverage burden and does so cheaply. It also implies the end of the big banks...

The other was to increase labor's share of income. They found that this achieved a faster, bigger, and more lasting reduction in leverage and a reduced probability of crises. In their own words:

...For long-run sustainability a permanent flow adjustment, giving workers the means to repay their obligations over time, is therefore much more successful... But without the prospect of a recovery in the incomes of poor and middle income households over a reasonable time horizon, the inevitable result is that loans keep growing, and therefore so does leverage and the probability of a major crisis that, in the real world, typically also has severe implications for the real economy.

They also argue that the inequality-finance-lending transmission mechanism might explain the global imbalances... However, they haven’t extended the model to include the international dimension yet, although it’s on their agenda for further research.

I’ve waited for this moment, 752 words on, to mention the key detail: this cell of dangerous subversive Bolsheviks is embedded in the International Monetary Fund, and their poisonous hate-writings were published as an IMF Working Paper. Perhaps DSK really has had an influence on the institution? ...
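One small gloss on the stock-versus-flow point in the quoted passage: with household leverage defined as the ratio of the debt stock to income,

\[
\ell_t = \frac{D_t}{Y_t},
\]

a 10 percent fall in income with an unchanged debt stock mechanically raises leverage by about 11 percent (\(1/0.9 \approx 1.11\)), which is the "just suffering" case above, whereas a recovery in labor incomes raises the denominator and, to the extent the extra income is used for repayment, lowers the numerator as well. That is the sense in which the flow adjustment deleverages faster and more durably in their simulations.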

Wednesday, November 24, 2010

"Effects of the Financial Crisis and Great Recession on American Households"

The conclusion to this paper by Michael Hurd and Susann Rohwedder is not very encouraging:

Effects of the Financial Crisis and Great Recession on American Households, by Michael D. Hurd and Susann Rohwedder, NBER [open link]: Introduction ...In this paper we present results about the effects of the economic crisis and recession on American households. They come from high-frequency surveys dedicated to tracking the effects of the crisis and recession that we conducted in the American Life Panel – an Internet survey run by RAND Labor and Population. The first survey was fielded at the beginning of November 2008, immediately following the large declines in the stock market of September and October. The next survey followed three months later in February 2009. Since May 2009 we have collected monthly data on the same households. ...
Conclusions The economic problems leading to the recession began with a housing price bubble in many parts of the country and a coincident stock market bubble. These problems evolved into the financial crisis. ...
According to our measures, almost 40% of households have been affected either by unemployment, negative home equity, arrears on their mortgage payments, or foreclosure. Additionally, economic preparation for retirement, which is hard to measure, has undoubtedly been affected. Many people approaching retirement suffered substantial losses in their retirement accounts: indeed, in the November 2008 survey, 25% of respondents aged 50-59 reported they had lost more than 35% of their retirement savings, and some of them locked in their losses prior to the partial recovery in the stock market by selling out. Some persons retired unexpectedly early because of unemployment, leading to a reduction of economic resources in retirement, which will be felt throughout their retirement years. Some younger workers who have suffered unemployment will not reach their expected level of lifetime earnings and will have reduced resources in retirement as well as during their working years.
Spending has been approximately constant since it reached its minimum in about November 2009. Short-run expectations of stock market gains and housing price gains have recovered somewhat, yet are still rather pessimistic; and, possibly more telling, longer-term expectations for those price increases have declined substantially and have shown no signs of recovery. The implication is that long-run expectations have become pessimistic relative to short-run expectations.
Expectations about unemployment have improved somewhat from their low point in May 2009, but they remain high: they predict that about 18% of workers will experience unemployment over a 12-month period. Despite the public discussion of the necessity to work longer, expectations about working to age 62 among those not currently working declined by 10 percentage points. In our view, this decline reflects long-term pessimism about the likelihood of a successful job search.
The recession officially ended in June 2009. A main component of that judgment is that the economy is no longer declining. According to our data, the economic situation of the typical household is no longer worsening, which is consistent with the end of the recession defined as negative change. However, when defined in terms of levels rather than rates of change, from the point of view of the typical household the Great Recession is not over.
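The levels-versus-rates distinction in that last paragraph is easy to make concrete with a back-of-the-envelope calculation (illustrative numbers, not drawn from the survey): if a household's income falls 5 percent during the downturn and then grows at 2 percent a year, it does not regain its pre-recession level for roughly

\[
\frac{\ln(1/0.95)}{\ln(1.02)} \approx 2.6 \ \text{years}.
\]

The recession measured as a rate of change ends as soon as growth resumes; the recession measured as a level persists until the lost ground is made up.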

Monday, November 01, 2010

The Stagnation Regime of the New Keynesian Model and Current US Policy

My colleague George Evans has an interesting new paper. He shows that when there is downward wage rigidity, the "asymmetric adjustment costs" referenced below, the economy can get stuck in a zone of stagnation. Escaping from the stagnation trap requires a change in government spending or some other shock of sufficient size. If the change in government spending is large enough, the economy will return to full employment. But if the shock to government spending is below the required threshold (as the stimulus package may very well have been), the economy will remain trapped in the stagnation regime.
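A crude way to see the threshold logic is a toy piecewise dynamic (a caricature for intuition only; this is not Evans's model, and the law of motion and all numbers below are invented):

```python
# Caricature of a stagnation regime with a critical output level.
# Not Evans's model: the dynamics and numbers are invented solely to
# illustrate the threshold logic described in the paragraph above.

Y_STAR = 100.0   # targeted steady-state output
Y_CRIT = 90.0    # critical level; above it, normal stabilizing forces resume
Y_LOW = 80.0     # output level the economy is stuck at in the stagnation regime

def next_output(y, g):
    """One period of the toy dynamics with a government spending boost g."""
    y = y + g
    if y > Y_CRIT:
        # normal regime: output closes half the remaining gap to target each period
        return y + 0.5 * (Y_STAR - y)
    # stagnation regime: the boost dissipates and output falls back
    return Y_LOW

def simulate(g_initial, periods=12, y0=Y_LOW):
    """Apply a temporary spending boost in the first period only, then withdraw it."""
    path = [y0]
    for t in range(periods):
        g = g_initial if t == 0 else 0.0
        path.append(next_output(path[-1], g))
    return path

print(round(simulate(g_initial=5.0)[-1], 2))   # below the threshold: stuck near 80
print(round(simulate(g_initial=15.0)[-1], 2))  # above the threshold: converges toward 100
```

A stimulus below the critical size leaves the economy where it started; one above it tips output past the threshold, after which the ordinary stabilizing forces finish the job even as the extra spending is withdrawn.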

(I also highly recommend section 4 on policy implications, which I have included on the continuation page. It discusses fiscal policy options, quantitative easing, how to help state and local governments, and other policies that could help to get us out of the stagnation regime):

The Stagnation Regime of the New Keynesian Model and Current US Policy, by George Evans: 1 Introduction The economic experiences of 2008-10 have highlighted the issue of appropriate macroeconomic policy in deep recessions. A particular concern is what macroeconomic policies should be used when slow growth and high unemployment persist even after the monetary policy interest rate instrument has been at or close to the zero net interest rate lower bound for a sustained period of time. In Evans, Guse, and Honkapohja (2008) and Evans and Honkapohja (2010), using a New Keynesian model with learning, we argued that if the economy is subject to a large negative expectational shock, such as plausibly arose in response to the financial crisis of 2008-9, then it may be necessary, in order to return the economy to the targeted steady state, to supplement monetary policy with fiscal policy, in particular with temporary increases in government spending.
The importance of expectations in generating a “liquidity trap” at the zero-lower bound is now widely understood. For example, Benhabib, Schmitt-Grohe, and Uribe (2001b), Benhabib, Schmitt-Grohe, and Uribe (2001a) show the possibility of multiple equilibria under perfect foresight, with a continuum of paths to an unintended low or negative inflation steady state.[1] Recently, Bullard (2010) has argued that data from Japan and the US over 2002-2010 suggest that we should take seriously the possibility that “the US economy may become enmeshed in a Japanese-style deflationary outcome within the next several years.”
The learning approach provides a perspective on this issue that is quite different from the rational expectations results.[2] As shown in Evans, Guse, and Honkapohja (2008) and Evans and Honkapohja (2010), when expectations are formed using adaptive learning, the targeted steady state is locally stable under standard policy, but it is not globally stable. However, the potential problem is not convergence to the deflation steady state, but instead unstable trajectories. The danger is that sufficiently pessimistic expectations of future inflation, output and consumption can become self-reinforcing, leading to a deflationary process accompanied by declining inflation and output. These unstable paths arise when expectations are pessimistic enough to fall into what we call the “deflation trap.” Thus, while in Bullard (2010) the local stability results of the learning approach to expectations are characterized as one of the forms of denial of “the peril,” the learning perspective is actually more alarmist in that it takes seriously these divergent paths.
As we showed in Evans, Guse, and Honkapohja (2008), in this deflation trap region aggressive monetary policy, i.e. immediate reductions in interest rates to close to zero, will in some cases avoid the deflationary spiral and return the economy to the intended steady state. However, if the pessimistic expectation shock is too large, then temporary increases in government spending may be needed. The policy response in the US, UK and Europe has to some extent followed the policies advocated in Evans, Guse, and Honkapohja (2008). Monetary policy has been quick, decisive and aggressive, with, for example, the US federal funds rate reduced to near zero levels by the end of 2008. In the US, in addition to a variety of less conventional interventions in the financial markets by the Treasury and the Federal Reserve, including the TARP measures in late 2008 and a large-scale expansion of the Fed balance sheet designed to stabilize the banking system, there was the $727 billion ARRA stimulus package passed in February 2009.
While the US economy has stabilized, the recovery has to date been weak and the unemployment rate has been both very high and roughly constant for about one year. At the same time, although inflation is low, and hovering on the brink of deflation, we have not seen the economy recording large and increasing deflation rates.[3] From the viewpoint of Evans, Guse, and Honkapohja (2008), various interpretations of the data are possible, depending on one’s view of the severity of the initial negative expectations shock and the strength of the monetary and fiscal policy impacts. However, since recent US (and Japanese) data may also be consistent with convergence to a deflation steady state, it is worth revisiting the issue of whether this outcome can in some circumstances arise under learning.
In this paper I develop a modification of the model of Evans, Guse, and Honkapohja (2008) that generates a new outcome under adaptive learning. Introducing asymmetric adjustment costs into the Rotemberg model of price setting leads to the possibility of convergence to a stagnation regime following a large pessimistic shock. In the stagnation regime, inflation is trapped at a low steady deflation level, consistent with zero net interest rates, and there is a continuum of consumption and output levels that may emerge. Thus, once again, the learning approach raises the alarm concerning the evolution of the economy when faced with a large shock, since the outcome may be persistently and inefficiently low levels of output. This is in contrast to the rational expectations approach of Benhabib, Schmitt-Grohe, and Uribe (2001b), in which the deflation steady state has output levels that are not greatly different from the targeted steady state.
In the stagnation regime, fiscal policy, taking the form of temporary increases in government spending, is important as a policy tool. Increased government spending raises output, but leaves the economy within the stagnation regime until spending is raised to the point at which a critical level of output is reached. Once output exceeds the critical level, the usual stabilizing mechanisms of the economy resume, pushing consumption, output and inflation back to the targeted steady state, and permitting a scaling back of government expenditure.
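For readers unfamiliar with the multiple-steady-state result cited in the excerpt (Benhabib, Schmitt-Grohe, and Uribe), the standard way to see it is to combine the steady-state Fisher relation with a Taylor-type rule truncated at zero (a textbook sketch, not the learning model of the paper):

\[
i = r + \pi, \qquad i = \max\{0,\; r + \pi^* + \phi\,(\pi - \pi^*)\}, \quad \phi > 1.
\]

The two schedules intersect twice: at the targeted steady state \(\pi = \pi^*\) and at an unintended steady state with \(i = 0\) and \(\pi = -r\), that is, steady deflation at the rate of the real interest rate. The learning results summarized above are about which of these outcomes, if either, the economy converges to when expectations are formed adaptively rather than with perfect foresight, and the stagnation regime described in the paper adds a third possibility in which output itself can settle at an inefficiently low level.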

Here is the section on policy options recommended above (it is relatively non-technical):

Continue reading "The Stagnation Regime of the New Keynesian Model and Current US Policy" »