Category Archive for: Academic Papers [Return to Main]

Monday, October 07, 2013

'Uncertainty Shocks are Aggregate Demand Shocks'

More new research:

Uncertainty Shocks are Aggregate Demand Shocks, by Sylvain Leduc and Zheng Liu: Abstract We present empirical evidence and a theoretical argument that uncertainty shocks act like a negative aggregate demand shock, which raises unemployment and lowers inflation. We measure uncertainty using survey data from the United States and the United Kingdom. We estimate the macroeconomic effects of uncertainty shocks in a vector autoregression (VAR) model, exploiting the relative timing of the surveys and macroeconomic data releases for identification. Our estimation reveals that uncertainty shocks accounted for at least a one percentage point increase in unemployment in the Great Recession and recovery, but did not contribute much to the 1981-82 recession. We present a DSGE model to show that, to understand the observed macroeconomic effects of uncertainty shocks, it is essential to have both labor search frictions and nominal rigidities.
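The identification strategy leans on timing: survey expectations are collected before the corresponding month's macroeconomic data are released, so the uncertainty measure can be ordered first in a recursive VAR. Here is a minimal sketch of that kind of recursive (Cholesky) estimation using simulated stand-in data — an illustration of the general technique, not the authors' dataset or exact specification:

```python
# Hedged sketch: a recursive VAR with the uncertainty proxy ordered first,
# reflecting the timing assumption that surveys predate the data releases.
# All series below are simulated placeholders.
import numpy as np
import pandas as pd
from statsmodels.tsa.api import VAR

rng = np.random.default_rng(42)
T = 300
data = pd.DataFrame(
    rng.standard_normal((T, 4)) * 0.1,
    columns=["uncertainty", "unemployment", "inflation", "policy_rate"],
)

model = VAR(data)                     # column order sets the Cholesky ordering
results = model.fit(maxlags=6, ic="aic")
irf = results.irf(periods=24)
# Column 0 of the orthogonalized IRFs = responses to a one-s.d. uncertainty shock:
print(irf.orth_irfs[:, :, 0])
```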

New Research in Economics: The Return and Risk of Pursuing a BA

This is from Frank Levy at MIT:

I am attaching a paper co-authored with two former students that uses California higher ed data to make stylized calculations of the return and risk of pursuing a BA. The paper makes two main points.
Most studies of the rate of return to college use a best-case scenario in which students earn a degree with certainty in four years. More realistic calculations that account for students who take more than four years and students who drop out without a degree, etc. result in an average rate of return that is lower than it was in 2000 but still exceeds the interest rate on unsubsidized Stafford student loans – i.e. college remains a good investment by the normal criteria.
Most studies present an average rate of return without considering the investment’s risk. Over the last decade, rising tuition and deteriorating earnings for new college graduates (particularly at the bottom of the distribution) have increased the risk of pursuing a BA – e.g. the risk that a graduate at age 30 will have student loan payments that exceed 15% of their income. This growing risk is one explanation for increased skepticism about the value of a college degree despite the apparently high rate of return. It also underlines the importance of students becoming aware of the government’s income-contingent loan repayment plans.
The paper is posted on SSRN.
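As a rough illustration of the debt-burden calculation behind the second point, the standard amortization formula gives the fixed payment on a loan, which can then be compared with a share-of-income threshold. The numbers below are hypothetical, chosen only to show the mechanics, and are not taken from the paper:

```python
# Hedged sketch: is the annual student-loan payment above 15% of income at age 30?
# Loan size, rate, term, and income below are hypothetical.
def annual_payment(principal, rate, years):
    """Fixed payment on a standard amortizing loan."""
    return principal * rate / (1 - (1 + rate) ** -years)

debt, rate, years = 30_000, 0.068, 10     # stand-in for an unsubsidized Stafford loan
income_at_30 = 35_000

payment = annual_payment(debt, rate, years)
share = payment / income_at_30
print(f"annual payment: {payment:,.0f}  share of income: {share:.1%}  at risk: {share > 0.15}")
```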

Friday, October 04, 2013

Minimum Wages and Job Growth: a Statistical Artifact

Arin Dube:

Minimum Wages and Job Growth: a Statistical Artifact, by Arin Dube: In a recent paper, Jonathan Meer and Jeremy West argue that it takes time for employment to adjust in response to a minimum wage hike, making it more difficult to detect an impact by looking at employment levels. In contrast, they argue, impact is easier to discern when considering employment growth. They find that a 10 percent increase in minimum wage is associated with as much as 0.5 percentage point lower aggregate employment growth. These estimates are very large, as John Schmitt explains in a recent post, and far outside the range in the existing literature. But are they right?
As I show in a new paper, the short answer is: no. The negative association between job growth and minimum wages is in the wrong place: it shows up in a sector like manufacturing that has few minimum wage workers, but is absent in low-wage sectors like food services and retail. In other words, it is likely a statistical artifact, and not a causal relationship...
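Dube's placebo logic — a genuine minimum-wage effect should appear in low-wage sectors such as restaurants, not in manufacturing — can be illustrated with a generic state-by-year panel regression with two-way fixed effects. The code below uses simulated data and is only a sketch of the general approach, not Dube's specification:

```python
# Hedged sketch: sector-by-sector regressions of employment growth on the log
# minimum wage with state and year fixed effects, clustered by state.
# The panel is simulated; the coefficients carry no empirical content.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
rows = [{"state": s, "year": y,
         "log_mw": np.log(5.15 + 0.10 * (y - 1990)) + rng.normal(0, 0.05)}
        for s in range(50) for y in range(1990, 2012)]
panel = pd.DataFrame(rows)
for sector in ["restaurants", "manufacturing"]:
    panel[f"growth_{sector}"] = rng.normal(0.01, 0.02, len(panel))

for sector in ["restaurants", "manufacturing"]:
    fit = smf.ols(f"growth_{sector} ~ log_mw + C(state) + C(year)", data=panel).fit(
        cov_type="cluster", cov_kwds={"groups": panel["state"]})
    print(sector, round(fit.params["log_mw"], 4))
# A sizable negative coefficient only in manufacturing -- a sector with few
# minimum-wage workers -- would suggest confounding rather than causation.
```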

Friday, September 27, 2013

Have Blog, Will Travel

I am here today:

Finance and the Wealth of Nations Workshop
Federal Reserve Bank of San Francisco
& The Institute of New Economic Thinking

9:00AM - 9:45AM: David Scharfstein and Robin Greenwood (Harvard Business School), “The Growth of Finance”, Discussant: Bradford DeLong (UC Berkeley)

9:45AM-10:30AM: Ariell Reshef (Virginia) and Thomas Philippon (NYU-Stern), “An International Look at the Growth of Modern Finance”, Discussant: Charles Jones (Stanford GSB)

10:45AM -11:30AM: Andrea Eisfeldt (UCLA-Anderson), Andrew Atkeson (UCLA) and Pierre-Olivier Weill (UCLA), “The Financial Soundness of U.S. Firms 1926–2011: Financial Frictions and the Business Cycle”, Discussant: Jonathan Rose (Federal Reserve Board) 

11:30AM -12:15PM: Ross Levine (UC Berkeley-Haas) and Yona Rubinstein (LSE), “Liberty for More: Finance and Educational Opportunities”, Discussant: Gregory Clark (UC Davis)

1:30PM-2:15PM: Atif Mian (Princeton) and Amir Sufi (U. Chicago-Booth), “The Effect Of Interest Rate And Collateral Value Shocks On Household Spending: Evidence from Mortgage Refinancing”, Discussant: Reuven Glick (SF Fed)

2:15PM-3:00PM: Maurice Obstfeld (UC Berkeley), “Finance at Center Stage: Some Lessons of the Euro Crisis”, Discussant: Giovanni dell'Ariccia (IMF)

3:00PM-3:45PM: Stephen G. Cecchetti and Enisse Kharroubi (BIS), “Why Does Financial Sector Growth Crowd Out Real Economic Growth?”, Discussant: Barry Eichengreen (UC Berkeley) 

4:00PM-4:45PM: Thorsten Beck (Tilburg), “Financial Innovation: The Bright and the Dark Sides”, Discussant: Sylvain Leduc (SF Fed)

4:45PM-5:30PM: Alan M. Taylor (UC Davis), Òscar Jordà (SF Fed/UC Davis), Moritz Schularick (Bonn), “Sovereigns versus Banks: Crises, Causes and Consequences”, Discussant: Aaron Tornell (UCLA)

6:15PM: Keynote Speaker, Introduction: John Williams (SF Fed, President), Lord Adair Turner (INET, Senior Fellow; former Chairman of the UK Financial Services Authority), "Credit, Money and Leverage"

Thursday, September 12, 2013

New Research in Economics: Rational Bubbles

New research on rational bubbles from George Waters:

Dear Mark,

I’d like to take you up on your offer to publicize research. I’ve spent a good chunk of my time (along with Bill Parke) over the last decade developing an asset price model with heterogeneous expectations, where agents are allowed to adopt a forecast based on a rational bubble.

The idea of a rational bubble has been around for quite a while, but there has been little effort to explain how investors would coordinate on such a forecast when there is a perfectly good alternative forecast based on fundamentals. In our model agents are not assumed to use either forecast but are allowed to switch between forecasting strategies based on past performance, according to an evolutionary game theory dynamic.

The primary theoretical point is to provide conditions where agents coordinate on the fundamental forecast in accordance with the strong version of the efficient markets hypothesis. However, it is quite possible that agents do not always coordinate on the fundamental forecast, and there are periods of time when a significant fraction of agents adopt a bubble forecast. There are obvious implications about assuming a unique rational expectation.

A more practical goal is to model the endogenous formation and collapse of bubbles. Bubbles form when there is a fortuitous correlation between some extraneous information and the fundamentals, and agents are sufficiently aggressive about switching to better performing strategies. Bubbles always collapse due to the presence of a small fraction of agents who do not abandon fundamentals, and the presence of a reflective forecast, a weighted average of the other two forecasts, that is the rational forecast in the presence of heterogeneity.

There are strong empirical implications. The asset price is not forecastable, so the weak version of the efficient markets hypothesis is satisfied. Simulated data from the model shows excess persistence and variance in the asset price and ARCH effects and long memory in the returns.

There is much more work to be done to connect the approach to the literature on the empirical detection of bubbles, and to develop models with dynamic switching between heterogeneous strategies in more sophisticated macro models.

A theoretical examination of the model is forthcoming in Macroeconomic Dynamics.

A more user friendly exposition of the model and the empirical implications is here.

An older published paper (Journal of Economic Dynamics and Control 31(7)) focuses on ARCH effects and long memory.

Dr. George Waters
Associate Professor of Economics
Illinois State University
http://www.econ.ilstu.edu/gawater/
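For readers who want a feel for this class of models, below is a minimal Brock–Hommes-style simulation in which agents switch between a fundamentalist forecast and an extrapolative "bubble" forecast according to recent forecast accuracy. It is my own stylized illustration of heterogeneous-expectations switching, not the Parke–Waters model, and the parameter values are arbitrary:

```python
# Hedged sketch: two forecast rules for the price deviation x_t from fundamentals,
# with the share of fundamentalists updated by a logit (discrete-choice) rule
# based on recent squared forecast errors. Parameters are illustrative.
import numpy as np

rng = np.random.default_rng(1)
T, R, beta = 2000, 1.02, 3.0         # periods, gross interest rate, switching intensity
g = R                                 # bubble forecast extrapolates growth at the rate of interest
x = np.zeros(T)                       # price deviation from the fundamental value
n_fund = np.full(T, 0.5)              # share of agents on the fundamental forecast
err_f = err_b = 0.0                   # geometrically discounted squared forecast errors

for t in range(1, T):
    forecast_fund = 0.0               # fundamentalists expect reversion to fundamentals
    forecast_bub = g * x[t - 1]       # bubble believers extrapolate
    avg_forecast = n_fund[t - 1] * forecast_fund + (1 - n_fund[t - 1]) * forecast_bub
    x[t] = avg_forecast / R + rng.normal(0, 0.1)          # temporary equilibrium price
    err_f = 0.9 * err_f + (x[t] - forecast_fund) ** 2     # update the fitness of each rule
    err_b = 0.9 * err_b + (x[t] - forecast_bub) ** 2
    n_fund[t] = 1.0 / (1.0 + np.exp(-beta * (err_b - err_f)))   # switch toward the better rule

print("std of price deviation:", round(x.std(), 3),
      "| mean fundamentalist share:", round(n_fund.mean(), 3))
# Stretches in which n_fund falls well below one behave like temporary bubbles;
# simulated series of this sort typically display the excess persistence and
# volatility clustering described above.
```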

Sunday, September 01, 2013

'Limited Time Offer! Temporary Sales and Price Rigidities'

Are prices rigid? (For background and a discussion of previous evidence on price rigidity at both aggregated and disaggregated levels, see this post.) This is via Carola Binder:

Limited Time Offer! Temporary Sales and Price Rigidities: Even though prices change frequently, this does not necessarily mean that prices are very flexible, according to a new paper by Eric Anderson, Emi Nakamura, Duncan Simester, and Jón Steinsson. In "Informational Rigidities and the Stickiness of Temporary Sales," these authors note that it is important to distinguish temporary sales from regular price changes when analyzing the frequency of price adjustment and the response of prices to macroeconomic shocks.
"The literature on price rigidity can be divided into a literature on "sticky prices" and a literature on "sticky information" (which gives rise to sticky plans). A key question in interpreting the extremely high frequencies of price change observed in retail price data is whether these frequent price changes facilitate rapid responses to changing economic conditions, or whether some of these price changes are part of “sticky plans” that are determined substantially in advance and therefore not responsive to changing conditions. ...
They provide some interesting institutional features of temporary sales and promotions...
They conclude that regular (non-sale) prices exhibit stickiness, while temporary sale prices follow "sticky plans" that are relatively unresponsive in the short run to macroeconomic shocks:
"Our analysis suggests that regular prices are sticky prices that change infrequently but are responsive to macroeconomic shocks, such as the rapid run-up and decline of oil prices. In contrast, temporary sales follow sticky plans. These plans include price discounts of varying depth and frequency across products. But, the plans themselves are relatively unresponsive in the near term to macroeconomic shocks. We believe that this characterization of regular and sale prices as sticky prices versus sticky plans substantially advances an ongoing debate about the extent of retail price fluctuations and offers deeper insight into how retail prices adjust in response to macroeconomic shocks."

Monday, August 19, 2013

'Making Do With Less: Working Harder During Recessions'

New paper:

Making Do With Less: Working Harder During Recessions, by Edward P. Lazear, Kathryn L. Shaw, Christopher Stanton, NBER Working Paper No. 19328 Issued in August 2013: There are two obvious possibilities that can account for the rise in productivity during recent recessions. The first is that the decline in the workforce was not random, and that the average worker was of higher quality during the recession than in the preceding period. The second is that each worker produced more while holding worker quality constant. We call the second effect, “making do with less,” that is, getting more effort from fewer workers. Using data spanning June 2006 to May 2010 on individual worker productivity from a large firm, it is possible to measure the increase in productivity due to effort and sorting. For this firm, the second effect—that workers’ effort increases—dominates the first effect—that the composition of the workforce differs over the business cycle.

Friday, July 26, 2013

How Anti-Poverty Programs Go Viral

This is a summary of research by Esther Duflo, Abhijit Banerjee, Arun Chandrasekhar, and Matthew Jackson on the spread of information about government programs through social networks:

How anti-poverty programs go viral, by Peter Dizikes, MIT News Office: Anti-poverty researchers and policymakers often wrestle with a basic problem: How can they get people to participate in beneficial programs? Now a new empirical study co-authored by two MIT development economists shows how much more popular such programs can be when socially well-connected citizens are the first to know about them.
The economists developed a new measure of social influence that they call “diffusion centrality.” Examining the spread of microfinance programs in rural India, the researchers found that participation in the programs increases by about 11 percentage points when well-connected local residents are the first to gain access to them.
“According to our model, when someone with high diffusion centrality receives a piece of information, it will spread faster through the social network,” says Esther Duflo, the Abdul Latif Jameel Professor of Poverty Alleviation at MIT. “It could thus be a guide for an organization that tries to [place] a piece of information in a network.”
The researchers specifically wanted to study how knowledge about a program spreads by word of mouth, MIT professor Abhijit Banerjee says, because “while there was a body of elegant theory on the relation between what the network looks like and the speed of transmission of information, there was little empirical work on the subject.”
The paper, titled “The Diffusion of Microfinance,” is published today in the journal Science. ...
Microfinance is the term for small-scale lending, popularized in the 1990s, that can help relatively poor people in developing countries gain access to credit they would not otherwise have. The concept has been the subject of extensive political debate; academic researchers are still exploring its effects across a range of economic and geographic settings.
“Microfinance is the type of product which is very interesting to study,” Duflo says, “because in many cases it won’t be well known, and hence there is a role for information diffusion.” Moreover, she notes, “It is also the kind of product on which people could have strongly held … opinions.” So, she says, understanding the relationship between social structure and adoption could be particularly important.
Other scholars believe the findings are valuable. Lori Beaman, an economist at Northwestern University, says the paper “significantly moves forward our understanding of how social networks influence people’s decision-making,” and suggests that the work could spur other on-the-ground research projects that study community networks in action.
“I think this work will lead to more innovative research on how social networks can be used more effectively in promoting poverty alleviation programs in poor countries,” adds Beaman... “Other areas would include agricultural technology adoption … vaccinations for children, [and] the use of bed nets [to prevent malaria], to name just a few.”  ...
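The "diffusion centrality" measure can be described roughly as the expected number of times a node's information reaches others within T rounds of transmission when each link passes information with probability q in each round. A minimal sketch of that calculation, based on my reading of the published definition and simplified for illustration:

```python
# Hedged sketch of diffusion centrality: sum over walk lengths 1..T of (q*A)^t,
# row-summed, where A is the adjacency matrix and q the per-link pass probability.
import numpy as np

def diffusion_centrality(adj, q, T):
    A = q * np.asarray(adj, dtype=float)
    walk, total = np.eye(len(A)), np.zeros_like(A)
    for _ in range(T):
        walk = walk @ A              # expected walks of length 1, 2, ..., T
        total += walk
    return total.sum(axis=1)         # expected receipts of node i's information

# Tiny example: a hub (node 0) linked to three peripheral nodes.
adj = np.array([[0, 1, 1, 1],
                [1, 0, 0, 0],
                [1, 0, 0, 0],
                [1, 0, 0, 0]])
print(diffusion_centrality(adj, q=0.3, T=3))   # the hub scores highest
```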

Thursday, June 06, 2013

Orphanides and Wieland: Complexity and Monetary Policy

A paper I need to read:

Complexity and Monetary Policy, by Athanasios Orphanides and Volker Wieland, CFS Working Paper: Abstract The complexity resulting from intertwined uncertainties regarding model misspecification and mismeasurement of the state of the economy defines the monetary policy landscape. Using the euro area as a laboratory this paper explores the design of robust policy guides aiming to maintain stability in the economy while recognizing this complexity. We document substantial output gap mismeasurement and make use of a new model data base to capture the evolution of model specification. A simple interest rate rule is employed to interpret ECB policy since 1999. An evaluation of alternative policy rules across 11 models of the euro area confirms the fragility of policy analysis optimized for any specific model and shows the merits of model averaging in policy design. Interestingly, a simple difference rule with the same coefficients on inflation and output growth as the one used to interpret ECB policy is quite robust as long as it responds to current outcomes of these variables.
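The "simple difference rule" mentioned in the abstract can be written generically as a first-difference rule of the form (the equal-coefficient feature follows the abstract; the specific coefficient value is not reproduced here):

$$ \Delta i_t = \alpha\,(\pi_t - \pi^*) + \alpha\,(\Delta y_t - \Delta y^*), $$

where $i_t$ is the policy rate, $\pi_t$ inflation, $\Delta y_t$ output growth, and starred terms are targets. Robustness in the paper's sense means that such a rule performs reasonably well when evaluated across all 11 euro-area models rather than being optimized for any single one.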

Wednesday, May 29, 2013

'Inflation in the Great Recession and New Keynesian Models'

DSGE models are "surprisingly accurate":

Inflation in the Great Recession and New Keynesian Models, by Marco Del Negro, Marc P. Giannoni, and Frank Schorfheide: It has been argued that existing DSGE models cannot properly account for the evolution of key macroeconomic variables during and following the recent Great Recession, and that models in which inflation depends on economic slack cannot explain the recent muted behavior of inflation, given the sharp drop in output that occurred in 2008-09. In this paper, we use a standard DSGE model available prior to the recent crisis and estimated with data up to the third quarter of 2008 to explain the behavior of key macroeconomic variables since the crisis. We show that as soon as the financial stress jumped in the fourth quarter of 2008, the model successfully predicts a sharp contraction in economic activity along with a modest and more protracted decline in inflation. The model does so even though inflation remains very dependent on the evolution of both economic activity and monetary policy. We conclude that while the model considered does not capture all short-term fluctuations in key macroeconomic variables, it has proven surprisingly accurate during the recent crisis and the subsequent recovery. [pdf]

Saturday, May 18, 2013

New Research in Economics: Self-interest vs. Greed and the Limitations of the Invisible Hand

This is from Matt Clements, Associate Professor and Chair of the Economics Department at St. Edward’s University:

Dear Professor Thoma,
Allow me to add to the flood of responses you have no doubt received to your offer to help publicize your readers’ research. The paper is called "Self-interest vs. Greed and the Limitations of the Invisible Hand," forthcoming in the American Journal of Economics and Sociology (pdf of the final version). The point of the paper is that greed, as opposed to enlightened self-interest, can be destructive. Markets always operate within some framework of laws and enforcement, and the claim that greed is good implicitly assumes that the legal framework is essentially perfect. To the extent that laws are suboptimal and enforcement is imperfect, greed can easily enrich some market participants at the expense of total surplus. All of this seemed sufficiently obvious to me that at first I wondered if the paper was even worth writing, but the referees were surprisingly difficult to convince.

Thursday, May 16, 2013

New Research in Economics: Terrorism and the Macroeconomy: Evidence from Pakistan

This is from Sultan Mehmood. The article appears in the May edition of Defense and Peace Economics, which the author describes as "a highly specialized journal on conflict":

Terrorism and the Macroeconomy: Evidence from Pakistan, by Sultan Mehmood, Defense and Peace Economics, May 2013: Summary: The study evaluates the macroeconomic impact of terrorism in Pakistan by utilizing terrorism data for around 40 years. Standard time-series methodology allows us to distinguish between short and long run effects, and it also avoids the aggregation problems in cross-country studies. The study is also one of the few that focuses on evaluating the impact of terrorism on a developing country. The results show that cumulatively terrorism has cost Pakistan around 33.02% of its real national income over the sample period.
Motivation: Studies on the impact of terrorism on the economy have focused exclusively on developed countries (see e.g. Eckstein and Tsiddon, 2004). This is surprising because developing countries are not only the hardest hit by terrorism but are also more responsive to external shocks. Terrorism in Pakistan, greater in magnitude than that in Israel, Greece, Turkey, Spain, and the USA combined in terms of incidents and deaths, has consistently made news headlines across the world. Yet terrorism in Pakistan has received relatively little academic attention.
The case of Pakistan is unique for studying the impact of terrorism on the economy for a number of reasons. Firstly, Pakistan has a long and intense history of terrorism, which allows one to capture the effect on the economy in the long run. Secondly, growth-retarding effects of terrorism are hypothesized to be more pronounced in developing rather than developed countries (Frey et al., 2007). Thirdly, the Pakistani economy is exceptionally vulnerable to external shocks, with 12 IMF programmes during 1990-2007 (IMF, 2010, 2011). Lastly, a case study of terrorism for a developing or least-developed country has yet to be done. Scholars of the Copenhagen Consensus studying terrorism note the ‘need for additional case studies, especially of developing countries’ (Enders and Sandler, 2006, p. 31). This research attempts to fill this void.
Main Results: The results of the econometric investigation suggest that terrorism has cost Pakistan around 33.02% of its real national income over the sample time period of 1973–2008, with the adverse impact mainly stemming from a fall in domestic investment and lost workers’ remittances from abroad. This averages to a per annum loss of around 1% of real GDP per capita growth. Moreover, estimates from a Vector Error Correction Model (VECM) show that terrorism impacts the economy primarily through medium- and long-run channels. The article also finds that the negative effect of terrorism lasts for at least two years for most of the macroeconomic variables studied, with the adverse effect on worker remittances, a hitherto ignored factor, lasting for five years. The results are robust to different lag length structures, policy variables, structural breaks and stability tests. Furthermore, it is shown that they are unlikely to be driven by omitted variables, or [Granger type] reverse causality.
Hence, the article finds evidence that terrorism, particularly in emerging economies, might pose significant macroeconomic costs to the economy.
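For readers less familiar with the methodology, the flavor of the VECM exercise described above can be sketched as follows. The variables and data are simulated stand-ins, not the paper's series, and the specification choices (lag length, cointegration rank, deterministic terms) are placeholders:

```python
# Hedged sketch: a small vector error correction model relating output,
# investment, remittances, and a terrorism-intensity index (all simulated).
import numpy as np
import pandas as pd
from statsmodels.tsa.vector_ar.vecm import VECM, select_coint_rank

rng = np.random.default_rng(3)
T = 40                                             # roughly an annual sample of the paper's length
trend = np.cumsum(rng.normal(0.02, 0.05, T))       # a shared stochastic trend
data = pd.DataFrame({
    "log_gdp":         trend + rng.normal(0, 0.02, T),
    "log_investment":  trend + rng.normal(0, 0.05, T),
    "log_remittances": trend + rng.normal(0, 0.08, T),
    "terrorism_index": np.cumsum(rng.normal(0, 1.0, T)),
})

rank = select_coint_rank(data, det_order=0, k_ar_diff=1).rank   # Johansen trace test
model = VECM(data, k_ar_diff=1, coint_rank=max(rank, 1), deterministic="co")
res = model.fit()
print(res.summary())   # long-run (beta) and short-run adjustment (alpha) coefficients
```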

New Research in Economics: Robust Stability of Monetary Policy Rules under Adaptive Learning

I have had several responses to my offer to post write-ups of new research, which I'll be posting over the next few days (thanks!), but I thought I'd start with a forthcoming paper from a former graduate student here at the University of Oregon, Eric Gaus:

Robust Stability of Monetary Policy Rules under Adaptive Learning, by Eric Gaus, forthcoming, Southern Economic Journal: Adaptive learning has been used to assess the viability of a variety of monetary policy rules. If agents using simple econometric forecasts "learn" the rational expectations solution of a theoretical model, then researchers conclude the monetary policy rule is a viable alternative. For example, Duffy and Xiao (2007) find that if monetary policy makers minimize a loss function of inflation, interest rates, and the output gap, then agents in a simple three equation model of the macroeconomy learn the rational expectations solution. On the other hand, Evans and Honkapohja (2009) demonstrate that this may not always be the case. The key difference between the two papers is an assumption about what information the agents of the model have access to. Duffy and Xiao (2007) assume that monetary policy makers have access to contemporaneous variables, that is, they adjust interest rates to current inflation and output. Evans and Honkapohja (2009) instead assume that agents can only form expectations of contemporaneous variables. Another difference between these two papers is that in Duffy and Xiao (2007) agents use all the past data they have access to, whereas in Evans and Honkapohja (2009) agents use a fixed window of data.
This paper examines several different monetary policy rules under a learning mechanism that changes how much data agents are using. It turns out that as long as the monetary policy makers are able to see contemporaneous endogenous variables (output and inflation) then the Duffy and Xiao (2007) results hold. However, if agents and policy makers use expectations of current variables then many of the policy rules are not "robustly stable" in the terminology of Evans and Honkapohja (2009).
A final result in the paper is that the switching learning mechanism can create unpredictable temporary deviations from rational expectations. This is a rather startling result since the source of the deviations is completely endogenous. The deviations appear in a model where there are no structural breaks or multiple equilibria or even an intention of generating such deviations. This result suggests that policymakers should be concerned with the potential that expectations, and expectations alone, can create exotic behavior that temporarily strays from the REE.
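A toy example can convey why the size of the data window matters under adaptive learning. In the simple self-referential model below (my own illustration, not the model in the paper), the price depends on agents' expectation of it, and agents forecast with the mean of a fixed window of past observations:

```python
# Hedged sketch: fixed-window (rolling-sample) learning in the self-referential
# model  p_t = mu + alpha * E_{t-1}[p_t] + noise.  Parameters are illustrative.
import numpy as np

rng = np.random.default_rng(7)
mu, alpha, T, window = 2.0, 0.9, 3000, 50
ree = mu / (1 - alpha)                 # rational expectations equilibrium price
p = [ree + rng.normal(0, 0.5)]

for t in range(1, T):
    expectation = np.mean(p[-window:])          # forecast from a fixed data window
    p.append(mu + alpha * expectation + rng.normal(0, 0.5))

p = np.array(p)
print("REE value:", ree, "| mean of last 500 simulated prices:", round(p[-500:].mean(), 2))
# With all past data (a decreasing gain), the forecast settles down; with a short
# fixed window it keeps fluctuating, and the realized price can wander away from
# the REE for a while -- the sort of endogenous, temporary deviation noted above.
```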

Wednesday, May 15, 2013

Help Me Publicize Your Research

The previous post reminds me of an offer I've been meaning to make to try to help to publicize academic research:

If you have a paper that is about to be published in an economics journal (or was recently published), send me a summary of the research explaining the findings, the significance of the work, etc., and I'd be happy to post the write-up here. It can be micro, macro, econometrics, any topic at all, but I'm hoping for something that goes beyond a mere echo of the abstract, and I want to avoid research not yet accepted for publication (so I don't have to make a judgment on the quality of the research -- I don't always have the time to read papers carefully, and they may not be in my area of expertise).

Homeowners Do Not Increase Consumption When Their Housing Prices Increase?

New and contrary results on the wealth effect for housing:

Homeowners do not increase consumption despite their property rising in value, EurekAlert: Although the value of our property might rise, we do not increase our consumption. This is the conclusion of economists from the University of Copenhagen and the University of Oxford in new research, which runs contrary to the widely held assumption among economists that a rise in house prices naturally leads to a rise in consumption. The results of the study are published in The Economic Journal.
"We argue that leading economists should not wholly be focused on monitoring the housing market. Economists are closely watching the developments on the housing market with the expectation that house prices and household consumption tend to move in tandem, but this is not necessarily the case," says Professor of Economics at University of Copenhagen, Søren Leth-Petersen.
Søren Leth-Petersen has, alongside Professor Martin Browning from the University of Oxford and Associate Professor Mette Gørtz from the University of Copenhagen, tested this widespread assumption of a 'wealth effect' and concluded that the effect is insignificant.
Søren Leth-Petersen explains that when economists use the theory of the 'wealth effect', the presumption is that older homeowners will adjust their consumption the most when house prices change, whilst younger homeowners will adjust their consumption the least. However, according to this research, most homeowners do not feel richer in line with the rise in housing wealth.
"Our research shows that homeowners aged 45 and over, do not increase their consumption significantly when the value of their property goes up, and this goes against the theory of 'wealth effect'. Thus, we are able to reject the theory as the connecting link between rising house prices and increased consumption," explains Søren Leth-Petersen. ...
The research shows that homeowners aged 45 and over did not react significantly to the rise in house prices. However, the younger homeowners, who are typically short of finances, took the opportunity to take out additional consumption loans when given the chance. ...

Tuesday, April 16, 2013

'How Much Unemployment Was Caused by Reinhart and Rogoff's Arithmetic Mistake?'

The work of Reinhart and Rogoff was a major reason for the push for austerity at a time when expansionary policy was called for, i.e. their work supported the bad idea that austerity during a recession can actually be stimulative. It isn't, as events in Europe have shown conclusively.

To be fair, as I discussed here (in "Austerity Can Wait for Sunnier Days") after watching Reinhart give a talk on this topic at an INET conference, she didn't assert that contractionary policy was somehow expansionary (i.e. she did not claim the confidence fairy would more than offset the negative short-run effects of austerity). What she asserted is that pain now -- austerity -- can avoid even more pain down the road in the form of lower economic growth.

Here's the problem. She is right that austerity causes pain in the short run. But according to the review of her work with Rogoff discussed below, the lower growth from debt levels above 90 percent that austerity is supposed to avoid appears to be largely the result of errors in the research. In fact, there is no substantial growth penalty from high debt levels, and hence not much gain from short-run austerity.

Here's Dean Baker with a rundown on the new work (see also Mike Konczal who helped to shed light on this research):

How Much Unemployment Was Caused by Reinhart and Rogoff's Arithmetic Mistake?, by Dean Baker: That's the question millions will be asking when they see the new paper by my friends at the University of Massachusetts, Thomas Herndon, Michael Ash, and Robert Pollin. Herndon, Ash, and Pollin (HAP) corrected the spreadsheets of Carmen Reinhart and Ken Rogoff. They show the correct numbers tell a very different story about the relationship between debt and GDP growth than the one that Reinhart and Rogoff have been hawking.
Just to remind folks, Reinhart and Rogoff (R&R) are the authors of the widely acclaimed book on the history of financial crises, This Time is Different. They have also done several papers derived from this research, the main conclusion of which is that high ratios of debt to GDP lead to long periods of slow growth. Their story line is that 90 percent is a cutoff line, with countries with debt-to-GDP ratios above this level seeing markedly slower growth than countries that have debt-to-GDP ratios below this level. The moral is to make sure the debt-to-GDP ratio does not get above 90 percent.
There are all sorts of good reasons for questioning this logic. First, there is good reason for believing causation goes the other way. Countries are likely to have high debt-to-GDP ratios because they are having serious economic problems.
Second, as Josh Bivens and John Irons have pointed out, the story of the bad growth in high debt years in the United States is driven by the demobilization after World War II. In other words, these were not bad economic times, the years of high debt in the United States had slow growth because millions of women opted to leave the paid labor force.
Third, the whole notion of public debt turns out to be ill-defined. ...
But HAP tells us that we need not concern ourselves with any arguments this complicated. The basic R&R story was simply the result of them getting their own numbers wrong.
After being unable to reproduce R&R's results with publicly available data, HAP were able to get the spreadsheets that R&R had used for their calculations. It turns out that the initial results were driven by simple computational and transcription errors. The most important of these errors was excluding four years of growth data from New Zealand in which it was above the 90 percent debt-to-GDP threshold. ... Correcting this one mistake alone adds 1.5 percentage points to the average growth rate for the high debt countries. This eliminates most of the falloff in growth that R&R find from high debt levels. (HAP find several other important errors in the R&R paper; however, the missing New Zealand years are the biggest part of the story.)
This is a big deal because politicians around the world have used this finding from R&R to justify austerity measures that have slowed growth and raised unemployment. In the United States many politicians have pointed to R&R's work as justification for deficit reduction even though the economy is far below full employment by any reasonable measure. In Europe, R&R's work and its derivatives have been used to justify austerity policies that have pushed the unemployment rate over 10 percent for the euro zone as a whole and above 20 percent in Greece and Spain. In other words, this is a mistake that has had enormous consequences.
In fairness, there has been other research that makes similar claims, including more recent work by Reinhart and Rogoff. But it was the initial R&R papers that created the framework for most of the subsequent policy debate. And HAP have shown that the key finding that debt slows growth was driven overwhelmingly by the exclusion of 4 years of data from New Zealand.
If facts mattered in economic policy debates, this should be the cause for a major reassessment of the deficit reduction policies being pursued in the United States and elsewhere. It should also cause reporters to be a bit slower to accept such sweeping claims at face value.
(Those interested in playing with the data itself can find it at the website for the Political Economic Research Institute.)
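The arithmetic at issue is just a group average computed over a thin cell of country-years, so excluding a handful of observations can move the cell mean substantially. A purely illustrative sketch of that mechanic, with made-up numbers rather than the actual Reinhart-Rogoff data:

```python
# Hedged sketch: how dropping a few country-years from a small "debt > 90%" cell
# shifts its average growth rate. The figures are invented for illustration only.
import pandas as pd

cell = pd.DataFrame({
    "country": ["NZL"] * 5 + ["GBR"] * 4 + ["USA"] * 3,
    "growth":  [8.0, 11.0, -10.0, 10.0, 2.0, 1.0, 0.5, 2.0, 1.5, -0.5, 0.8, 1.2],
})

with_all = cell["growth"].mean()
after_exclusion = cell.iloc[4:]["growth"].mean()   # suppose four NZL years are excluded
print(f"all observations: {with_all:.2f}%   after exclusion: {after_exclusion:.2f}%")
```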

Update: Reinhart-Rogoff Response to Critique - WSJ.

Monday, March 18, 2013

Trickle-Down Consumption

Robert Frank, who has been arguing for such effects, will like the results in this paper from the NBER:

Trickle-Down Consumption, by Marianne Bertrand and Adair Morse, NBER Working Paper No. 18883 Issued in March 2013 [open link]: Have rising income and consumption at the top of income distribution since the early 1980s induced households in the lower tiers of the distribution to consume a larger share of their income? Using state-year variation in income level and consumption in the top first quintile or decile of the income distribution, we find evidence for such “trickle-down consumption.” The magnitude of effect suggests that middle income households would have saved between 2.6 and 3.2 percent more by the mid-2000s had incomes at the top grown at the same rate as median income. Additional tests argue against permanent income, upwardly-biased expectations of future income, home equity effects and upward price pressures as the sole explanations for this finding. Instead, we show that middle income households’ consumption of more income elastic and more visible goods and services appear particularly responsive to top income levels, consistent with supply-driven demand and status-driven explanations for our primary finding. Non-rich households exposed to higher top income levels self-report more financial duress; moreover, higher top income levels are predictive of more personal bankruptcy filings. Finally, focusing on housing credit legislation, we suggest that the political process may have internalized and facilitated such trickle-down consumption.

Here's a nice discussion of the work from Chrystia Freeland (and why it will make Robert Frank happy): Trickle-down consumption.

Friday, March 15, 2013

Journal News (BE Journal of Theoretical Economics)

Resignations at the BE Journal of Theoretical Economics

Friday, March 08, 2013

Measuring the Effect of the Zero Lower Bound on Medium- and Longer-Term Interest Rates

Watching John Williams give this paper:

Measuring the Effect of the Zero Lower Bound on Medium- and Longer-Term Interest Rates, by Eric T. Swanson and John C. Williams, Federal Reserve Bank of San Francisco, January 2013: Abstract The federal funds rate has been at the zero lower bound for over four years, since December 2008. According to many macroeconomic models, this should have greatly reduced the effectiveness of monetary policy and increased the efficacy of fiscal policy. However, standard macroeconomic theory also implies that private-sector decisions depend on the entire path of expected future short-term interest rates, not just the current level of the overnight rate. Thus, interest rates with a year or more to maturity are arguably more relevant for the economy, and it is unclear to what extent those yields have been constrained. In this paper, we measure the effects of the zero lower bound on interest rates of any maturity by estimating the time-varying high-frequency sensitivity of those interest rates to macroeconomic announcements relative to a benchmark period in which the zero bound was not a concern. We find that yields on Treasury securities with a year or more to maturity were surprisingly responsive to news throughout 2008–10, suggesting that monetary and fiscal policy were likely to have been about as effective as usual during this period. Only beginning in late 2011 does the sensitivity of these yields to news fall closer to zero. We offer two explanations for our findings: First, until late 2011, market participants expected the funds rate to lift off from zero within about four quarters, minimizing the effects of the zero bound on medium- and longer-term yields. Second, the Fed’s unconventional policy actions seem to have helped offset the effects of the zero bound on medium- and longer-term rates.
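The measurement idea — compare how strongly yields respond to macroeconomic news in a given period with their responsiveness in a benchmark period when the zero bound was not a concern — can be sketched with a rolling regression. The code below uses simulated surprises and yield changes and is a generic illustration, not the authors' estimator:

```python
# Hedged sketch: time-varying sensitivity of a yield to announcement surprises,
# expressed relative to a benchmark-period sensitivity. All data are simulated.
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(11)
n = 1500
surprise = rng.standard_normal(n)                       # standardized news surprises
true_beta = np.concatenate([np.full(1000, 1.0), np.linspace(1.0, 0.1, 500)])
dyield = true_beta * surprise + rng.normal(0, 0.5, n)   # daily yield change

df = pd.DataFrame({"surprise": surprise, "dyield": dyield})
bench = sm.OLS(df["dyield"][:750], sm.add_constant(df["surprise"][:750])).fit()
b0 = bench.params["surprise"]                           # benchmark sensitivity

window = 250
for end in range(1000, n + 1, 250):
    sub = df.iloc[end - window:end]
    b = sm.OLS(sub["dyield"], sm.add_constant(sub["surprise"])).fit().params["surprise"]
    print(f"obs {end - window}-{end}: sensitivity relative to benchmark = {b / b0:.2f}")
# A ratio near one says yields of that maturity still respond to news (the zero
# bound is not much of a constraint); a ratio near zero says they are pinned.
```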

Tuesday, March 05, 2013

'Are Sticky Prices Costly? Evidence From The Stock Market'

There has been a debate in macroeconomics over whether sticky prices -- the key feature of New Keynesian models -- are actually as sticky as assumed, and how large the costs associated with price stickiness actually are. This paper finds "evidence that sticky prices are indeed costly":

Are Sticky Prices Costly? Evidence From The Stock Market, by Yuriy Gorodnichenko and Michael Weber, NBER Working Paper No. 18860, February 2013 [open link]: We propose a simple framework to assess the costs of nominal price adjustment using stock market returns. We document that, after monetary policy announcements, the conditional volatility rises more for firms with stickier prices than for firms with more flexible prices. This differential reaction is economically large as well as strikingly robust to a broad array of checks. These results suggest that menu costs---broadly defined to include physical costs of price adjustment, informational frictions, etc.---are an important factor for nominal price rigidity. We also show that our empirical results are qualitatively and, under plausible calibrations, quantitatively consistent with New Keynesian macroeconomic models where firms have heterogeneous price stickiness. Since our approach is valid for a wide variety of theoretical models and frictions preventing firms from price adjustment, we provide "model-free" evidence that sticky prices are indeed costly.

Saturday, March 02, 2013

Booms and Systemic Banking Crises

Everyone at the conference seemed to like this model of endogenous banking crises (me included -- this is the non-technical summary, the paper itself is fairly technical):

Booms and Systemic Banking Crises, by Frederic Boissay, Fabrice Collard, and Frank Smets: ... Non-Technical Summary Recent empirical research on systemic banking crises (henceforth, SBCs) has highlighted the existence of similar patterns across diverse episodes. SBCs are rare events. Recessions that follow SBC episodes are deeper and longer lasting than other recessions. And, more importantly for the purpose of this paper, SBCs follow credit intensive booms; "banking crises are credit booms gone wrong" (Schularick and Taylor, 2012, p. 1032). Rare, large, adverse financial shocks could possibly account for the first two properties. But they do not seem in line with the fact that the occurrence of an SBC is not random but rather closely linked to credit conditions. So, while most of the existing macro-economic literature on financial crises has focused on understanding and modeling the propagation and the amplification of adverse random shocks, the presence of the third stylized fact mentioned above calls for an alternative approach.
In this paper we develop a simple macroeconomic model that accounts for the above three stylized facts. The primary cause of systemic banking crises in the model is the accumulation of assets by households in anticipation of future adverse shocks. The typical run of events leading to a financial crisis is as follows. A sequence of favorable, non-permanent, supply shocks hits the economy. The resulting increase in the productivity of capital leads to a demand-driven expansion of credit that pushes the corporate loan rate above steady state. As productivity goes back to trend, firms reduce their demand for credit, whereas households continue to accumulate assets, thus feeding the supply of credit by banks. The credit boom then turns supply-driven and the corporate loan rate goes down, falling below steady state. By giving banks incentives to take more risks or misbehave, too low a corporate loan rate contributes to eroding trust within the banking sector precisely at a time when banks increase in size. Ultimately, the credit boom lowers the resilience of the banking sector to shocks, making systemic crises more likely.
We calibrate the model on the business cycles in the US (post WWII) and the financial cycles in fourteen OECD countries (1870-2008), and assess its quantitative properties. The model reproduces the stylized facts associated with SBCs remarkably well. Most of the time the model behaves like a standard financial accelerator model, but once in a while -- on average every forty years -- there is a banking crisis. The larger the credit boom, (i) the higher the probability of an SBC, (ii) the sooner the SBC, and (iii) -- once the SBC breaks out -- the deeper and the longer the recession. In our simulations, the recessions associated with SBCs are significantly deeper (with a 45% larger output loss) than average recessions. Overall, our results validate the role of supply-driven credit booms leading to credit busts. This result is of particular importance from a policy making perspective as it implies that systemic banking crises are predictable. We indeed use the model to compute the k-step ahead probability of an SBC at any point in time. Fed with actual US data over the period 1960-2011, the model yields remarkably realistic results. For example, the one-year ahead probability of a crisis is essentially zero in the 1960s and 1970s. It jumps up twice during the sample period: in 1982-3, just before the Savings & Loans crisis, and in 2007-9. Although very stylized, our model thus also provides a simple tool to detect financial imbalances and predict future crises.

'Monetary Policy Alternatives at the Zero Bound: Lessons from the 1930s'

This paper from the SF Fed conference might be of interest (it's a bit technical in some sections):

Monetary Policy Alternatives at the Zero Bound: Lessons from the 1930s U.S., by Christopher Hanes, February 2013: Abstract: In recent years economists have debated two unconventional policy options for situations when overnight rates are at the zero bound: boosting expected inflation through announced changes in policy objectives such as adoption of price-level or nominal GDP targets; and large-scale asset purchases to lower long-term rates by pushing down term or risk premiums - “portfolio-balance” effects. American policies in the 1930s, when American overnight rates were at the zero bound, created experiments that tested the effectiveness of the expected-inflation option, and the existence of portfolio-balance effects. In data from the 1930s, I find strong evidence of portfolio-balance effects but no clear evidence of the expected-inflation channel.

(The discussants seemed to like the paper, but the results for the expected-inflation channel drew more questions than the results for the portfolio-balance effects.)

Friday, March 01, 2013

FRBSF Conference: The Past and Future of Monetary Policy

I am here today:

In 1913, President Woodrow Wilson signed the Federal Reserve Act into law, and the Federal Reserve System was created. In recognition of the centennial of the Fed's founding, the Economic Research Department of the Federal Reserve Bank of San Francisco is sponsoring a research conference on the theme “The Past and Future of Monetary Policy.”

Agenda

Morning Session Chair: John Fernald, Federal Reserve Bank of San Francisco

8:15 AM Continental Breakfast

8:50 AM Welcoming Remarks: John Williams, President, Federal Reserve Bank of San Francisco

9:00 AM Robert Hall, Stanford University, Ricardo Reis, Columbia University, Controlling Inflation and Maintaining Central Bank Solvency under New-Style Central Banking, Discussants: John Leahy, New York University, Carl Walsh, University of California, Santa Cruz

10:15 AM Break

10:35 AM Christopher Gust, Federal Reserve Board, David Lopez-Salido, Federal Reserve Board, Matthew Smith, Federal Reserve Board, The Empirical Implications of the Interest-Rate Lower Bound, Discussants: Martin Eichenbaum, Northwestern University, Christopher Sims, Princeton University

11:50 AM Break

12:00 PM Lunch – Market Street Dining Room, Fourth Floor, Introduction: Glenn Rudebusch, Director of Research, Federal Reserve Bank of San Francisco, Speaker: Lars Svensson, Deputy Governor, Riksbank

Afternoon Session Chair: Eric Swanson, Federal Reserve Bank of San Francisco

1:15 PM Anna Cieslak, Kellogg School of Management, Northwestern University, Pavol Povala, Stern School of Business, New York University, Expecting the Fed, Discussants: Kenneth Singleton, Stanford Graduate School of Business, Mark Watson, Princeton University

2:30 PM Break

2:45 PM Frederic Boissay, European Central Bank, Fabrice Collard, University of Bern, Frank Smets, European Central Bank, Booms and Systemic Banking Crises, Discussants: Lawrence Christiano, Northwestern University, Mark Gertler, New York University

4:00 PM Break

4:15 PM Christopher Hanes, SUNY Binghamton, Monetary Policy Alternatives at the Zero Bound: Lessons from the 1930s U.S., Discussants: Gary Richardson, University of California, Irvine, James Hamilton, University of California, San Diego

5:30 PM Reception – West Market Street Lounge, Fourth Floor

6:15 PM Dinner – Market Street Dining Room, Fourth Floor, Introduction: John Williams, President, Federal Reserve Bank of San Francisco, Speaker: Ben Bernanke, Chairman, Federal Reserve Board of Governors

Wednesday, February 27, 2013

2013 West Coast Trade Workshop

If any academics happen to be in Eugene this weekend:

2013 West Coast Trade Workshop (link)

All sessions will be held in the Walnut room in the Inn at the 5th.

Saturday March 2nd

8:15 am-10:15 am Session 1 – Innovation and Growth (Chair – Nicholas Sly)

10:15am - 10:30am Coffee Break

10:30am- 12:30pm Session 2 – International Trade and Worker Skills (Chair – Bruce Blonigen)

12:30pm-2pm Lunch Break

2pm-4pm Session 3 – Foreign Direct Investment (Chair – Jennifer Poole)

Evening: Hosted Group Dinner

Sunday, March 3rd

8:30am-10:30am Session 4 – Consequences of the Trade Liberalization (Chair – Alan Spearot)

10:30am – 10:45am Coffee Break

10:45am-12:45pm Session 5 – Offshoring (Chair – Anca Cristea)

Adjourn

Monday, February 18, 2013

Jordi Galí: Monetary Policy and Rational Asset Price Bubbles

Another paper to read:

Monetary Policy and Rational Asset Price Bubbles, by Jordi Galí, NBER Working Paper No. 18806, February 2013 [open link]: Abstract I examine the impact of alternative monetary policy rules on a rational asset price bubble, through the lens of an overlapping generations model with nominal rigidities. A systematic increase in interest rates in response to a growing bubble is shown to enhance the fluctuations in the latter, through its positive effect on bubble growth. The optimal monetary policy seeks to strike a balance between stabilization of the bubble and stabilization of aggregate demand. The paper's main findings call into question the theoretical foundations of the case for "leaning against the wind" monetary policies.

What's the key mechanism working against the traditional "lean against the wind" policy? That rational bubbles grow at the rate of interest, hence raising (real) interest rates makes the bubble grow faster. From the introduction:

...The role that monetary policy should play in containing ... bubbles has been the subject of a heated debate, well before the start of the recent crisis. The consensus view among most policy makers in the pre-crisis years was that central banks should focus on controlling inflation and stabilizing the output gap, and thus ignore asset price developments, unless the latter are seen as a threat to price or output stability. Asset price bubbles, it was argued, are difficult if not outright impossible to identify or measure; and even if they could be observed, the interest rate would be too blunt an instrument to deal with them, for any significant adjustment in the latter aimed at containing the bubble may cause serious "collateral damage" in the form of lower prices for assets not affected by the bubble, and a greater risk of an economic downturn.

But that consensus view has not gone unchallenged, with many authors and policy makers arguing that the achievement of low and stable inflation is not a guarantee of financial stability and calling for central banks to pay special attention to developments in asset markets. Since episodes of rapid asset price inflation often lead to a financial and economic crisis, it is argued, central banks should act preemptively ... by raising interest rates sufficiently to dampen or bring to an end any episodes of speculative frenzy -- a policy often referred to as "leaning against the wind." ...

Independently of one's position in the previous debate, it is generally taken for granted (a) that monetary policy can have an impact on asset price bubbles and (b) that a tighter monetary policy, in the form of higher short-term nominal interest rates, may help disinflate such bubbles. In the present paper I argue that such an assumption is not supported by economic theory and may thus lead to misguided policy advice, at least in the case of bubbles of the rational type considered here. The reason for this can be summarized as follows: in contrast with the fundamental component of an asset price, which is given by a discounted stream of payoffs, the bubble component has no payoffs to discount. The only equilibrium requirement on its size is that the latter grow at the rate of interest, at least in expectation. As a result, any increase in the (real) rate engineered by the central bank will tend to increase the size of the bubble, even though the objective of such an intervention may have been exactly the opposite. Of course, any decline observed in the asset price in response to such a tightening of policy is perfectly consistent with the previous result, since the fundamental component will generally drop in that scenario, possibly more than offsetting the expected rise in the bubble component.

Below I formalize that basic idea... The paper's main results can be summarized as follows:

  • Monetary policy cannot affect the conditions for existence (or nonexistence) of a bubble, but it can influence its short-run behavior, including the size of its fluctuations.
  • Contrary to the conventional wisdom a stronger interest rate response to bubble fluctuations (i.e. a "leaning against the wind" policy) may raise the volatility of asset prices and of their bubble component.
  • The optimal policy must strike a balance between stabilization of current aggregate demand -- which calls for a positive interest rate response to the bubble -- and stabilization of the bubble itself (and hence of future aggregate demand) which would warrant a negative interest rate response to the bubble. If the average size of the bubble is sufficiently large the latter motive will be dominant, making it optimal for the central bank to lower interest rates in the face of a growing bubble.

...
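In symbols, the argument rests on the standard rational-bubble decomposition (my notation, paraphrasing the verbal description above):

$$ Q_t = Q^F_t + Q^B_t, \qquad Q^F_t = E_t \sum_{k=1}^{\infty}\left(\prod_{j=0}^{k-1}\frac{1}{1+r_{t+j}}\right) D_{t+k}, \qquad E_t\big[Q^B_{t+1}\big] = (1+r_t)\,Q^B_t . $$

A policy-induced rise in $r_t$ lowers the fundamental component $Q^F_t$ by discounting future dividends $D_{t+k}$ more heavily, but raises the expected growth of the bubble component $Q^B_t$ — which is why an observed price decline after a tightening is consistent with a larger bubble.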

But before we lower interest rates in response to signs of an inflating bubble, it would be good to heed this warning from the conclusion:

Needless to say the conclusions should not be taken at face value when it comes to designing actual policies. This is so because the model may not provide an accurate representation of the challenges facing actual policy makers. In particular, it may very well be the case that actual bubbles are not of the rational type and, hence, respond to monetary policy changes in ways not captured by the theory above. In addition, the model above abstracts from many aspects of actual economies that may be highly relevant when designing monetary policy in bubbly economies, including the presence of frictions and imperfect information in financial markets. Those caveats notwithstanding, the analysis above may be useful by pointing out a potentially important missing link in the case for "leaning against the wind" policies.

Wednesday, February 13, 2013

'Asset Quality Misrepresentation by Financial Intermediaries: Evidence from RMBS Market'

I need to read this paper:

Asset Quality Misrepresentation by Financial Intermediaries: Evidence from RMBS Market, by Tomasz Piskorski, Amit Seru, and James Witkin: Abstract: We contend that buyers received false information about the true quality of assets in contractual disclosures by intermediaries during the sale of mortgages in the $2 trillion non-agency market. We construct two measures of misrepresentation of asset quality -- misreported occupancy status of borrower and misreported second liens -- by comparing the characteristics of mortgages disclosed to the investors at the time of sale with actual characteristics of these loans at that time that are available in a dataset matched by a credit bureau. About one out of every ten loans has one of these misrepresentations. These misrepresentations are not likely to be an artifact of matching error between datasets that contain actual characteristics and those that are reported to investors. At least part of this misrepresentation likely occurs within the boundaries of the financial industry (i.e., not by borrowers). The propensity of intermediaries to sell misrepresented loans increased as the housing market boomed, peaking in 2006. These misrepresentations are costly for investors, as ex post delinquencies of such loans are more than 60% higher when compared with otherwise similar loans. Lenders seem to be partly aware of this risk, charging a higher interest rate on misrepresented loans relative to otherwise similar loans, but the interest rate markup on misrepresented loans does not fully reflect their higher default risk. Using measures of pricing used in the literature, we find no evidence that these misrepresentations were priced in the securities at their issuance. A significant degree of misrepresentation exists across all reputable intermediaries involved in sale of mortgages. The propensity to misrepresent seems to be unrelated to measures of incentives for top management, to quality of risk management inside these firms or to regulatory environment in a region. Misrepresentations on just two relatively easy-to-quantify dimensions of asset quality could result in forced repurchases of mortgages by intermediaries in upwards of $160 billion.

Friday, February 08, 2013

Have Blog, Will Travel: NBER Research Meeting

I am here today:

National Bureau of Economic Research Research Meeting
Matthias Doepke and Emmanuel Farhi, Organizers
Federal Reserve Bank of San Francisco
101 Market Street San Francisco, CA

PROGRAM

8:30 am Continental Breakfast

9:00 am Zhen Huo, University of Minnesota Jose-Victor Rios-Rull, University of Minnesota and NBER Engineering a Paradox of Thrift Recession Discussant: Mark Aguiar, Princeton University and NBER

10:00 am Coffee Break

10:30 am Simeon Alder, University of Notre Dame David Lagakos, Arizona State University Lee Ohanian, University of California at Los Angeles and NBER The Decline of the U.S. Rust Belt: A Macroeconomic Analysis Discussant: Leena Rudanko, Boston University and NBER

11:30 am Raj Chetty, Harvard University and NBER John Friedman, Harvard University and NBER Soren Leth-Petersen, University of Copenhagen Torben Nielsen, The Danish National Centre for Social Research Tore Olsen, Harvard University Active vs. Passive Decisions and Crowd-Out in Retirement Savings Accounts: Evidence from Denmark Discussant: Christopher Carroll, Johns Hopkins University

12:30 pm Lunch

1:30 pm Lawrence Christiano, Northwestern University and NBER Martin Eichenbaum, Northwestern University and NBER Mathias Trabandt, Federal Reserve Board Unemployment and Business Cycles Discussant: Robert Shimer, University of Chicago and NBER

2:30 pm Coffee Break

3:00 pm Andrew Atkeson (University of California at Los Angeles and NBER), Andrea Eisfeldt (University of California at Los Angeles), and Pierre-Olivier Weill (University of California at Los Angeles and NBER), "The Market for OTC Derivatives." Discussant: Gustavo Manso, University of California at Berkeley

4:00 pm Greg Kaplan (Princeton University and NBER) and Guido Menzio (University of Pennsylvania and NBER), "Shopping Externalities and Self-Fulfilling Unemployment Fluctuations." Discussant: Martin Schneider, Stanford University and NBER

5:00 pm Adjourn

5:15 pm Reception and Dinner

Monday, January 28, 2013

Gorton and Ordonez: The Supply and Demand for Safe Assets

I need to read this:

The Supply and Demand for Safe Assets, by Gary Gorton and Guillermo Ordonez, January 2013, NBER [open link]: Abstract There is a demand for safe assets, either government bonds or private substitutes, for use as collateral. Government bonds are safe assets, given the government’s power to tax, but their supply is driven by fiscal considerations, and does not necessarily meet the private demand for safe assets. Unlike the government, the private sector cannot produce riskless collateral. When the private sector reaches its limit (the quality of private collateral), government bonds are net wealth, up to the government’s own limits (taxation capacity). The economy is fragile to the extent that privately-produced safe assets are relied upon. In a crisis, government bonds can replace private assets that do not sustain borrowing anymore, raising welfare.

Tuesday, January 22, 2013

'Wealth Effects Revisited: 1975-2012'

Housing cycles matter:

Wealth Effects Revisited: 1975-2012, by Karl E. Case, John M. Quigley, Robert J. Shiller, NBER Working Paper No. 18667, January 2013 [open link, previous version]: We re-examine the links between changes in housing wealth, financial wealth, and consumer spending. We extend a panel of U.S. states observed quarterly during the seventeen-year period, 1982 through 1999, to the thirty-seven year period, 1975 through 2012Q2. Using techniques reported previously, we impute the aggregate value of owner-occupied housing, the value of financial assets, and measures of aggregate consumption for each of the geographic units over time. We estimate regression models in levels, first differences and in error-correction form, relating per capita consumption to per capita income and wealth. We find a statistically significant and rather large effect of housing wealth upon household consumption. This effect is consistently larger than the effect of stock market wealth upon consumption.
In our earlier version of this paper we found that households increase their spending when house prices rise, but we found no significant decrease in consumption when house prices fall. The results presented here with the extended data now show that declines in house prices stimulate large and significant decreases in household spending.
The elasticities implied by this work are large. An increase in real housing wealth comparable to the rise between 2001 and 2005 would, over the four years, push up household spending by a total of about 4.3%. A decrease in real housing wealth comparable to the crash which took place between 2005 and 2009 would lead to a drop of about 3.5%.

Thursday, January 03, 2013

Bad Advice from Experts, Herding, and Bubbles

Here's the introduction to a paper I'm giving at the AEA meetings (journal version of paper). The model in the paper, which is a variation of the Brock and Hommes (1998) generalization of the Lucas (1978) asset pricing model, shows that bad advice from experts can increase the likelihood of harmful financial bubbles:

Bad Advice from Experts, Herding, and Bubbles: The belief that housing prices would continue to rise into the foreseeable future was an important factor in creating the housing price bubble. But why did people believe this? Why did they become convinced, as they always do prior to a bubble, that this time was different? One reason is bad advice from academic and industry experts. Many people turned to these experts when housing prices were inflating and asked if we were in a bubble. The answer in far too many cases – almost all when they had an opinion at all – was that no, this wasn’t a bubble. Potential homebuyers were told there were real factors such as increased immigration, zoning laws, resource constraints in an increasingly globalized economy, and so on that would continue to drive up housing prices.

When the few economists who did understand that housing prices were far above their historical trends pointed out that a typical bubble pattern had emerged – both Robert Shiller and Dean Baker come to mind – they were mostly ignored. Thus, both academic and industry economists helped to convince people that the increase in prices was permanent, and that they ought to get in on the housing boom as soon as possible.

But why did so few economists warn about the bubble? And more importantly for the model presented in this paper, why did so many economists validate what turned out to be destructive trend-chasing behavior among investors?

One reason is that economists have become far too disconnected from the lessons of history. As courses in economic history have faded from graduate programs in recent decades, economists have become much less aware of the long history of bubbles. This has caused a diminished ability to recognize the housing bubble as it was inflating. And worse, the small amount of recent experience we have with bubbles has led to complacency. We were able to escape, for example, the stock bubble crash of 2001 without too much trouble. And other problems such as the Asian financial crisis did not cause anything close to the troubles we had after the housing bubble collapsed, or the troubles other bubbles have caused throughout history.

Economists did not have the historical perspective they needed, and there was confidence that even if a bubble did appear policymakers would be able to clean it up without too much damage. As Robert Lucas said in his 2003 presidential address to the American Economic Association, the “central problem of depression-prevention has been solved.” We no longer needed to worry about big financial meltdowns of the type that caused so many problems in the 1800s and early 1900s. But in reality economists hardly knew what to look for, did not fully understand the dangers, and were hence unconcerned even if they did suspect that housing prices were out of line with the underlying fundamentals.

A second factor is the lack of deep institutional knowledge of the markets academic economists study. Theoretical models are idealized, pared down versions of reality intended to capture the fundamental issues relative to the question at hand. Because of their mathematical complexity, macro models in particular are highly idealized and only capture a few real world features such as sticky prices and wages. Economists who were intimately familiar with these highly stylized models assumed they were just as familiar with the markets the models were intended to represent. But the models were not up to the task at hand,[1] and when the models failed to signal that a bubble was coming there was no deep institutional knowledge to rely upon. There was nothing to give the people using these models a hint that they were not capturing important features of real world markets.

These two disconnects – from history and from the finer details of markets – made it much more likely that economists would certify that this time was different, that fundamentals such as population growth, immigration, and financial innovation could explain the run-up in housing prices.

The model in this paper examines the implications of these two disconnects and shows that when experts endorse the idea that this time is different and cause herding toward incorrect beliefs about the future, it increases the likelihood that a large, devastating bubble will occur.
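To make the mechanism concrete, here is a minimal numerical sketch in the spirit of the Brock and Hommes setup the paper builds on. It is not the paper's model, and every parameter value below is purely illustrative: agents split between a fundamentalist forecast and a trend-chasing forecast according to past forecasting performance, and an "endorsement" term stands in for experts validating the trend-chasing view.

```python
import numpy as np

# Minimal Brock-Hommes-style sketch (illustrative only, not the paper's model).
# The price deviation from fundamentals x_t is set by a weighted average of a
# fundamentalist forecast (reversion to zero) and a trend-chasing forecast
# (extrapolation). Weights follow a logit rule based on past squared forecast
# errors, shifted by an "endorsement" term for the trend-chasing view.

def simulate(endorsement, T=100, beta=2.0, g=1.04, seed=0):
    rng = np.random.default_rng(seed)
    x = np.zeros(T)
    err_f = err_c = 0.0                      # running squared forecast errors
    for t in range(1, T):
        f_fund, f_chase = 0.0, g * x[t-1]
        # numerically stable logit weights on (negative) past errors
        u = np.array([-err_f, -err_c + endorsement])
        w = np.exp(beta * (u - u.max()))
        w_chase = w[1] / w.sum()
        x[t] = (1 - w_chase) * f_fund + w_chase * f_chase + rng.normal(0, 0.1)
        err_f = 0.9 * err_f + 0.1 * (x[t] - f_fund) ** 2
        err_c = 0.9 * err_c + 0.1 * (x[t] - f_chase) ** 2
    return np.abs(x).max()

print("max |deviation|, no expert endorsement :", round(simulate(0.0), 2))
print("max |deviation|, experts endorse trend :", round(simulate(2.0), 2))
```

With the endorsement term switched on, the trend-chasing rule attracts nearly all of the weight and deviations from fundamentals grow far larger, a crude stand-in for the herding effect the paper formalizes.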

_____________________

[1] See Wieland and Wolters (2011) for an overview of the forecasting performance of macroeconomic models before, during, and after the crisis.

Monday, December 24, 2012

Smart Machines, A New Guide to Keynes, and The Inefficient Markets Hypothesis

Haven't read these papers yet, but looks like I should (I added open links when I could find them):

First, Sachs and Kotlikoff (anything to keep these two from writing about the deficit and the accumulated debt is, in my view, a plus):

Smart Machines and Long-Term Misery, by Jeffrey D. Sachs, Laurence J. Kotlikoff, NBER Working Paper No. 18629, Issued in December 2012: Are smarter machines our children’s friends? Or can they bring about a transfer from our relatively unskilled children to ourselves that leaves our children – and, indeed, all our descendants – worse off?
This, indeed, is the dire message of the model presented here in which smart machines substitute directly for young unskilled labor, but complement older skilled labor. The depression in the wages of the young then limits their ability to save and invest in their own skill acquisition and physical capital. This, in turn, means the next generation of young, initially unskilled workers encounters an economy with less human and physical capital, which further drives down their wages. This process stabilizes through time, but potentially entails each newborn generation being worse off than its predecessor.
We illustrate the potential for smart machines to engender long-term misery in a highly stylized two-period model. We also show that appropriate generational policy can be used to transform win-lose into win-win for all generations.

Next, Jordi Gali revisits Keynes:

Notes for a New Guide to Keynes (I): Wages, Aggregate Demand, and Employment, by Jordi Galí, NBER Working Paper No. 18651, Issued in December 2012 [open link]: I revisit the General Theory's discussion of the role of wages in employment determination through the lens of the New Keynesian model. The analysis points to the key role played by the monetary policy rule in shaping the link between wages and employment, and in determining the welfare impact of enhanced wage flexibility. I show that the latter is not always welfare improving.

Finally, Roger Farmer, Carine Nourry, and Alain Venditti on whether "competitive financial markets efficiently allocate risk" (according to this, they don't):

The Inefficient Markets Hypothesis: Why Financial Markets Do Not Work Well in the Real World, Roger E.A. Farmer, Carine Nourry, Alain Venditti, NBER Working Paper No. 18647, Issued in December 2012 [open link]: Existing literature continues to be unable to offer a convincing explanation for the volatility of the stochastic discount factor in real world data. Our work provides such an explanation. We do not rely on frictions, market incompleteness or transactions costs of any kind. Instead, we modify a simple stochastic representative agent model by allowing for birth and death and by allowing for heterogeneity in agents' discount factors. We show that these two minor and realistic changes to the timeless Arrow-Debreu paradigm are sufficient to invalidate the implication that competitive financial markets efficiently allocate risk. Our work demonstrates that financial markets, by their very nature, cannot be Pareto efficient, except by chance. Although individuals in our model are rational, markets are not.

Friday, December 21, 2012

'A Pitfall with DSGE-Based, Estimated, Government Spending Multipliers'

This paper, which I obviously think is worth noting, is forthcoming in AEJ Macroeconomics:

A Pitfall with DSGE-Based, Estimated, Government Spending Multipliers, by Patrick Fève,  Julien Matheron, Jean-Guillaume Sahuc, December 5, 2012: 1 Introduction Standard practice in estimation of dynamic stochastic general equilibrium (DSGE) models, e.g. the well-known work by Smets and Wouters (2007), is to assume that government consumption expenditures are described by an exogenous stochastic process and are separable in the households’ period utility function. This standard practice has been adopted in the most recent analyses of fiscal policy (e.g. Christiano, Eichenbaum and Rebelo, 2011, Coenen et al., 2012, Cogan et al., 2010, Drautzburg and Uhlig, 2011, Eggertsson, 2011, Erceg and Lindé, 2010, Fernández-Villaverde, 2010, Uhlig, 2010).
In this paper, we argue that both short-run and long-run government spending multipliers (GSM) obtained in this literature may be downward biased. This is so because the standard approach does not typically allow for the possibility that private consumption and government spending are Edgeworth complements in the utility function[1] and that government spending has an endogenous countercyclical component (automatic stabilizer)... Since, as we show, the GSM increases with the degree of Edgeworth complementarity,... the standard empirical approach may ... result in a downward-biased estimate of the GSM.
In our benchmark empirical specification with Edgeworth complementarity and a countercyclical component of policy, the estimated long-run multiplier amounts to 1.31. Using the same model..., when both Edgeworth complementarity and the countercyclical component of policy are omitted,... the estimated multiplier is approximately equal to 0.5. Such a difference is clearly not neutral if the model is used to assess recovery plans of the same size as those recently enacted in the US. To illustrate this more concretely, we feed the American Recovery and Reinvestment Act (ARRA) fiscal stimulus package into our model. We obtain that omitting the endogenous policy rule at the estimation stage would lead an analyst to underestimate the short-run GSM by slightly more than 0.25 points. Clearly, these are not negligible figures. ...
_____
1 We say that private consumption and government spending are Edgeworth complements/substitutes when an increase in government spending raises/diminishes the marginal utility of private consumption. Such a specification has now become standard, following the seminal work by Aschauer (1985), Bailey (1971), Barro (1981), Braun (1994), Christiano and Eichenbaum (1992), Finn (1998), McGrattan (1994).
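For concreteness, one common way to parameterize this idea in the literature the footnote cites (an illustration, not necessarily the paper's exact specification) is to let households value "effective consumption" that bundles private consumption and government spending:

$$
U(c_t, g_t) \;=\; \frac{\left(c_t + \alpha g_t\right)^{1-\sigma}}{1-\sigma},
\qquad
\frac{\partial^2 U}{\partial c_t \,\partial g_t} \;=\; -\sigma\,\alpha\,\left(c_t + \alpha g_t\right)^{-\sigma-1},
$$

so with $\sigma > 0$ the cross-partial is positive, and consumption and government spending are Edgeworth complements, when $\alpha < 0$; they are substitutes when $\alpha > 0$. The estimated size of the multiplier then hinges on the estimated degree of complementarity.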

Let me also add these qualifications from the conclusion:

In our framework, we have deliberately abstracted from relevant details... However, the recent literature insists on other modeling issues that might potentially affect our results. We mention two of them. First, as put forth by Leeper, Plante and Traum (2010), a more general specification of the government spending rule, lump-sum transfers, and distortionary taxation is needed to properly fit US data. This richer specification includes, in addition to the automatic stabilizer component, a response to government debt and co-movement between tax rates. An important quantitative issue may be to assess which type of stabilization (automatic stabilization and/or debt stabilization) interacts with the estimated degree of Edgeworth complementarity. Second, Fiorito and Kollintzas (2004) have suggested that the degree of complementarity/substitutability between government and private consumption is not homogeneous across types of public expenditures. This suggests disaggregating government spending and inspecting how feedback rules affect the estimated degree of Edgeworth complementarity in this more general setup. These issues will be the object of future research.

Tuesday, December 11, 2012

'What Does the New CRA Paper Tell Us?'

Mike Konczal:

What Does the New Community Reinvestment Act (CRA) Paper Tell Us?, by Mike Konczal: There are two major, critical questions that show up in the literature surrounding the 1977 Community Reinvestment Act (CRA).
The first question is how much compliance with the CRA changes the portfolio of lending institutions. Do they lend more often and to riskier people, or do they lend the same but put more effort into finding candidates? The second question is how much the CRA led to the expansion of subprime lending during the housing bubble. Did the CRA have a significant role in the financial crisis?

There's a new paper on the CRA, Did the Community Reinvestment Act (CRA) Lead to Risky Lending?, by Agarwal, Benmelech, Bergman and Seru, h/t Tyler Cowen, with smart commentary already from Noah Smith. (This blog post will use the ungated October 2012 paper for quotes and analysis.) This is already being used as the basis for an "I told you so!" by the conservative press, which has tried to argue that the second question is most relevant. However, it is important to understand that this paper answers the first question, while, if anything, providing evidence against the conservative case for the second. ...
"the very small share of all higher-priced loan originations that can reasonably be attributed to the CRA makes it hard to imagine how this law could have contributed in any meaningful way to the current subprime crisis." ...

Monday, November 05, 2012

'Managing a Liquidity Trap: Monetary and Fiscal Policy'

I like Stephen Williamson a lot better when he puts on his academic cap. I learned something from this:

Managing a Liquidity Trap: Monetary and Fiscal Policy

I disagree with him about the value of forward guidance (though I wouldn't bet the recovery on that one mechanism alone), but it's a nice discussion of the underlying issues.

I was surprised to see this reference to fiscal policy:

I've come to think of the standard New Keynesian framework as a model of fiscal policy. The basic sticky price (or sticky wage) inefficiency comes from relative price distortions. Particularly given the zero lower bound on the nominal interest rate, monetary policy is the wrong vehicle for addressing the problem. Indeed, in Werning's model we can always get an efficient allocation with appropriately-set consumption taxes (see Correia et al., for example). I don't think the New Keynesians have captured what monetary policy is about.

For some reason, I thought he was adamantly opposed to fiscal policy interventions. But I think I'm missing something here -- perhaps he is discussing what this particular model says, or what NK models say more generally, rather than what he believes and endorses. After all, he's not a fan of the NK framework. In any case, in addition to whatever help monetary policy can provide, as just noted in the previous post I agree that fiscal policy has an important role to play in helping the economy recover.

Maurizio Bovi: Are You a Good Econometrician? No, I am British (With a Response from George Evans)

Via email, Maurizio Bovi describes a paper of his on adaptive learning (M. Bovi (2012). "Are the Representative Agent’s Beliefs Based on Efficient Econometric Models?" Journal of Economic Dynamics and Control). A colleague of mine, George Evans -- a leader in this area -- responds:

Are you a good econometrician? No, I am British!, by Maurizio Bovi*: A typical assumption of mainstream strands of research is that agents’ expectations are grounded in efficient econometric models. Muthian agents are all equally rational and know the true model. The adaptive learning literature assumes that agents are boundedly rational in the sense that they are as smart as econometricians and that they are able to learn the correct model. The predictor choice approach argues that individuals are boundedly rational in the sense that agents switch to the forecasting rule that has the highest fitness. Preferences could generate enduring inertia in the dynamic switching process and a stationary environment for a sufficiently long period is necessary to learn the correct model. Having said this, all the cited approaches typically argue that there is a general tendency to forecast via optimal forecasting models because of the costs stemming from inefficient predictions.
To the extent that the representative agent’s beliefs i) are based on efficient (in the sense of minimum mean squared forecast error, MSE) econometric models, and ii) can be captured by ad hoc surveys, two basic facts emerge, stimulating my curiosity. First, in economic systems where the same simple model turns out to be the best predictor for a sufficient span of time, survey expectations should tend to converge: more and more individuals should learn or select it. Second, the forecasting fitness of this enduring minimum-MSE econometric model should not be further enhanced by the use of information provided by survey expectations. If agents act as if they were statisticians in the sense that they use efficient forecasting rules, then survey-based beliefs must reflect this and cannot contain any statistically significant information that helps reduce the MSE relative to the best econometric predictor. In sum, there could be some value in analyzing hard data and survey beliefs to understand i) whether the latter derive from optimal econometric models and ii) the time connections between survey-declared and efficient model-grounded expectations. By examining real-time GDP dynamics in the UK I have found that, over a time-span of two decades, the adaptive expectations (AE) model systematically outperforms other standard predictors which, as argued by the literature recalled above, should be in the tool-box of representative econometricians (Random Walk, ARIMA, VAR). As mentioned, this peculiar environment should eventually lead to increased homogeneity in best-model-based expectations. However, data collected in the surveys managed by the Business Surveys Unit of the European Commission (European Commission, 2007) highlight that great variety in expectations persists. Figure 1 shows that in the UK the numbers of optimists and pessimists tend to be rather similar at least since the inception of data availability (1985).[1]

[Figure 1: shares of optimistic and pessimistic survey respondents in the UK, 1985 onward]

In addition, evidence points to one-way information flows going from survey data to econometric models. In particular, Granger-causality, variance decomposition and Geweke’s instantaneous feedback tests suggest that the accuracy of the AE forecasting model can be further enhanced by the use of the information provided by the level of disagreement across survey beliefs. That is, as per GDP dynamics in the UK, the expectation feedback system looks like an open loop where possibly non-econometrically based beliefs play a key role with respect to realizations. All this affects the general validity of the widespread assumption that representative agents’ beliefs derive from optimal econometric models.
Results are robust to several methods of quantification of qualitative survey observations as well as to standard forecasting rules estimated both recursively and via optimal-size rolling windows. They are also in line both with the literature supporting the non-econometrically-based content of the information captured by surveys carried out on laypeople and, interpreting MSE as a measure of volatility, with the stylized fact of a positive correlation between dispersion in beliefs and macroeconomic uncertainty.
All in all, our evidence raises some intriguing questions: Why do representative UK citizens seem to be systematically more boundedly rational than is usually hypothesized in the adaptive learning literature and the predictor choice approach? What persistently prevents them from using the most accurate statistical model? Are there econometric (objective) or psychological (subjective) impediments?
____________________
*Italian National Institute of Statistics (ISTAT), Department of Forecasting and Economic Analysis. The opinions expressed herein are those of the author (E-mail mbovi@istat.it) and do not necessarily reflect the views of ISTAT.
[1] The question is “How do you expect the general economic situation in the country to develop over the next 12 months?” Respondents may reply “it will…: i) get a lot better, ii) get a little better, iii) stay the same, iv) get a little worse, v) get a lot worse, vi) I do not know. See European Commission (1997).
References
European Commission (2007). The Joint Harmonised EU Programme of Business and Consumer Surveys, User Guide, European Commission, Directorate-General for Economic and Financial Affairs, July.
M. Bovi (2012). “Are the Representative Agent’s Beliefs Based on Efficient Econometric Models?” Journal of Economic Dynamics and Control DOI: 10.1016/j.jedc.2012.10.005.
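To make the exercise concrete before turning to the response, here is a minimal sketch of the kind of one-step-ahead forecasting horse race Bovi describes, run on simulated data rather than UK real-time GDP. On this artificial series the AR(1) model wins by construction; Bovi's finding is that on the actual UK data the simple adaptive expectations rule comes out on top.

```python
import numpy as np

# A minimal sketch (simulated data, not UK real-time GDP) of a one-step-ahead
# horse race: adaptive expectations (AE), a random walk (RW), and a
# recursively estimated AR(1) are compared by mean squared forecast error.
rng = np.random.default_rng(2)
T, gamma = 300, 0.3
y = np.zeros(T)
for t in range(1, T):                        # persistent "GDP growth" series
    y[t] = 0.5 + 0.7 * y[t - 1] + rng.normal(0, 1.0)

ae_forecast = y[0]                           # AE forecast for the next period
sq_err = {"AE": [], "RW": [], "AR(1)": []}

for t in range(1, T):
    if t >= 30:                              # evaluate after a training window
        # AR(1) coefficients estimated by least squares on data through t-1
        X = np.column_stack([np.ones(t - 1), y[:t - 1]])
        b = np.linalg.lstsq(X, y[1:t], rcond=None)[0]
        forecasts = {"AE": ae_forecast,
                     "RW": y[t - 1],
                     "AR(1)": b[0] + b[1] * y[t - 1]}
        for name, f in forecasts.items():
            sq_err[name].append((y[t] - f) ** 2)
    # update adaptive expectations after observing y[t]
    ae_forecast = ae_forecast + gamma * (y[t] - ae_forecast)

for name, errs in sq_err.items():
    print(f"{name:>6} MSE: {np.mean(errs):.3f}")
```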

Here's the response from George Evans:

Comments on Maurizio Bovi, “Are the Representative Agent’s Beliefs Based on Efficient Econometric Models?”, by George Evans, University of Oregon: This is an interesting paper that has a lot of common ground with the adaptive learning literature. The techniques and a number of the arguments will be familiar to those of us who work in adaptive learning. The tenets of the adaptive learning approach can be summarized as follows: (1) Fully “rational expectations” (RE) are implausibly strong and implicitly ignore a coordination issue that arises because economic outcomes are affected by the expectations of firms and households (economic “agents”). (2) A more plausible view is that agents have bounded rationality with a degree of rationality comparable to economists themselves (the “cognitive consistency principle”). For example, agents’ expectations might be based on statistical models that are revised and updated over time. On this approach we avoid assuming that agents are smarter than economists, but we also recognize that agents will not go on forever making systematic errors. (3) We should recognize that economic agents, like economists, do not agree on a single forecasting model. The economy is complex. Therefore, agents are likely to use misspecified models and to have heterogeneous expectations.
The focus of the adaptive learning literature has changed over time. The early focus was on whether agents using statistical learning rules would or would not eventually converge to RE, while the main emphasis now is on the ways in which adaptive learning can generate new dynamics, e.g. through discounting of older data and/or switching between forecasting models over time. I use the term “adaptive learning” broadly, to include, for example, the dynamic predictor selection literature.
Bovi’s paper “Are the Representative Agent’s Beliefs Based on Efficient Econometric Models” argues that with respect to GDP growth in the UK the answer to his question is no because 1) there is a single efficient econometric model, which is a version of AE (adaptive expectations), and 2) agents might be expected therefore to have learned to adopt this optimal forecasting model over time. However the degree of heterogeneity of expectations has not fallen over time, and thus agents are failing to learn to use the best forecasting model.
From the adaptive learning perspective, Bovi’s first result is intriguing, and merits further investigation, but his approach will look very familiar to those of us who work in adaptive learning. And the second point will surprise few of us: the extent of heterogeneous expectations is well-known, as is the fact that expectations remain persistently heterogeneous, and there is considerable work within adaptive learning that models this heterogeneity.
More specifically:
1) Bovi’s “efficient” model uses AE with the adaptive expectations parameter gamma updated over time in a way that aims to minimize the squared forecast error. This is in fact a simple adaptive learning model, which was proposed and studied in Evans and Ramey, “Adaptive expectations, underparameterization and the Lucas critique”, Journal of Monetary Economics (2006). We there suggested that agents might want to use AE as an optimal choice for a parsimonious (underparameterized) forecasting rule, showed what would determine the optimal choice of gamma, and provided an adaptive learning algorithm that would allow agents to update their choice of gamma over time in order to track unknown structural change. (Our adaptive learning rule exploits the fact that AE can be viewed as the forecast that arises from an IMA(1,1) time-series model, and in our rule the MA parameter is estimated and updated recursively using a constant gain rule.)
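For readers unfamiliar with this class of learning rules, here is a stripped-down sketch of the idea. It is illustrative only and simpler than the Evans-Ramey recursion (which estimates the MA parameter of the IMA(1,1) representation): the adaptive-expectations forecast moves by a fraction gamma of the last forecast error, and gamma itself is nudged by a small constant-gain gradient step in the direction that would have reduced recent squared errors, which lets it track structural change.

```python
import numpy as np

# Illustrative sketch only: adaptive expectations (AE) with a gain gamma that
# is itself updated by a small constant-gain gradient step on squared forecast
# errors. Data are simulated growth rates with a break in the mean.
rng = np.random.default_rng(1)
T = 400
mean_growth = np.where(np.arange(T) < 200, 2.5, 1.0)   # structural break
y = mean_growth + rng.normal(0, 1.0, T)

gamma, nu = 0.2, 0.01        # initial AE gain; constant gain for updating it
forecast = y[0]              # AE forecast of next period's growth
sensitivity = 0.0            # d(forecast)/d(gamma), propagated recursively
sq_errors = []

for t in range(1, T):
    err = y[t] - forecast
    sq_errors.append(err ** 2)
    # nudge gamma to reduce squared error (gradient of err^2 w.r.t. gamma)
    gamma = float(np.clip(gamma + nu * err * sensitivity, 0.01, 0.99))
    # propagate the sensitivity, then update the AE forecast itself
    sensitivity = (1 - gamma) * sensitivity + err
    forecast += gamma * err

print("final gamma:", round(gamma, 3))
print("RMSE of AE forecasts:", round(float(np.mean(sq_errors)) ** 0.5, 3))
```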
2) At the same time, I am skeptical that economists will agree that there is a single best way to forecast GDP growth. For the US there is a lot of work by numerous researchers that strongly indicates that (i) choosing between univariate time-series models is controversial, i.e., there appears to be no single clearly best univariate forecasting model, and (ii) forecasting models for GDP growth should be multivariate and should include both current and lagged unemployment rates and the consumption-to-GDP ratio. Other forecasters have found a role for nonlinear (Markov-switching) dynamics. Thus I doubt that there will be agreement by economists on a single best forecasting model for GDP growth or other key macro variables. Hence we should expect households and firms also to entertain multiple forecasting models, and for different agents to use different models.
3) Even if there were a single forecasting model that clearly dominated, one would not expect homogeneity of expectations across agents or for heterogeneity to disappear over time. In Evans and Honkapohja, “Learning as a Rational Foundation for Macroeconomics and Finance”, forthcoming 2013 in R Frydman and E Phelps, Rethinking Expectations: The Way Forward for Macroeconomics, we point out that variations across agents in the extent of discounting and the frequency with which agents update parameter estimates, as well as the inclusion of idiosyncratic exogenous expectation shocks, will give rise to persistent heterogeneity. There are costs to forecasting, and some agents will have larger benefits from more accurate forecasts than other agents. For example, for some agents the forecast method advocated by Bovi will be too costly and an even simpler forecast will be adequate (e.g. a RW forecast that the coming year will be like last year, or a forecast based on mean growth over, say, the last five years).
4) When there are multiple models potentially in play, as there always are, the dynamic predictor selection approach initiated by Brock and Hommes implies that, because of varying costs of forecast methods and heterogeneous costs across agents, not all agents will want to use what appears to be the best-performing model. We therefore expect heterogeneous expectations at any moment in time. I do not regard this as a violation of the cognitive consistency principle – even economists will find that in some low-stakes personal decisions they use more boundedly rational forecast methods than they do in situations in which the stakes are high.
In conclusion, here is my two-sentence summary for Maurizio Bovi: Your paper will find an interested audience among those of us who work in this area. Welcome to the adaptive learning approach.
George Evans

Saturday, October 27, 2012

Inequality of Income and Consumption

Via an email from Lane Kenworthy, here's more research contradicting the claim made by Kevin Hassett and Aparna Mathur in the WSJ that consumption inequality has not increased (here's my earlier response summarizing additional work against that claim -- a claim that is really an attempt to blunt the call to use taxation to address the growing inequality problem):

Inequality of Income and Consumption: Measuring the Trends in Inequality from 1985-2010 for the Same Individuals, by Jonathan Fisher, David S. Johnson, and Timothy M. Smeeding: I. Introduction: Income and Consumption The 2012 Economic Report of the President stated: “The confluence of rising inequality and low economic mobility over the past three decades poses a real threat to the United States as a land of opportunity.” This view was also repeated in a speech by Council of Economic Advisers Chairman Alan Krueger (2012). President Obama suggested that inequality was “…the defining issue of our time...” As suggested by Isabel Sawhill (2012), 2011 was the year of inequality.

While there has been an increased interest in inequality, and especially the differences in trends for the top 1 percent vs. the other 99 percent, this increase in inequality is not a new issue. Twenty years ago, Sylvia Nasar (1992) highlighted similar differences in referring to a report by the Congressional Budget Office (CBO) and Paul Krugman introduced the “staircase vs. picket fence” analogy (see Krugman (1992)). He showed that the change in income gains between 1973 and 1993 followed a staircase pattern with income growth rates increasing with income quintiles, a pattern that has been highlighted by many recent studies, including the latest CBO (2011) report. He also showed that the income growth rates were similar for all quintiles from 1947-1973, creating a picket fence pattern across the quintiles.

Recent research shows that income inequality has increased over the past three decades (Burkhauser, et al. (2012), Smeeding and Thompson (2011), CBO (2011), Atkinson, Piketty and Saez (2011)). And most research suggests that this increase is mainly due to the larger increase in income at the very top of the distribution (see CBO (2011) and Saez (2012)). Researchers, however, dispute the extent of the increase. The extent of the increase depends on the resource measure used (income or consumption), the definition of the resource measure (e.g., market income or after-tax income), and the population of interest.

This paper examines the distribution of income and consumption in the US using data that obtain measures of both income and consumption from the same set of individuals, and it develops a set of inequality measures that show the increase in inequality during the past 25 years using the 1984-2010 Consumer Expenditure (CE) Survey.

The dispute over whether income or consumption should be preferred as a measure of economic well-being is discussed in the National Academy of Sciences (NAS) report on poverty measurement (Citro and Michael (1995), p. 36). The NAS report argues:

Conceptually, an income definition is more appropriate to the view that what matters is a family’s ability to attain a living standard above the poverty level by means of its own resources…. In contrast to an income definition, an expenditure (or consumption) definition is more appropriate to the view that what matters is someone’s actual standard of living, regardless of how it is attained. In practice the availability of high-quality data is often a prime determinant of whether an income- or expenditure-based family resource definition is used.

We agree with this statement and we would extend it to inequality measurement.[1] In cases where both measures are available, both income and consumption are important indicators for the level of and trend in economic well-being. As argued by Attanasio, Battistin, and Padula (2010) “...the joint consideration of income and consumption can be particularly informative.” Both resource measures provide useful information by themselves and in combination with one another. When measures of inequality and economic well-being show the same levels and trends using both income and consumption, then the conclusions on inequality are clear. When the levels and/or trends are different, the conclusions are less clear, but useful information and an avenue for future research can be provided.

We examine the trend in the distribution of these measures from 1985 to 2010. We show that while the level of and changes in inequality differ for each measure, inequality increases for all measures over this period and, as expected, consumption inequality is lower than income inequality. Differing from other recent research, we find that the trends in income and consumption inequality are similar between 1985 and 2006, and diverge during the first few years of the Great Recession (between 2006 and 2010). For the entire 25 year period we find that consumption inequality increases about two-thirds as much as income inequality. We show that the quality of the CE survey data is sufficient to examine both income and consumption inequality. Nevertheless, given the differences in the trends in inequality, using measures of both income and consumption provides useful information. In addition, we present the level of and trends in inequality of both the maximum and the minimum of income and consumption. The maximum and minimum are useful to adjust for life-cycle effects of income and consumption and for potential measurement error in income or consumption. The trends in the maximum and minimum are also useful when consumption and income alone provide different results concerning the measurement of economic well-being. ...

Friday, October 26, 2012

NBER Economic Fluctuations & Growth Research Meeting

I was supposed to be here today, but a long flight delay made that impossible. (I know better than to route through San Francisco in the fall -- it is often fogged in all morning -- but I took a chance and lost the bet.):

National Bureau of Economic Research
Economic Fluctuations & Growth Research Meeting
Paul Beaudry and John Leahy, Organizers
October 26, 2012 Federal Reserve Bank of New York
10th Floor Benjamin Strong Room
33 Liberty Street New York, NY
Program
Thursday, October 25:
6:30 pm Reception and Dinner Federal Reserve Bank of New York (enter at 44 Maiden Lane) 1st Floor Dining Room
Friday, October 26:
8:30 am Continental Breakfast
9:00 am Chang-Tai Hsieh (University of Chicago and NBER), Erik Hurst (University of Chicago and NBER), Charles Jones (Stanford University and NBER), and Peter Klenow (Stanford University and NBER), "The Allocation of Talent and U.S. Economic Growth." Discussant: Raquel Fernandez, New York University and NBER
10:00 am Break
10:30 am Fatih Guvenen (University of Minnesota and NBER), Serdar Ozkan (Federal Reserve Board), and Jae Song (Social Security Administration), "The Nature of Countercyclical Income Risk." Discussant: Jonathan Heathcote, Federal Reserve Bank of Minneapolis
11:30 am Loukas Karabarbounis (University of Chicago and NBER) and Brent Neiman (University of Chicago and NBER), "Declining Labor Shares and the Global Rise of Corporate Savings." Discussant: Robert Hall, Stanford University and NBER
12:30 pm Lunch
1:30 pm Stephanie Schmitt-Grohe (Columbia University and NBER) and Martin Uribe (Columbia University and NBER), "Prudential Policy for Peggers." Discussant: Gianluca Benigno, London School of Economics
2:30 pm Break
3:00 pm Eric Swanson (Federal Reserve Bank of San Francisco) and John Williams (Federal Reserve Bank of San Francisco), "Measuring the Effect of the Zero Lower Bound on Medium- and Longer-Term Interest Rates." Discussant: James Hamilton, University of California at San Diego and NBER
4:00 pm Alisdair McKay (Boston University) and Ricardo Reis (Columbia University and NBER), "The Role of Automatic Stabilizers in the U.S. Business Cycle." Discussant: Yuriy Gorodnichenko, University of California at Berkeley and NBER
5:00 pm Adjourn

Wednesday, October 24, 2012

The Myth that Growing Consumption Inequality is a Myth

Kevin Hassett and Aparna Mathur argue that consumption inequality has not increased along with income inequality. That's not what recent research says, but before getting to that, here's their argument:

Consumption and the Myths of Inequality, by Kevin Hassett and Aparna Mathur, Commentary, WSJ: In multiple campaign speeches over the past week, President Obama has emphasized a theme central to Democratic campaigns across the country this year: inequality. ... To be sure, there are studies of income inequality—most prominently by Thomas Piketty of the Paris School of Economics and Emmanuel Saez of the University of California at Berkeley—that report that the share of income of the wealthiest Americans has grown over the past few decades while the share of income at the bottom has not. The studies have problems. Some omit worker compensation in the form of benefits. And economist Alan Reynolds has noted that changes to U.S. tax rules cause more income to be reported at the top and less at the bottom. But even if the studies are accepted at face value, as a read on the evolution of inequality, they leave out too much.

Let me break in here. Here's what Piketty and Saez say about Reynolds's work:

In his December 14 article, “The Top 1% … of What?”, Alan Reynolds casts doubts on the interpretation of our results showing that the share of income going to the top 1% families has doubled from 8% in 1980 to 16% in 2004. In this response, we want to outline why his critiques do not invalidate our findings and contain serious misunderstandings on our academic work. ...

Back to Hassett and Mathur:

Another way to look at people's standard of living over time is by their consumption. Consumption is an even more relevant metric of overall welfare than pre-tax cash income, and it will be set by consumers with an eye on their lifetime incomes. Economists, including Dirk Krueger and Fabrizio Perri of the University of Pennsylvania, have begun to explore consumption patterns, which show a different picture than research on income.

Let me break in again and deal with the Krueger and Perri (2006) paper, which followed the related work by Slesnick (2001):

Has Consumption Inequality Mirrored Income Inequality?: This paper by Mark Aguiar and Mark Bils finds that "consumption inequality has closely tracked income inequality over the period 1980-2007":

Has Consumption Inequality Mirrored Income Inequality?, by Mark A. Aguiar and Mark Bils, NBER Working Paper No. 16807, February 2011: Abstract We revisit to what extent the increase in income inequality over the last 30 years has been mirrored by consumption inequality. We do so by constructing two alternative measures of consumption expenditure, using data from the Consumer Expenditure Survey (CE). We first use reports of active savings and after tax income to construct the measure of consumption implied by the budget constraint. We find that the consumption inequality implied by savings behavior largely tracks income inequality between 1980 and 2007. Second, we use a demand system to correct for systematic measurement error in the CE's expenditure data. ...This second exercise indicates that consumption inequality has closely tracked income inequality over the period 1980-2007. Both of our measures show a significantly greater increase in consumption inequality than what is obtained from the CE's total household expenditure data directly.
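Roughly speaking, the first of those two measures backs consumption out of the household budget constraint (a simplification of what the paper actually implements, which handles asset flows more carefully):

$$ c_t \;\approx\; y_t^{\text{after-tax}} \;-\; s_t^{\text{active}}, $$

where $s_t^{\text{active}}$ is reported active saving (deposits into accounts, debt repayment, and the like). Because this implied measure is built from income and saving reports rather than from the CE's directly reported expenditure totals, the authors use it as an independent check on those totals.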

Why is this important? (see also "Is Consumption the Grail for Inequality Skeptics?"):

An influential paper by Krueger and Perri (2006), building on related work by Slesnick (2001), uses the CE to argue that consumption inequality has not kept pace with income inequality.

And these results have been used by some -- e.g. those who fear corrective action such as an increase in the progressivity of taxes -- to argue that the inequality problem is not as large as figures on income inequality alone suggest. But the bottom line of this paper is that:

The ... increase in consumption inequality has been large and of a similar magnitude as the observed change in income inequality.

So they are citing what is now dated work. They either don't know about the more recent work, or simply chose to ignore it because it doesn't say what they need it to say.

Okay, back to Hassett and Mathur once again. They go on to cite their own work -- more on that below. One thing to note, however, is that the recent research in this area says the data they use must be corrected for measurement error or you are likely to find the (erroneous) results they find. As far as I can tell, the data are not corrected:

Our recent study, "A New Measure of Consumption Inequality," found that the consumption gap across income groups has remained remarkably stable over time. ...
While this stability is something to applaud, surely more important are the real gains in consumption by income groups over the past decade. From 2000 to 2010, consumption has climbed 14% for individuals in the bottom fifth of households, 6% for individuals in the middle fifth, and 14.3% for individuals in the top fifth when we account for changes in U.S. population and the size of households. This despite the dire economy at the end of the decade.

Should we trust this research? First of all, this is Kevin Hassett. How much do you trust the work once you know that? Second, it's on the WSJ editorial page. How much does that reduce your trust? I'd hope the answer is "quite a bit." Third, big red flags go up when researchers cherry-pick start and/or end dates. Fourth, as already noted, recent research shows that the no-growth-in-consumption-inequality result is due to measurement error in the CE survey data. When the data are corrected, consumption inequality mirrors income inequality. They don't say a word about correcting the data.

Next, we get the "but they have cell phones!" argument:

Yet the access of low-income Americans—those earning less than $20,000 in real 2009 dollars—to devices that are part of the "good life" has increased. The percentage of low-income households with a computer rose... Appliances? The percentage of low-income homes with air-conditioning equipment..., dishwashers..., a washing machine..., a clothes dryer..., [and] microwave ovens... grew... Fully 75.5% of low-income Americans now have a cell phone, and over a quarter of those have access to the Internet through their phones.

Before turning to their conclusion, let me note more new research in this area from a post earlier this year, But They Have TVs and Cell Phones!, emphasizing the measurement error problem:

Consumption Inequality Has Risen About As Fast As Income Inequality, by Matthew Yglesias: Going back a few years one thing you used to hear about America's high and rising level of income inequality is that it wasn't so bad because there wasn't nearly as much inequality of consumption. This story started to fall apart when it turned out that ever-higher levels of private indebtedness were unsustainable (nobody could have predicted...) but Orazio Attanasio, Erik Hurst, and Luigi Pistaferri report in a new NBER working paper "The Evolution of Income, Consumption, and Leisure Inequality in The US, 1980-2010" that the apparently modest increase in consumption inequality is actually a statistical error.
They say that the Consumer Expenditure Survey data from which the old-school finding is drawn is plagued by non-classical measurement error and adopt four different approaches to measuring consumption inequality that shouldn't be hit by the same problem. All four alternatives point in the same direction: "consumption inequality within the U.S. between 1980 and 2010 has increased by nearly the same amount as income inequality."

Here's Hassett and Mathur's ending:

It is true that the growth of the safety net has contributed to massive government deficits—and a larger government that likely undermines economic growth and job creation. It is an open question whether the nation will be able to reshape the net in order to sustain it, but reshape it we must. ...

After arguing (wrongly) that consumption at the bottom has kept pace despite rising income inequality, they say it has done so only because of deficit-financed safety net spending -- which, they add, is not sustainable. So suck it up, middle class America: consumption inequality has increased despite the claims of denialists like Hassett, and if they get their way and the social safety net is cut back, it will only get worse.

For years, Hassett and company denied that income inequality was growing (notice the attempt to do just that in the first quoted paragraph, by citing Alan Reynolds's discredited work); then, when the evidence made it absolutely clear they were wrong (surprise!), they switched to consumption inequality. Recent evidence says they're wrong about that too.

Friday, October 12, 2012

'Some Unpleasant Properties of Log-Linearized Solutions when the Nominal Interest Rate is Zero'

[This one is wonkish. It's (I think) one of the more important papers from the St. Louis Fed conference.]

One thing that doesn't get enough attention in DSGE modeling, at least in my opinion, is the set of constraints and implicit assumptions imposed when the theoretical model is log-linearized. This paper by Tony Braun and Yuichiro Waki helps to fill that void by comparing a true, fully nonlinear economy to its log-linearized counterpart and showing that the results of the two can be quite different when the economy is at the zero bound. For example, government purchase multipliers that are greater than two in the log-linearized version are smaller -- usually near one -- in the true model (thus fiscal policy remains effective, but may need to be more aggressive than the log-linear model would imply). Other results change as well, and there are sign changes in some cases, leading the authors to conclude that "we believe that the safest way to proceed is to entirely avoid the common practice of log-linearizing the model around a stable price level when analyzing liquidity traps."
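As a toy illustration of why this matters (a sketch only, not the Braun-Waki model): in a Rotemberg-style setup, the resource constraint charges output a cost proportional to squared inflation, a term that is second order and therefore vanishes entirely when the model is log-linearized around zero inflation. With a purely illustrative adjustment-cost parameter, the gap between the true and approximated constraint is negligible for small inflation movements but large for the kind of deflations seen in a deep slump:

```python
# A toy sketch (not the Braun-Waki model): compare consumption implied by a
# Rotemberg-style resource constraint, y = c + (phi/2) * pi^2 * y, with the
# log-linearized version, which drops the squared-inflation cost term. The
# adjustment-cost parameter phi is purely illustrative.
phi = 100.0

def consumption_exact(y, pi):
    """Consumption from the nonlinear resource constraint."""
    return y * (1.0 - 0.5 * phi * pi ** 2)

def consumption_loglinear(y, pi):
    """First-order approximation around pi = 0: the cost term vanishes."""
    return y

for pi in (0.001, 0.01, 0.05):              # quarterly inflation deviations
    exact = consumption_exact(1.0, pi)
    approx = consumption_loglinear(1.0, pi)
    print(f"pi = {pi:5}:  exact c = {exact:.4f},  log-linear c = {approx:.4f},"
          f"  error = {approx - exact:.4f}")
```

The paper's argument is of course much richer (existence, uniqueness, and the sign and size of fiscal effects can all change), but the dropped second-order resource-cost term it emphasizes is a good place to see why small and large shocks behave so differently.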

Here's part of the introduction and the conclusion to the paper:

Some Unpleasant Properties of Log-Linearized Solutions when the Nominal Interest Rate is Zero, by Tony Braun and Yuichiro Waki: Abstract Does fiscal policy have qualitatively different effects on the economy in a liquidity trap? We analyze a nonlinear stochastic New Keynesian model and compare the true and log-linearized equilibria. Using the log-linearized equilibrium conditions the answer to the above question is yes. However, for the true nonlinear model the answer is no. For a broad range of empirically relevant parameterizations labor falls in response to a tax cut in the log-linearized economy but rises in the true economy. While the government purchase multiplier is above two in the log-linearized economy it is about one in the true economy.
1 Introduction The recent experiences of Japan, the United States, and Europe with zero/near-zero nominal interest rates have raised new questions about the conduct of monetary and fiscal policy in a liquidity trap. A large and growing body of new research has emerged that provides answers using New Keynesian (NK) frameworks that explicitly model the zero bound on the nominal interest rate. One conclusion that has emerged is that fiscal policy has different effects on the economy when the nominal interest rate is zero. Eggertsson (2011) finds that hours worked fall in response to a labor tax cut when the nominal interest rate is zero, a property that is referred to as the “paradox of toil,” and Christiano, Eichenbaum, and Rebelo (2011), Woodford (2011) and Erceg and Lindé (2010) find that the size of the government purchase multiplier is substantially larger than one when the nominal interest rate is zero.
These and other results ( see e.g. Del Negro, Eggertsson, Ferrero, and Kiyotaki (2010), Bodenstein, Erceg, and Guerrieri (2009), Eggertsson and Krugman (2010)) have been derived in setups that respect the nonlinearity in the Taylor rule but loglinearize the remaining equilibrium conditions about a steady state with a stable price level. Log-linearized NK models require large shocks to generate a binding zero lower bound for the nominal interest rate and the shocks must be even larger if these models are to reproduce the measured declines in output and inflation that occurred during the Great Depression or the Great Recession of 2007-2009.[1] Log-linearizations are local solutions that only work within a given radius of the point where the approximation is taken. Outside of this radius these solutions break down (See e.g. Den Haan and Rendahl (2009)). The objective of this paper is to document that such a breakdown can occur when analyzing the zero bound.
We study the properties of a nonlinear stochastic NK model when the nominal interest rate is constrained at its zero lower bound. Our tractable framework allows us to provide a partial analytic characterization of equilibrium and to numerically compute all equilibria when the zero interest state is persistent. There are no approximations needed when computing equilibria and our numerical solutions are accurate up to the precision of the computer. A comparison with the log-linearized equilibrium identifies a severe breakdown of the log-linearized approximate solution. This breakdown occurs when using parameterizations of the model that reproduce the U.S. Great Depression and the U.S. Great Recession.
Conditions for existence and uniqueness of equilibrium based on the log-linearized equilibrium conditions are incorrect and offer little or no guidance for existence and uniqueness of equilibrium in the true economy. The characterization of equilibrium is also incorrect.
These three unpleasant properties of the log-linearized solution have the implication that relying on it to make inferences about the properties of fiscal policy in a liquidity trap can be highly misleading. Empirically relevant parameterization/shock combinations that yield the paradox of toil in the log-linearized economy produce orthodox responses of hours worked in the true economy. The same parameterization/shock combinations that yield large government purchases multipliers in excess of two in the log-linearized economy, produce government purchase multipliers as low as 1.09 in the nonlinear economy. Indeed, we find that the most plausible parameterizations of the nonlinear model have the property that there is no paradox of toil and that the government purchase multiplier is close to one.
We make these points using a stochastic NK model that is similar to specifications considered in Eggertsson (2011) and Woodford (2011). The Taylor rule respects the zero lower bound of the nominal interest rate, and a preference discount factor shock that follows a two state Markov chain produces a state where the interest rate is zero. We assume Rotemberg (1996) price adjustment costs, instead of Calvo price setting. When log-linearized, this assumption is innocuous - the equilibrium conditions for our model are identical to those in Eggertsson (2011) and Woodford (2011), with a suitable choice of the price adjustment cost parameter. Moreover, the nonlinear economy doesn’t have any endogenous state variables, and the equilibrium conditions for hours and inflation can be reduced to two nonlinear equations in these two variables when the zero bound is binding.[2]
These two nonlinear equations are easy to solve and are the nonlinear analogues of what Eggertsson (2011) and Eggertsson and Krugman (2010) refer to as “aggregate demand” (AD) and “aggregate supply” (AS) schedules. This makes it possible for us to identify and relate the sources of the approximation errors associated with using log-linearizations to the shapes and slopes of these curves, and to also provide graphical intuition for the qualitative differences between the log-linear and nonlinear economies.
Our analysis proceeds in the following way. We first provide a complete characterization of the set of time invariant Markov zero bound equilibria in the log-linearized economy. Then we go on to characterize equilibrium of the nonlinear economy. Finally, we compare the two economies and document the nature and source of the breakdowns associated with using log-linearized equilibrium conditions. An important distinction between the nonlinear and log-linearized economy relates to the resource cost of price adjustment. This cost changes endogenously as inflation changes in the nonlinear model and modeling this cost has significant consequences for the model’s properties in the zero bound state. In the nonlinear model a labor tax cut can increase hours worked and decrease inflation when the interest rate is zero. No equilibrium of the log-linearized model has this property. We show that this and other differences in the properties of the two models is precisely due to the fact that the resource cost of price adjustment is absent from the resource constraint of the log-linearized model.[3] ...
...
5 Concluding remarks In this paper we have documented that it can be very misleading to rely on the log-linearized economy to make inferences about existence of an equilibrium, uniqueness of equilibrium or to characterize the local dynamics of equilibrium. We have illustrated that these problems arise in empirically relevant parameterizations of the model that have been chosen to match observations from the Great Depression and Great Recession.
We have also documented the response of the economy to fiscal shocks in calibrated versions of our nonlinear model. We found that the paradox of toil is not a robust property of the nonlinear model and that it is quantitatively small even when it occurs. Similarly, the evidence presented here suggests that the government purchase GDP multiplier is not much above one in our nonlinear economy.
Although we encountered situations where the log-linearized solution worked reasonably well and the model exhibited the paradox of toil and a government purchase multiplier above one, the magnitude of these effects was quantitatively small. This result was also very tenuous. There is no simple characterization of when the log-linearization works well. Breakdowns can occur in regions of the parameter space that are very close to ones where the log-linear solution works. In fact, it is hard to draw any conclusions about when one can safely rely on log-linearized solutions in this setting without also solving the nonlinear model. For these reasons we believe that the safest way to proceed is to entirely avoid the common practice of log-linearizing the model around a stable price level when analyzing liquidity traps.
This raises a question. How should one proceed with solution and estimation of medium or large scale NK models with multiple shocks and endogenous state variables when considering episodes with zero nominal interest rates? One way forward is proposed in work by Adjemian and Juillard (2010) and Braun and Körber (2011). These papers solve NK models using extended path algorithms.
We conclude by briefly discussing some extensions of our analysis. In this paper we assumed that the discount factor shock followed a time-homogeneous two state Markov chain with the no-shock state being absorbing. In our current work we relax this assumption and consider general Markov switching stochastic equilibria in which there are repeated swings between episodes with a positive interest rate and episodes with a zero interest rate. We are also interested in understanding the properties of optimal monetary policy in the nonlinear model. Eggertsson and Woodford (2003), Jung, Teranishi, and Watanabe (2005), Adam and Billi (2006), Nakov (2008), and Werning (2011) consider optimal monetary policy problems subject to a non-negativity constraint on the nominal interest rate, using implementability conditions derived from log-linearized equilibrium conditions. The results documented here suggest that the properties of an optimal monetary policy could be different if one uses the nonlinear implementability conditions instead.
[1] Eggertsson (2011) requires a 5.47% annualized shock to the preference discount factor in order to account for the large output and inflation declines that occurred in the Great Depression. Coenen, Orphanides, and Wieland (2004) estimate an NK model on U.S. data from 1980-1999 and find that only very large shocks produce a binding zero nominal interest rate.
[2] Under Calvo price setting, in the nonlinear economy a particular moment of the price distribution is an endogenous state variable and it is no longer possible to compute an exact solution to the equilibrium.
[3] This distinction between the log-linearized and nonlinear resource constraint is not specific to our model of adjustment costs but also arises under Calvo price adjustment (see e.g. Braun and Waki (2010)).

Qualitative Easing: How it Works and Why it Matters

From the Fed conference in St. Louis, Roger Farmer makes what I think is a useful distinction between quantitative easing and qualitative easing (the distinction, first made by Buiter in 2008, is useful independent of his paper). In the paper he argues that it's the composition of the central bank's balance sheet, not its size, that matters: in the model, people cannot participate in financial markets that open before they are born, so participation is incomplete, and qualitative easing works by completing markets and having the Fed engage in Pareto-improving trades:

Qualitative Easing: How it Works and Why it Matters, by Roger E.A. Farmer: Abstract This paper is about the effectiveness of qualitative easing: a government policy that is designed to mitigate risk through central bank purchases of privately held risky assets and their replacement by government debt, with a return that is guaranteed by the taxpayer. Policies of this kind have recently been carried out by national central banks, backed by implicit guarantees from national treasuries. I construct a general equilibrium model where agents have rational expectations and there is a complete set of financial securities, but where agents are unable to participate in financial markets that open before they are born. I show that a change in the asset composition of the central bank’s balance sheet will change equilibrium asset prices. Further, I prove that a policy in which the central bank stabilizes fluctuations in the stock market is Pareto improving and is costless to implement.
1 Introduction Central banks throughout the world have recently engaged in two kinds of unconventional monetary policies: quantitative easing (QE), which is “an increase in the size of the balance sheet of the central bank through an increase in its monetary liabilities”, and qualitative easing (QuaE), which is “a shift in the composition of the assets of the central bank towards less liquid and riskier assets, holding constant the size of the balance sheet.”[1]
I have made the case, in a recent series of books and articles, (Farmer, 2006, 2010a,b,c,d, 2012, 2013), that qualitative easing can stabilize economic activity and that a policy of this kind will increase economic welfare. In this paper I provide an economic model that shows how qualitative easing works and why it matters.
Because qualitative easing is conducted by the central bank, it is often classified as a monetary policy. But because it adds risk to the public balance sheet that is ultimately borne by the taxpayer, QuaE is better thought of as a fiscal or quasi-fiscal policy (Buiter, 2010). This distinction is important because, in order to be effective, QuaE necessarily redistributes resources from one group of agents to another.
The misclassification of QuaE as monetary policy has led to considerable confusion over its effectiveness and a misunderstanding of the channel by which it operates. For example, in an influential piece that was presented at the 2012 Jackson Hole Conference, Woodford (2012) made the claim that QuaE is unlikely to be effective and, to the extent that it does stimulate economic activity, that stimulus must come through the impact of QuaE on the expectations of financial market participants of future Fed policy actions.
The claim that QuaE is ineffective is based on the assumption that it has no effect on the distribution of resources, either between borrowers and lenders in the current financial markets, or between current market participants and those yet to be born. I will argue here that that assumption is not a good characterization of the way that QuaE operates, and that QuaE is effective precisely because it alters the distribution of resources by effecting Pareto improving trades that agents are unable to carry out for themselves.
I make the case for qualitative easing by constructing a simple general equilibrium model where agents are rational, expectations are rational and the financial markets are complete. My work differs from most conventional models of financial markets because I make the not unreasonable assumption that agents cannot participate in financial markets that open before they are born. In this environment, I show that qualitative easing changes asset prices and that a policy where the central bank uses QuaE to stabilize the value of the stock market is Pareto improving and is costless to implement.
My argument builds upon an important theoretical insight due to Cass and Shell (1983), who distinguish between intrinsic uncertainty and extrinsic uncertainty. Intrinsic uncertainty is a random variable that influences the fundamentals of the economy: preferences, technologies and endowments. Extrinsic uncertainty is anything that does not. Cass and Shell refer to extrinsic uncertainty as sunspots.[2]
In this paper, I prove four propositions. First, I show that employment, consumption and the real wage are a function of the amount of outstanding private debt. Second, I prove that the existence of complete insurance markets is insufficient to prevent the existence of equilibria where employment, consumption and the real wage differ in different states, even when all uncertainty is extrinsic. Third, I introduce a central bank and I show that a central bank swap of safe for risky assets will change the relative price of debt and equity. Finally, I prove that a policy of stabilizing the value of the stock market is welfare improving and that it does not involve a cost to the taxpayer in any state of the world.
...

10 Conclusion An asset price stabilization policy is now under discussion as a result of the failure of traditional monetary policy to move the economy out of the current recession. Most of the academic literature sees the purchase of risky assets by the central bank as an alternative form of monetary policy. In this view, if a central bank asset policy works at all, it works by signaling the intent of future policy makers to keep interest rates low for a longer period than would normally be warranted, once the economy begins to recover. In my view, that argument is incorrect.

Central bank asset purchases have little if anything to do with traditional monetary policy. In some models, asset swaps by the central bank are effective because the central bank has the monopoly power to print money. Although that channel may play a secondary role when the interest rate is at the zero lower bound (Farmer, 2013), it is not the primary channel through which qualitative easing affects asset prices. Central bank open market operations in risky assets are effective because the government has the ability to complete the financial markets by standing in for agents who are unable to transact before they are born, and the policy would be effective even in a world where money was not needed as a medium of exchange.

I have made the case, in a recent series of books and articles (Farmer, 2006, 2010a,b,c,d, 2012, 2013), that qualitative easing matters. In this paper I have provided an economic model that shows why it matters.

[1] The quote is from Willem Buiter (2008) who proposed this very useful taxonomy in a piece on his ‘Maverecon’ Financial Times blog.
[2] This is quite different from the original usage of the term by Jevons (1878) who developed a theory of the business cycle, driven by fluctuations in agricultural conditions that were ultimately caused by physical sunspot activity.

Thursday, October 11, 2012

'Job Polarization and Jobless Recoveries'

Interesting paper (it's being presented as I type this):

The Trend is the Cycle: Job Polarization and Jobless Recoveries, by Nir Jaimovich and Henry E. Siu, NBER: Abstract Job polarization refers to the recent disappearance of employment in occupations in the middle of the skill distribution. Jobless recoveries refer to the slow rebound in aggregate employment following recent recessions, despite recoveries in aggregate output. We show how these two phenomena are related. First, job polarization is not a gradual process; essentially all of the job loss in middle-skill occupations occurs in economic downturns. Second, jobless recoveries in the aggregate are accounted for by jobless recoveries in the middle-skill occupations that are disappearing.
1 Introduction In the past 30 years, the US labor market has seen the emergence of two new phenomena: "job polarization" and "jobless recoveries." Job polarization refers to the increasing concentration of employment in the highest- and lowest-wage occupations, as job opportunities in middle-skill occupations disappear. Jobless recoveries refer to periods following recessions in which rebounds in aggregate output are accompanied by much slower recoveries in aggregate employment. We argue that these two phenomena are related.
Consider first the phenomenon of job polarization. Acemoglu (1999), Autor et al. (2006), Goos and Manning (2007), and Goos et al. (2009) (among others) document that, since the 1980s, employment is becoming increasingly concentrated at the tails of the occupational skill distribution. This hollowing out of the middle has been linked to the disappearance of jobs focused on "routine" tasks -- those activities that can be performed by following a well-defined set of procedures. Autor et al. (2003) and the subsequent literature demonstrate that job polarization is due to progress in technologies that substitute for labor in routine tasks.[1]
In this same time period, Gordon and Baily (1993), Groshen and Potter (2003), Bernanke (2003), and Bernanke (2009) (among others) discuss the emergence of jobless recoveries. In each of the past three recessions, aggregate employment continued to decline for years following the turning point in aggregate income and output. No consensus has yet emerged regarding the source of these jobless recoveries.
In this paper, we demonstrate that the two phenomena are connected to each other. We make two related claims. First, job polarization is not simply a gradual phenomenon: the loss of middle-skill, routine jobs is concentrated in economic downturns. Specifically, 92% of the job loss in these occupations since the mid-1980s occurs within a 12 month window of NBER dated recessions (that have all been characterized by jobless recoveries). In this sense, the job polarization "trend" is a business "cycle" phenomenon. This contrasts with the existing literature, in which job polarization is oftentimes depicted as a gradual phenomenon, though a number of researchers have noted that this process has been accelerated by the Great Recession (see Autor (2010) and Brynjolfsson and McAfee (2011)). Our first point is that routine employment loss happens almost entirely in recessions.
Our second point is that job polarization accounts for jobless recoveries. This argument is based on three facts. First, employment in the routine occupations identified by Autor et al. (2003) and others accounts for a significant fraction of aggregate employment; averaged over the jobless recovery era, these jobs account for more than 50% of total employment. Second, essentially all of the contraction in aggregate employment during NBER dated recessions can be attributed to recessions in these middle-skill, routine occupations. Third, jobless recoveries are observed only in these disappearing, middle-skill jobs. The high- and low-skill occupations to which employment is polarizing either do not experience contractions, or if they do, rebound soon after the turning point in aggregate output. Hence, jobless recoveries can be traced to the disappearance of routine occupations in recessions. Finally, it is important to note that jobless recoveries were not observed in routine occupations (nor in aggregate employment) prior to the era of job polarization. ...
[1] See also Firpo et al. (2011), Goos et al. (2011), and the references therein regarding the role of outsourcing and offshoring in job polarization
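As a rough illustration of the kind of accounting behind the paper's 92% figure, the sketch below tallies what share of month-to-month declines in routine employment falls within 12 months of an NBER recession. The file name, column name, and recession dates are assumptions made for illustration; this is not the authors' data, code, or exact definition.

```python
# Illustrative sketch (hypothetical file and column names, not the authors' data):
# what share of routine-occupation job losses occurs within 12 months of a recession?
import pandas as pd

# Monthly routine-occupation employment, e.g. built from CPS data (assumed file)
emp = pd.read_csv("routine_employment.csv", parse_dates=["date"], index_col="date")["routine_emp"]

# NBER peak/trough dates covering the sample
recessions = [("1990-07-01", "1991-03-01"),
              ("2001-03-01", "2001-11-01"),
              ("2007-12-01", "2009-06-01")]

# Flag months falling within a 12-month window around any recession
near_recession = pd.Series(False, index=emp.index)
for peak, trough in recessions:
    lo = pd.Timestamp(peak) - pd.DateOffset(months=12)
    hi = pd.Timestamp(trough) + pd.DateOffset(months=12)
    near_recession |= (emp.index >= lo) & (emp.index <= hi)

# Month-to-month declines, split into those near recessions versus all declines
d_emp = emp.diff()
losses_total = d_emp[d_emp < 0].sum()
losses_near = d_emp[(d_emp < 0) & near_recession].sum()
print(f"share of routine job losses near recessions: {losses_near / losses_total:.1%}")
```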

Here are a few graphs showing routine versus non-routine employment changes. The point is that in the period of increased job polarization, most of the job losses have occurred during recessions (the first graph is non-routine cognitive, the second is non-routine manual, and the third is routine). The third graph shows this best: after 1990, job losses are not recovered after the recession ends as they were in earlier years.

[Three figures: employment changes in non-routine cognitive, non-routine manual, and routine occupations.]
Notes: Data from the Bureau of Labor Statistics, Current Population Survey. See Appendix A for details. NR COG = non-routine cognitive; NR MAN = non-routine manual; R = routine.

Interestingly, the paper argues the explanation for jobless recoveries is not an education story:

The share of low educated workers in the labor force (i.e., those with high school diplomas or less) has declined in the last three decades, and these workers exhibit greater business cycle sensitivity than those with higher education. It is thus reasonable to conjecture that the terms "routine" and "low education" are interchangeable. In what follows, we show that this is not the case.

And it's not a manufacturing story:

we first demonstrate that job loss in manufacturing accounts for only a fraction of job polarization. Secondly, we show that the jobless recoveries experienced in the past 30 years cannot be explained by jobless recoveries in the manufacturing sector.

So what story is it? The answer is in the conclusion to the paper:

In the last 30 years the US labor market has been characterized by job polarization and jobless recoveries. In this paper we demonstrate how these are related. We first show that the loss of middle-skill, routine jobs is concentrated in economic downturns. In this sense, the job polarization trend is a business cycle phenomenon. Second, we show that job polarization accounts for jobless recoveries. This argument is based on the fact that almost all of the contraction in aggregate employment during recessions can be attributed to job losses in middle-skill, routine occupations (that account for a large fraction of total employment), and that jobless recoveries are observed only in these disappearing routine jobs since job polarization began. We then propose a simple search-and-matching model of the labor market with occupational choice to rationalize these facts. We show how a trend in routine-biased technological change can lead to job polarization that is concentrated in downturns, and recoveries from these recessions that are jobless.

That is, in recessions, the job separation rate isn't much different than in the past. But the job finding rate is much lower. Thus, the story is that recessions generate job separations, and "In the recession, job separations were concentrated among the middle-skill, routine workers. The recovery in aggregate employment then depends on the post-recession job finding rate of these workers now searching," and this finding rate is low (and when jobs are found, the outcome is polarizing).

Have Blog, Will Travel: 37th Annual Federal Reserve Bank of St. Louis Fall Conference

I am here (later) today and tomorrow:

37th Annual Federal Reserve Bank of St. Louis Fall Conference, October 11-12, 2012

All conference events will take place at the Federal Reserve Bank of St. Louis Gateway Conference Center, Sixth Floor

Thursday, October 11, 2012

12:00-12:30 P.M. Light lunch

12:30-12:45 P.M. Introductory remarks by Christopher Waller

Session I

12:45-2:45 P.M. "Some Unpleasant Properties of Log-Linearized Solutions when the Nominal Interest Rate is Zero" Presenter: Tony Braun, FRB Atlanta Coauthors:  Yuichiro Waki, University of Queensland and Lena Koerber, London School of Economics

"The Trend is the Cycle: Job Polarization and Jobless Recoveries" Presenter: Nir Jaimovich, Duke University Coauthor: Henry Siu, University of British Columbia

2:45-3:00 P.M. Coffee Break

Session II

3:00-5:00 P.M. "Liquidity, Assets and Business Cycles" Presenter: Shouyong Shi, University of Toronto

"Spatial Equilibrium with Unemployment and Wage Bargaining: Theory and Evidence" Presenter: Paul Beaudry, University of British Columbia Coauthors: David Green, University of British Columbia and Benjamin Sand, York University

5:00-5:15 P.M. Coffee Break

Session III

5:15-6:15 P.M. "On the Social Usefulness of Fractional Reserve Banking" Presenter:  Chris Phelan, University of Minnesota and FRB Minneapolis Coauthor:  V.V. Chari, University of Minnesota and FRB Minneapolis

6:15-7:00 P.M. Reception

7:00 P.M. Dinner with speech by James Bullard

Friday, October 12, 2012

8:30-9:00 A.M. Continental Breakfast

Session I

9:00-11:00 A.M "Crisis and Commitment: Inflation Credibility and the Vulnerability to Sovereign Debt Crises" Presenter: Mark Aguiar Princeton University Coauthors: Manuel Amador, Stanford, Gita Gopinath, Harvard, and Emmanuel Farhi, Harvard

"The Market for OTC Credit Derivative" Presenter:  Pierre-Olivier Weill, UCLA Coauthors: Andy Atkeson, UCLA and Andrea Eisfeldt, UCLA

11:00-11:15 A.M. Coffee Break

Session II

11:15-12:15 P.M. "Overborrowing, Financial Crises and 'Macro-prudential' Policy" Presenter: Enrique Mendoza, University of Maryland Coauthor: Javier Bianchi, University of Wisconsin

12:15-1:30 P.M. Lunch

Session III

1:30-2:30 P.M. "Qualitative Easing: How it Works and Why it Matters"
Presenter: Roger Farmer, UCLA

2:30-2:45 P.M. Coffee Break

Session IV

2:45-3:45 P.M. "Costly Labor Adjustment: Effects of China's Employment Regulations" Presenter: Russ Cooper, European University Institute Coauthors: Guan Gong, Shanghai University and Ping Yan, Peking University

3:45 P.M. Adjourn

Monday, October 08, 2012

'Trimmed-Mean Inflation Statistics'

Preliminary evidence from Brent Meyer and Guhan Venkatu of the Cleveland Fed shows that the median CPI is a robust measure of underlying inflation trends:

Trimmed-Mean Inflation Statistics: Just Hit the One in the Middle, by Brent Meyer and Guhan Venkatu: This paper reinvestigates the performance of trimmed-mean inflation measures some 20 years since their inception, asking whether there is a particular trimmed-mean measure that dominates the median CPI. Unlike previous research, we evaluate the performance of symmetric and asymmetric trimmed-means using a well-known equality of prediction test. We find that there is a large swath of trimmed-means that have statistically indistinguishable performance. Also, while the swath of statistically similar trims changes slightly over different sample periods, it always includes the median CPI, an extreme trim that holds conceptual and computational advantages. We conclude with a simple forecasting exercise that highlights the advantage of the median CPI relative to other standard inflation measures.

In the introduction, they add:

In general, we find aggressive trimming (close to the median) that is not too asymmetric appears to deliver the best forecasts over the time periods we examine. However, these “optimal” trims vary slightly across periods and are never statistically superior to the median CPI. Given that the median CPI is conceptually easy for the public to understand and is easier to reproduce, we conclude that it is arguably a more useful measure of underlying inflation for forecasters and policymakers alike.

And they conclude the paper with:

While we originally set out to find a single superior trimmed-mean measure, we could not conclude as such. In fact, it appears that a large swath of candidate trims hold statistically indistinguishable forecasting ability. That said, in general, the best performing trims over a variety of time periods appear to be somewhat aggressive and almost always include symmetric trims. Of this set, the median CPI stands out, not for any superior forecasting performance, but because of its conceptual and computational simplicity—when in doubt, hit the one in the middle.
Interestingly, and contrary to Dolmas (2005), we were unable to find any convincing evidence that would lead us to choose an asymmetric trim. While his results are based on components of the PCE chain-price index, a large part (roughly 75% of the initial release) of the components comprising the PCE price index are directly imported from the CPI. It could be the case that the imputed PCE components are creating the discrepancy. The trimmed-mean PCE series currently produced by the Federal Reserve Bank of Dallas trims 24 percent from the lower tail and 31 percent from the upper tail of the PCE price-change distribution. This particular trim is relatively aggressive and is not overly asymmetric, two features consistent with the best performing trims in our tests.
Finally, even though we failed to best the median CPI in our first set of tests, it remains the case that the median CPI is generally a better forecaster of future inflation over policy-relevant time horizons (i.e. inflation over the next 2-3 years) than the headline and core CPI.

One note. They are not saying that trimmed or median statistics are the best way to measure the cost of living for a household. They are asking what variable has the most predictive power for future (untrimmed, non-core, i.e. headline) inflation ("specifically the annualized percent change in the headline CPI over the next 36 months," though the results for 24 months are similar). That turns out, in general, to be the median CPI.
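For readers who want to see the mechanics, here is a small sketch of how a symmetric trimmed mean and a weighted median can be computed from one month's cross-section of component price changes. The numbers are made up and the functions are simplified stand-ins, not the BLS or Cleveland Fed procedures.

```python
# Illustrative weighted trimmed mean and weighted median for one month of
# component price changes (made-up numbers, not BLS or Cleveland Fed data).
import numpy as np

price_changes = np.array([-3.1, -0.4, 0.1, 0.2, 0.3, 0.4, 0.6, 1.2, 4.8])  # annualized % changes
weights       = np.array([ 4.0,  8.0, 12., 15., 20., 15., 12.,  8.,  6.0])  # expenditure weights
weights = weights / weights.sum()

def weighted_trimmed_mean(x, w, trim_lower, trim_upper):
    """Drop trim_lower/trim_upper shares of the weight from each tail, average the rest."""
    order = np.argsort(x)
    x, w = x[order], w[order]
    cum = np.cumsum(w) - 0.5 * w                    # weight midpoint of each component
    keep = (cum >= trim_lower) & (cum <= 1.0 - trim_upper)
    return np.average(x[keep], weights=w[keep])

def weighted_median(x, w):
    """Price change of the component at which cumulative weight first reaches 50%."""
    order = np.argsort(x)
    x, w = x[order], w[order]
    return x[np.searchsorted(np.cumsum(w), 0.5)]

print("16% symmetric trim:", round(weighted_trimmed_mean(price_changes, weights, 0.16, 0.16), 2))
print("weighted median   :", round(weighted_median(price_changes, weights), 2))
```

An asymmetric trim, like the Dallas Fed's trimmed-mean PCE mentioned above, simply passes different lower and upper shares to the same kind of function.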

Thursday, October 04, 2012

'Economic Research vs. the Blogosphere'

One more quick one. Acemoglu and Robinson respond to a recent post that appeared here (and to posts by others; their point three responds to my comments):

Economic Research vs. the Blogosphere: Our new working paper, co-authored with Thierry Verdier, received an unexpected amount of attention from the blogosphere; unfortunately, most of it negative. The paper can be found here, and some of the more interesting reactions are here, here and here. A fairly balanced and insightful summary of several of the comments can be found here.
We are surprised and intrigued. This is the first time, to the best of our knowledge, that one of our papers, a theoretical one at that, has become such a hot button issue. Upon reflection, we think this says not so much about the paper but about ideology and lack of understanding by many of what economic research is — or should be — about. So this gives us an opportunity to ruminate on these matters. ...[continue reading]...

Wednesday, October 03, 2012

The Effects of Medicaid Eligibility

This is from the NBER:
Saving Teens: Using a Policy Discontinuity to Estimate the Effects of Medicaid Eligibility, by Bruce D. Meyer, Laura R. Wherry, NBER Working Paper No. 18309, Issued in August 2012: [Open Link to Paper]: This paper uses a policy discontinuity to identify the immediate and long-term effects of public health insurance coverage during childhood. Our identification strategy exploits a unique feature of several early Medicaid expansions that extended eligibility only to children born after September 30, 1983. This feature resulted in a large discontinuity in the lifetime years of Medicaid eligibility of children at this birthdate cutoff. Those with family incomes at or just below the poverty line had close to five more years of eligibility if they were born just after the cutoff than if they were born just before. We use this discontinuity in eligibility to measure the impact of public health insurance on mortality by following cohorts of children born on either side of this cutoff from childhood through early adulthood. We examine changes in rates of mortality by the underlying causes of death, distinguishing between deaths due to internal and external causes. We also examine outcomes separately for black and white children. Our analysis shows that black children were more likely to be affected by the Medicaid expansions and gained twice the amount of eligibility as white children. We find a substantial effect of public eligibility during childhood on the later life mortality of black children at ages 15-18. The estimates indicate a 13-18 percent decrease in the internal mortality rate of black teens born after September 30, 1983. We find no evidence of an improvement in the mortality of white children under the expansions.

I'll let people connect their own dots, if they think it's appropriate, between who is helped and who is not and the current debate over Medicaid funding.
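For the curious, here is a stylized sketch of the kind of birthdate-cutoff comparison the paper describes, run on synthetic data. The variable names and the simulated discontinuity are assumptions for illustration only; this is not the authors' data or specification.

```python
# Stylized regression-discontinuity sketch around the September 30, 1983 birthdate cutoff.
# Synthetic data and made-up effect size; illustrative only.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 5000
df = pd.DataFrame({"months_from_cutoff": rng.integers(-24, 25, n)})  # birth month relative to cutoff
df["eligible"] = (df["months_from_cutoff"] > 0).astype(int)          # born after the cutoff
# Synthetic mortality indicator at ages 15-18 with a small discontinuity at the cutoff
p = 0.020 - 0.004 * df["eligible"] + 0.0001 * df["months_from_cutoff"]
df["died_15_18"] = rng.binomial(1, p.clip(0.001, 0.999))

# Local linear regression with separate slopes on each side of the cutoff
rd = smf.ols("died_15_18 ~ eligible + months_from_cutoff + eligible:months_from_cutoff",
             data=df).fit(cov_type="HC1")
print(rd.params["eligible"])   # estimated jump in the mortality rate at the cutoff
```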

Friday, September 07, 2012

Recent Developments in CEO Compensation

Thursday, August 09, 2012

Monetary Policy and Inequality in the U.S.

I need to read this paper:

Innocent Bystanders? Monetary Policy and Inequality in the U.S., by Olivier Coibion, Yuriy Gorodnichenko, Lorenz Kueng, and John Silvia, NBER Working Paper No. 18170, Issued in June 2012 [open link]: Abstract We study the effects and historical contribution of monetary policy shocks to consumption and income inequality in the United States since 1980. Contractionary monetary policy actions systematically increase inequality in labor earnings, total income, consumption and total expenditures. Furthermore, monetary shocks can account for a significant component of the historical cyclical variation in income and consumption inequality. Using detailed micro-level data on income and consumption, we document the different channels via which monetary policy shocks affect inequality, as well as how these channels depend on the nature of the change in monetary policy.

And, part of the conclusion:

VI Conclusion Recent events have brought both monetary policy and economic inequality to the forefront of policy issues. At odds with the common wisdom of mainstream macroeconomists, a tight link between the two has been suggested by a number of people, ranging widely across the political spectrum from Ron Paul and Austrian economists to Post-Keynesians such as James Galbraith. But while they agree on a causal link running from monetary policy actions to rising inequality in the U.S., the suggested mechanisms vary. Ron Paul and the Austrians emphasize inflationary surprises lowering real wages in the presence of sticky prices and thereby raising profits, leading to a reallocation of income from workers to capitalists. In contrast, post-Keynesians emphasize the disinflationary policies of the Federal Reserve and their disproportionate effects on employment and wages of those at the bottom end of the income distribution.
We shed new light on this question by assessing the effects of monetary policy shocks on consumption and income inequality in the U.S. Contractionary monetary policy shocks appear to have significant long-run effects on inequality, leading to higher levels of income, labor earnings, consumption and total expenditures inequality across households, in direct contrast to the directionality advocated by Ron Paul and Austrian economists. Furthermore, while monetary policy shocks cannot account for the trend increase in income inequality since the early 1980s, they appear to have nonetheless played a significant role in cyclical fluctuations in inequality and some of the longer-run movements around the trends. This is particularly true for consumption inequality, which is likely the most relevant metric from a policy point of view, and expenditure inequality after changes in the target inflation rate. To the extent that distributional considerations may have first-order welfare effects, our results point to a need for models with heterogeneity across households which are suitable for monetary policy analysis. While heterogeneous agent models with incomplete insurance markets have become increasingly common in the macroeconomics literature, little effort has, to the best of our knowledge, yet been devoted to considering their implications for monetary policy. In light of the empirical evidence pointing to non-trivial effects of monetary policy on economic inequality, this seems like an avenue worth developing further in future research. ...
Finally, the sensitivity of inequality measures to monetary policy actions points to even larger costs of the zero-bound on interest rates than is commonly identified in representative agent models. Nominal interest rates hitting the zero-bound in times when the central bank’s systematic response to economic conditions calls for negative rates is conceptually similar to the economy being subject to a prolonged period of contractionary monetary policy shocks. Given that such shocks appear to increase income and consumption inequality, our results suggest that standard representative agent models may significantly understate the welfare costs of zero-bound episodes.
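To give a sense of how effects like these can be traced out, here is a rough local-projection sketch of the response of an inequality measure to an identified monetary policy shock series. The file and column names are assumptions, and local projections are used here only as a convenient stand-in, not necessarily the authors' estimation method.

```python
# Illustrative local projections: response of a consumption-inequality measure to an
# identified monetary policy shock (hypothetical file and column names).
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("inequality_and_shocks.csv", parse_dates=["date"], index_col="date")
# assumed columns: 'gini_consumption' (inequality measure), 'mp_shock' (identified shock)

horizons, irf = range(21), []
for h in horizons:
    df[f"dy_{h}"] = df["gini_consumption"].shift(-h) - df["gini_consumption"].shift(1)
    fit = smf.ols(f"dy_{h} ~ mp_shock", data=df).fit(cov_type="HAC", cov_kwds={"maxlags": h + 1})
    irf.append(fit.params["mp_shock"])

# Positive values at longer horizons would indicate that a contractionary shock raises inequality
print(pd.Series(irf, index=list(horizons)))
```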

Saturday, July 14, 2012

It was Mostly the Fall in Demand

Watching Amir Sufi give this paper arguing that a fall in aggregate demand, rather than uncertainty, structural change, and so forth, is the major reason for the fall in employment (with the implication that replacing the lost demand can help the recovery):

What Explains High Unemployment? The Aggregate Demand Channel, by Atif Mian (University of California, Berkeley and NBER) and Amir Sufi (University of Chicago Booth School of Business and NBER), November 2011: Abstract A drop in aggregate demand driven by shocks to household balance sheets is responsible for a large fraction of the decline in U.S. employment from 2007 to 2009. The aggregate demand channel for unemployment predicts that employment losses in the non-tradable sector are higher in high leverage U.S. counties that were most severely impacted by the balance sheet shock, while losses in the tradable sector are distributed uniformly across all counties. We find exactly this pattern from 2007 to 2009. Alternative hypotheses for job losses based on uncertainty shocks or structural unemployment related to construction do not explain our results. Using the relation between non-tradable sector job losses and demand shocks and assuming Cobb-Douglas preferences over tradable and non-tradable goods, we quantify the effect of the aggregate demand channel on total employment. Our estimates suggest that the decline in aggregate demand driven by household balance sheet shocks accounts for almost 4 million of the lost jobs from 2007 to 2009, or 65% of the lost jobs in our data.

And, from the conclusion:

Alternative hypotheses such as business uncertainty and structural adjustment of the labor force related to construction are less consistent with the facts. The argument that businesses are holding back hiring because of regulatory or financial uncertainty is difficult to reconcile with the strong cross-sectional relation between household leverage levels, consumption, and employment in the non-tradable sector. This argument is also difficult to reconcile with survey evidence from small businesses and economists saying that lack of product demand has been the primary worry for businesses throughout the recession (Dennis (2010), Izzo (2011)).
There is certainly validity to the structural adjustment argument given large employment losses associated with the construction sector. However, we show that the leverage ratio of a county is a far more powerful predictor of total employment losses than either the growth in construction employment during the housing boom or the construction share of the labor force as of 2007. Further, using variation across the country in housing supply elasticity, we show that the aggregate demand hypothesis is distinct from the construction collapse view. Finally, structural adjustment theories based on construction do not explain why employment has declined sharply in industries producing tradable goods even in areas that experienced no housing boom.
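A stylized version of the cross-county comparison might look like the sketch below, regressing non-tradable and tradable employment changes on pre-recession household leverage. The file, column names, and weights are assumptions for illustration, not the authors' dataset or exact specification.

```python
# Illustrative cross-county regressions (hypothetical file and column names):
# the aggregate demand channel predicts a strong negative leverage slope for
# non-tradable employment and little or no relationship for tradable employment.
import pandas as pd
import statsmodels.formula.api as smf

counties = pd.read_csv("county_data.csv")
# assumed columns: 'leverage_2006' (household debt-to-income), 'households' (weights),
# 'd_emp_nontradable' and 'd_emp_tradable' (log employment changes, 2007-09)

nontradable = smf.wls("d_emp_nontradable ~ leverage_2006", data=counties,
                      weights=counties["households"]).fit(cov_type="HC1")
tradable = smf.wls("d_emp_tradable ~ leverage_2006", data=counties,
                   weights=counties["households"]).fit(cov_type="HC1")

print("non-tradable slope:", nontradable.params["leverage_2006"])
print("tradable slope    :", tradable.params["leverage_2006"])
```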

Have Blog, Will Travel

I am here today.

Wednesday, June 13, 2012

Does Inequality Lead to a Financial Crisis?

Via an email, more on inequality and crises:

Does Inequality Lead to a Financial Crisis?, by Michael D. Bordo and Christopher M. Meissner: Abstract: The recent global crisis has sparked interest in the relationship between income inequality, credit booms, and financial crises. Rajan (2010) and Kumhof and Rancière (2011) propose that rising inequality led to a credit boom and eventually to a financial crisis in the US in the first decade of the 21st century as it did in the 1920s. Data from 14 advanced countries between 1920 and 2000 suggest these are not general relationships. Credit booms heighten the probability of a banking crisis, but we find no evidence that a rise in top income shares leads to credit booms. Instead, low interest rates and economic expansions are the only two robust determinants of credit booms in our data set. Anecdotal evidence from US experience in the 1920s and in the years up to 2007 and from other countries does not support the inequality, credit, crisis nexus. Rather, it points back to a familiar boom-bust pattern of declines in interest rates, strong growth, rising credit, asset price booms and crises.
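As a rough illustration of how the credit-boom and inequality channels can be tested in such a panel, here is a minimal probit sketch. The file and column names are assumptions; this is not the authors' data or specification.

```python
# Illustrative panel probit (hypothetical file and column names, not the authors' data):
# does real credit growth, or a rising top income share, predict a banking crisis?
import pandas as pd
import statsmodels.formula.api as smf

panel = pd.read_csv("credit_inequality_panel.csv")
# assumed columns: 'crisis' (0/1 banking crisis), 'credit_growth_5yr' (real credit growth),
# 'd_top1_share_5yr' (change in the top-1% income share), plus country and year identifiers

probit = smf.probit("crisis ~ credit_growth_5yr + d_top1_share_5yr", data=panel).fit()
print(probit.summary())
```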