Category Archive for: Academic Papers

Tuesday, April 14, 2015

Secular Stagnation: The Long View

From the NBER Digest:

Secular Stagnation: The Long View, by Matt Nesvisky: Growth economists are divided on whether the U.S. is facing a period of "secular stagnation" - an extended period of slow economic growth in the coming decades. In "Secular Stagnation: The Long View" (NBER Working Paper No. 20836), Barry Eichengreen considers four factors that could contribute to a persistent period of below-potential output and slow growth: a rise in saving due to the global integration of emerging markets, a decline in the rate of population growth, an absence of attractive investment opportunities, and a drop in the relative price of investment goods. He concludes that a decline in the relative price of investment goods is the most likely contributor to an excess of saving over investment.

With regard to long-term future growth rates, a key point of debate is how to interpret, and project forward, the "Third Industrial Revolution": the computer age and the new economy it has created. Some argue that the economic impact of digital technology has largely run its course, while others maintain that we have yet to experience the full effect of computerization. In this context, Eichengreen looks at the economic consequences of the age of steam and of the age of electrification. His analysis identifies two dimensions of the economic impact: "range of applicability" and "range of adaptation."
Range of applicability refers to the number of sectors or activities to which the key innovations can be applied. Use of the steam engine of the first industrial revolution for many years was limited to the textile industry and railways, which accounted for only a relatively small fraction of economic activity. Electrification in the second industrial revolution, says Eichengreen, had a larger impact on output and productivity growth because it affected a host of manufacturing industries, many individual households, and a wide range of activities within decades of its development.
The "computer revolution" of the second half of the 20th century had a relatively limited impact on overall economic growth, Eichengreen writes, because computerization had deeply transformative effects on only a limited set of industries, including finance, wholesale and retail trade, and the production of computers themselves. This perspective suggests that the implications for output and productivity of the next wave of innovations will depend greatly on their range of applicability. Innovations such as new tools (quantum computers), materials (graphene), processes (genetic modification), robotics, and enhanced interactivity of digital devices all promise a broad range of applications.
Range of adaptation refers to how comprehensively economic activity must be reorganized before positive impacts on output and productivity occur. Eichengreen reasons that the greater the required range of adaptation, the higher the likelihood that growth may slow in the short run, as costly investments in adaptation must be made and existing technology must be disrupted.
Yet the slow productivity growth in the United States in recent years may have positive implications for the future, he writes. Many connected activities and sectors - health care, education, industrial research, and finance - are being disrupted by the latest technologies. But once a broad range of adaptations is complete, productivity growth should accelerate, he reasons. "This is not a prediction," Eichengreen concludes, "but a suggestion to look to the range of adaptation required in response to the current wave of innovations when seeking to interpret our slow rate of productivity growth and when pondering our future."

Wednesday, March 18, 2015

'Arezki, Ramey, and Sheng on News Shocks'

I was at this conference as well. This paper was very well received (it has been difficult to find evidence that news generates business cycles, in part because it's been difficult to find a "clean" shock):

Arezki, Ramey, and Sheng on news shocks: I attended the NBER EFG (economic fluctuations and growth) meeting a few weeks ago, and saw a very nice paper by Rabah Arezki, Valerie Ramey, and Liugang Sheng, "News Shocks in Open Economies: Evidence from Giant Oil Discoveries." (There were a lot of nice papers, but this one is more bloggable.)

They look at what happens to economies that discover they have a lot of oil. ... An oil discovery is a well identified "news shock."

Standard productivity shocks are a bit nebulous, and alter two things at once: they give greater productivity and hence incentive to work today and also news about more income in the future.

An oil discovery is well publicized. It incentivizes a small investment in oil drilling, but mostly is pure news of an income flow in the future. It does not affect overall labor productivity or other changes to preferences or technology.
Rabah, Valerie, and Liugang then construct a straightforward macro model of such an event. ...[describes model and results]...

Valerie, presenting the paper, was a bit discouraged. This "news shock" doesn't generate a pattern that looks like standard recessions, because GDP and employment go in opposite directions.

I am much more encouraged. Here are macroeconomies behaving exactly as they should, in response to a shock where for once we really know what the shock is. And in response to a shock with a nice dynamic pattern, which we also really understand.

My comment was something to the effect of "this paper is much more important than you think. You match the dynamic response of economies to this large and very well identified shock with a standard, transparent and intuitive neoclassical model. Here's a list of some of the ingredients you didn't need: sticky prices, sticky wages, money, monetary policy (i.e. interest rates that respond via a policy rule to output and inflation, or zero bounds that stop them from doing so), home bias, segmented financial markets, credit constraints, liquidity constraints, hand-to-mouth consumers, financial intermediation, liquidity spirals, fire sales, leverage, sudden stops, hot money, collateral constraints, incomplete markets, idiosyncratic risks, strange preferences including habits, nonexpected utility, ambiguity aversion, and so forth, behavioral biases, or rare disasters. If those ingredients are really there, they ought to matter for explaining the response to your shocks too. After all, there is only one economic structure, which is hit by many shocks. So your paper calls into question just how many of those ingredients are really there at all."

Thomas Philippon, whose previous paper had a pretty masterful collection of a lot of those ingredients, quickly pointed out my overstatement. One does not need every ingredient to understand every shock. Constraint variables are inequalities: a positive news shock may not cause credit constraints etc. to bind, while a negative shock may reveal them.

Good point. And really, the proof is in the pudding. If those ingredients are not necessary, then I should produce a model without them that produces events like 2008. But we've been debating the ingredients and shock necessary to explain 1932 for 82 years, so that approach, though correct, might take a while.

In the meantime, we can still cheer successful simple models and well identified shocks on the few occasions that they appear and fit data so nicely. Note to graduate students, this paper is a really nice example to follow for its integration of clear theory and excellent empirical work.

Tuesday, February 17, 2015

'Applying Keynes's Insights about Liquidity Preference to the Yield Curve'

Via email, a new paper from Josh R. Stillwagon, an Assistant Professor of Economics at Trinity College, appearing in the Journal of International Financial Markets, Institutions & Money. The paper "applies some of Keynes's insights about liquidity preference to understanding term structure premia." The following is a paraphrased excerpt from the conclusion:

"This work uses survey data on traders' interest rate forecasts to test the expectations hypothesis of the term structure and finds clear evidence of a time-varying risk premium in four markets... Further, it identifies two significant factors which impact the magnitude of the risk premium. The first is overall consumer sentiment analogous to Keynes's "animal spirits"... The second factor is the level of and/or changes in the interest rate, consistent with the imperfect knowledge economics gap model [applied now to term premia]; the intuition being that the increasing skew to potential bond price movements from a fall in the interest rate [leaving more to fear than to hope as Keynes put it] causes investors to demand a greater premium. This was primarily observed in the medium-run relations of the I(2) CVAR, indicating that these effects are transitory suggesting, as Keynes argued, that what matters is not merely how far the interest rate is from zero but rather how far it is from recent levels."
This link is free for 50 days:

Wednesday, February 11, 2015

'The Long-Term Impact of Inequality on Entrepreneurship and Job Creation'

Via Chris Dillow, a new paper on inequality and economic growth:

The Long-Term Impact of Inequality on Entrepreneurship and Job Creation, by Roxana Gutiérrez-Romero and Luciana Méndez-Errico: Abstract We assess the extent to which historical levels of inequality affect the likelihood of businesses being created, surviving, and of these creating jobs over time. To this end, we build a pseudo-panel of entrepreneurs across 48 countries using the Global Entrepreneurship Monitor Survey over 2001-2009. We complement this pseudo-panel with historical data of income distribution and indicators of current business regulation. We find that in countries with higher levels of inequality in the 1700s and 1800s, businesses today are more likely to die young and create fewer jobs. Our evidence supports economic theories that argue initial wealth distribution influences countries’ development path, having therefore important policy implications for wealth redistribution.

Chris argues through a series of examples that such long-term effects are reasonable (things in the 1700s and 1800s mattering today), and then concludes with:

... All this suggests that, contrary to simple-minded neoclassical economics and Randian libertarianism, individuals are not and cannot be self-made men. We are instead creations of history. History is not simply a list of the misdeeds of irrelevant has-beens; it is a story of how we were made. Burke was right: society is "a partnership not only between those who are living, but between those who are living, those who are dead, and those who are to be born."
One radical implication of all this is Herbert Simon's:
When we compare the poorest with the richest nations, it is hard to conclude that social capital can produce less than about 90 percent of income in wealthy societies like those of the United States or Northwestern Europe. On moral grounds, then, we could argue for a flat income tax of 90 percent to return that wealth to its real owners.

I find myself skeptical of such long-term effects, but maybe...

Thursday, January 08, 2015

'The Link between High Employment and Labor Market Fluidity'

Laurent Belsie in the NBER Digest:

The Link between High Employment and Labor Market Fluidity: U.S. labor markets lost much of their fluidity well before the onset of the Great Recession, according to Labor Market Fluidity and Economic Performance (NBER Working Paper No. 20479). The economy's ability to move jobs quickly from shrinking firms to young, growing enterprises slowed after 1990. Job reallocation rates fell by more than a quarter. After 2000, the volume of hiring and firing - known as the worker reallocation rate - also dropped. The decline was broad-based, affecting multiple industries, states, and demographic groups. The groups that suffered the most were the less-educated and the young, particularly young men.
"The loss of labor market fluidity suggests the U.S. economy became less dynamic and responsive in recent decades," authors Steven J. Davis and John Haltiwanger conclude. "Direct evidence confirms that U.S. employers became less responsive to shocks in recent decades, not that employer-level shocks became less variable."

Many factors contributed to the decline in job and worker reallocation rates, among them a shift to older companies, an aging workforce, changing business models and supply chains, the effects of the information revolution on hiring, and government policies.
About a quarter of the decline in job reallocation can be explained by the decline in the formation of young firms in the U.S. From the early 1980s until about 2000, retail and services accounted for most of the decline in job reallocation. This occurred even though jobs shifted away from manufacturing and toward retail, where job creation is normally more dynamic and worker turnover more pronounced. One reason for the slowdown in turnover was the growing importance of big-box chains in the retail sector. The authors note that other studies find that jobs are more durable in larger retail firms, and their workers are more productive than workers at the smaller stores these retailers replaced.
Fewer layoffs and more employment stability are generally considered positive trends and natural outgrowths of an aging workforce. The flip side of this equation, however, is that slower job and worker reallocation mean slower creation of new jobs, putting the jobless, including young people, at a heightened risk of long-term unemployment. These developments also slow job advancement and career changes, which are associated with boosts in wages.
This is of particular significance since 2000, when the concentration of declines in job reallocation rates and the employment share of young firms shifted from the retail sector to high-tech industries.
"These developments raise concerns about productivity growth, which has close links to creative destruction and factor reallocation in prominent theories of innovation and growth and in many empirical studies," the authors write.
Government regulation also played a role in slowing job and worker reallocation rates. In 1950, under five percent of workers required a government license to hold their job; by 2008, the percentage had risen to 29 percent. Add in government certification and the share rises to 38 percent. Wrongful discharge laws make it harder to fire employees. Federal and state laws protect classes of workers based on race, religion, gender, and other attributes. Minimum-wage laws and the heightened importance of employer-provided health insurance also make job changes less frequent.
The authors study the effects of the decline in job and worker reallocation rates on employment rates by gender, education, and age, using state-level data. They find that states with especially large declines in labor market fluidity also experienced the largest declines in employment rates, with young and less-educated persons the most adversely affected.
"...if our assessment is correct," the authors conclude, "the United States is unlikely to return to sustained high employment rates without restoring labor market fluidity."

Monday, November 10, 2014

'Honest Abe Was a Co-op Dude: How the G.O.P. Can Save Us from Despotism'

Stephen Ziliak emails:

Dear Mark:
I thought you and readers of Economist's View would like to know about an essay, "Honest Abe Was a Co-op Dude: How the G.O.P. Can Save Us from Despotism", hot off the press. Here is the abstract:
Abstract: Abraham Lincoln was a co-op dude. He had a hip neck beard, sure. Everyone knows that. But few have bothered to notice that the first Republican President of the United States was an economic democrat who put labor above capital. Labor is prior to and independent of capital, Lincoln believed, and “deserves much the higher consideration”, he told Congress in his first annual address of 1861. Capital despotism is on the rise again, threatening the stability of the economy and union. The biggest problem of democracy now is not the failure to fully extend political rights, however important. The bigger problem is economic in nature. The threat today is from a lack of economic democracy—a lack of ownership, of self-reliance, of autonomy, and of justice in the distribution of rewards and punishments at work. From the appropriation of company revenue to lack of protection against pension raids, capital despotism is rife. “The road to serfdom” has many paths to choose from, Hayek warned in his important book of 1944. But too many Americans—including economists and policymakers—are neglecting the economic path, the road to serfdom caused by a lack of economic democracy. Cooperative banks and firms can help.
And here are a few excerpts:
“Labor is prior to and independent of capital. Capital is only the fruit of labor, and could never have existed if labor had not first existed. Labor is the superior of capital, and deserves much the higher consideration.”
—Abraham Lincoln
"Economists in the know have acknowledged that the worker-owned cooperative firm is the most perfect model of economic democracy and rational business organization dreamed up so far. That is true around the world, from Springfield all the way back to Shelbyville, economists who’ve examined such co-ops agree. Co-ops are more productive. And every worker is an owner."
"From the Dutch blossoming of commerce in the 1600s to the Asian Spring of the 2000s, socialists and capitalists alike have not produced, it seems, a better, more efficient and democratic form of economic production and distribution. Co-ops win. Not everyone is convinced."
"If co-ops are so great, why don’t they dominate the economy? Negligence and ignorance, more than any other possible cause, it would seem.
 For example, the infamous “socialist calculation debate” in economics dragged on for two decades before a single word was said by either side, from Lange and Lerner to von Mises and Hayek, about the nature of the firm. Nary a peep from economists about how or even why firms choose to organize into production units of a certain scale, large or small. Ronald Coase’s article on “The Nature of the Firm” (1937) was good enough to fetch him a Nobel Prize. But Coase did not bring as much clarity to the debate as most economists believe.
 Coase was vague and conventional to the point of embarrassment. He made straw man assumptions about the firm being a hierarchical-capitalistic entity. Coase’s firm, though more “tractable” and “realistic” than previous notions, is assumed to be run by a “master” or masters, by capitalists who seek to maximize profit by bossing around “servants”—that is, wage earners possessing little autonomy, little or no ownership, and no voting rights on capital, their sole purpose being assumed to be serving the “masters” of capital.
 Said Coase, “If a workman moves from department Y to department X, he does not go because of a change in relative prices, but because he is ordered to do so.” But if Coase (himself a lovely man in person) would have taken a closer look at the real world, he could have found cooperative firms succeeding in stark contrast to the anti-democratic firms of his imagination."
Stephen T. Ziliak

Thursday, October 23, 2014

'The Effects of a Money-Financed Fiscal Stimulus'

Jordi Galí:

The Effects of a Money-Financed Fiscal Stimulus, by Jordi Galí, September 2014: Abstract I analyze the effects of an increase in government purchases financed entirely through seignorage, in both a classical and a New Keynesian framework, and compare them with those resulting from a more conventional debt-financed stimulus. My findings point to the importance of nominal rigidities in shaping those effects. Under a realistic calibration of such rigidities, a money-financed fiscal stimulus is shown to have very strong effects on economic activity, with relatively mild inflationary consequences. If the steady state is sufficiently inefficient, an increase in government purchases may increase welfare even if such spending is wasteful.

Thursday, October 02, 2014

Is Blogging or Tweeting about Research Papers Worth It?

Via the Lindau blog:

The verdict: is blogging or tweeting about research papers worth it?, by Melissa Terras: Eager to find out what impact blogging and social media could have on the dissemination of her work, Melissa Terras took all of her academic research, including papers that have been available online for years, to the web and found that her audience responded with a huge leap in interest...

Just one quick note. This is what happened when one person started promoting her research through social media. If everyone does it, and there is much more competition for eyeballs, the results might differ.

Monday, September 29, 2014

'Reconstructing Macroeconomic Theory to Manage Economic Policy'

New paper from Joseph Stiglitz:

Reconstructing Macroeconomic Theory to Manage Economic Policy, by Joseph E. Stiglitz, NBER Working Paper No. 20517, September 2014 NBER: Macroeconomics has not done well in recent years: the standard models didn't predict the Great Recession, and even said it couldn't happen. After the bubble burst, the models did not predict the full consequences.
The paper traces the failures to the attempts, beginning in the 1970s, to reconcile macro and microeconomics, by making the former adopt the standard competitive micro-models that were under attack even then, from theories of imperfect and asymmetric information, game theory, and behavioral economics.
The paper argues that any theory of deep downturns has to answer these questions: What is the source of the disturbances? Why do seemingly small shocks have such large effects? Why do deep downturns last so long? Why is there such persistence, when we have the same human, physical, and natural resources today as we had before the crisis?
The paper presents a variety of hypotheses which provide answers to these questions, and argues that models based on these alternative assumptions have markedly different policy implications, including large multipliers. It explains why the apparent liquidity trap today is markedly different from that envisioned by Keynes in the Great Depression, and why the Zero Lower Bound is not the central impediment to the effectiveness of monetary policy in restoring the economy to full employment.

[I couldn't find an open link.]

Monday, July 28, 2014

'Presidents and the U.S. Economy: An Econometric Exploration'

Alan Blinder and Mark Watson:

Presidents and the U.S. Economy: An Econometric Exploration, by Alan S. Blinder and Mark W. Watson, NBER Working Paper No. 20324 [open link]: The U.S. economy has grown faster—and scored higher on many other macroeconomic metrics—when the President of the United States is a Democrat rather than a Republican. For many measures, including real GDP growth (on which we concentrate), the performance gap is both large and statistically significant, despite the fact that postwar history includes only 16 complete presidential terms. This paper asks why. The answer is not found in technical time series matters (such as differential trends or mean reversion), nor in systematically more expansionary monetary or fiscal policy under Democrats. Rather, it appears that the Democratic edge stems mainly from more benign oil shocks, superior TFP performance, a more favorable international environment, and perhaps more optimistic consumer expectations about the near-term future. Many other potential explanations are examined but fail to explain the partisan growth gap.

Monday, July 14, 2014

'Empirical Evidence on Inflation Expectations in the New Keynesian Phillips Curve'

Via email, a comment on my comments about the difficulty of settling questions about the Phillips curve empirically:

Dear Professor Thoma,
I saw your recent post on the difficulty of empirically testing the Phillips Curve, and I just wanted to alert you to a survey paper on this topic that I wrote with Sophocles Mavroeidis and Jim Stock: "Empirical Evidence on Inflation Expectations in the New Keynesian Phillips Curve". It was published in the Journal of Economic Literature earlier this year (ungated working paper).
In the paper we estimate a vast number of specifications of the New Keynesian Phillips Curve (NKPC) on a common U.S. data set. The specification choices include the data series, inflation lag length, sample period, estimator, and so on. A subset of the specifications amount to traditional backward-looking (adaptive expectation) Phillips Curves. We are particularly interested in two key parameters: the extent to which price expectations are forward-looking, and the slope of the curve (how responsive inflation is to real economic activity).
Our meta-analysis finds that essentially any desired parameter estimates can be generated by some reasonable-sounding specification. That is, estimation of the NKPC is subject to enormous specification uncertainty. This is consistent with the range of estimates reported in the literature. Even if one were to somehow decide on a given specification, the uncertainty surrounding the parameter estimates is typically large. We give theoretical explanations for these empirical findings in the paper. To be clear: Our results do not reject the validity of the NKPC (or more generally, the presence of a short-run inflation/output trade-off), but traditional aggregate time series analysis is just not very informative about the nature of inflation dynamics.
Kind regards,
Mikkel Plagborg-Moller
PhD candidate in economics, Harvard University
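The specification-uncertainty point in the letter above can be illustrated with a toy sketch. This is my own illustration, not the authors' code: it generates synthetic inflation and output-gap series, then estimates the "slope" under a few different specification choices (levels vs. differences, full vs. recent sample) and shows that the estimates move around on the very same data. All series and names here (`infl`, `gap`) are assumptions for the example.

```python
# Illustrative sketch of specification uncertainty in Phillips-curve
# estimation, using synthetic data. Not the authors' method or data.
import random

random.seed(0)

# Synthetic data: a persistent output gap and inflation that is
# persistent itself, with only a weak link to the gap.
T = 200
gap = [0.0]
infl = [2.0]
for t in range(1, T):
    gap.append(0.8 * gap[-1] + random.gauss(0, 1))
    infl.append(0.7 * infl[-1] + 0.05 * gap[-1] + random.gauss(0, 0.5))

def ols_slope(y, x):
    """Slope coefficient from a univariate OLS regression of y on x."""
    n = len(y)
    mx, my = sum(x) / n, sum(y) / n
    sxx = sum((xi - mx) ** 2 for xi in x)
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    return sxy / sxx

# Three "specifications" -- a toy stand-in for the paper's much richer
# menu of choices (data series, lag length, estimator, expectations proxy).
specs = {
    "levels, full sample":  ols_slope(infl, gap),
    "levels, last 100 obs": ols_slope(infl[100:], gap[100:]),
    "first diff, full":     ols_slope(
        [b - a for a, b in zip(infl, infl[1:])], gap[1:]),
}
for name, slope in specs.items():
    print(f"{name:>20}: slope estimate = {slope:+.3f}")
```

Even in this stripped-down setting the three estimates differ noticeably, which is the flavor of the paper's finding: with a full menu of reasonable-sounding choices, essentially any desired slope can be produced.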

Monday, June 23, 2014

Bank Failure, Relationship Lending, and Local Economic Performance

John Kandrac (a former Ph.D. student):

Bank Failure, Relationship Lending, and Local Economic Performance, by John Kandrac, Board of Governors of the Federal Reserve System, Finance and Economics Discussion Series: Abstract Whether bank failures have adverse effects on local economies is an important question for which there is conflicting and relatively scarce evidence. In this study, I use county-level data to examine the effect of bank failures and resolutions on local economies. Using quasi-experimental techniques as well as cross-sectional variation in bank failures, I show that recent bank failures lead to lower income and compensation growth, higher poverty rates, and lower employment. Additionally, I find that the structure of bank resolution appears to be important. Resolutions that include loss-sharing agreements tend to be less deleterious to local economies, supporting the notion that the importance of bank failure to local economies stems from banking and credit relationships. Finally, I show that markets with more inter-bank competition are more strongly affected by bank failure. [Download Full text]

Wednesday, May 28, 2014

'Unemployment Insurance and Disability Insurance in the Great Recession'

From the NBER Digest:

Unemployment Insurance and Disability Insurance in the Great Recession: At the end of 2012, 8.8 million American adults were receiving Social Security Disability Insurance (SSDI) benefits. The share of the American public receiving SSDI has more than doubled since 1990. This rapid growth has prompted concerns about SSDI's sustainability: recent projections suggest that the SSDI trust fund will be exhausted in 2016.
SSDI recipients tend to remain in the program, and out of the labor market, from the time they are approved for benefits until they reach retirement age. This means that if unemployed individuals turn to disability insurance as a source of benefits when they exhaust their unemployment insurance (UI), the long-term program costs can be substantial. Some have suggested that the savings from avoided SSDI cases could help to finance the cost of extending UI benefits, but little is known about the interaction between SSDI and UI.
In Unemployment Insurance and Disability Insurance in the Great Recession, (NBER Working Paper No. 19672), Andreas Mueller, Jesse Rothstein, and Till von Wachter use data from the last decade to investigate the relationship between UI exhaustion and SSDI applications. They take advantage of the variability of UI benefit durations during the recent economic downturn. The duration of these benefits was as long as 99 weeks in 2009, remained protracted for several years, then was shortened substantially in 2012. The authors focus on the uneven extension of UI benefits during and after the Great Recession to isolate variation in the duration of these benefits that is not confounded by variation in economic conditions more broadly.
The authors find very little interaction between UI benefit eligibility and SSDI applications, and conclude that SSDI applications do not appear to respond to UI exhaustion. While the authors cannot rule out small effects, they conclude that SSDI applications do not respond strongly enough to contribute meaningfully to a cost-benefit analysis of UI extensions or to account for the cyclical behavior of SSDI applications.
The authors suggest that the tendency for the number of SSDI applications to grow when the economy is weak may reflect variation in the potential reemployment wages of displaced workers, or changes in the employment opportunities of the marginally disabled that influence the evaluation of an SSDI applicant's employability. These channels are not linked to the generosity or duration of UI benefits, and they imply that more stringent functional capacity reviews of SSDI applicants may not reduce recession-induced SSDI claims if these claims reflect examiners' judgments that the applicants are truly not employable in the existing labor market.

Monday, April 28, 2014

New Research in Economics: Central Banking For All: A Modest Case for Radical Reform

Via Nicholas Gruen:

Central Banking For All: A Modest Case for Radical Reform (Download): This paper offers a radical option for banking reform: government should offer central banking services not just to commercial banks, but directly to citizens.
Key Findings
Nicholas Gruen argues the UK and other countries need radical banking reform. This can be achieved by a simple change: giving ordinary people the same right to use central banks’ services as big commercial banks have. Though they enjoy high margins and/or fees, banks add little value to ‘commodity services’ like customer accounts and highly-collateralised mortgages (such as older ones that are partially paid off), which are basically riskless.
There’s widespread agreement that the UK needs better banks and a better deal for bank customers. This report by Nicholas Gruen, economist and founding chairman of Kaggle and The Australian Centre for Social Innovation, proposes a simple but radical solution.
Gruen argues that in the age of the internet, the Bank of England can now extend the services it currently offers only to banks to everyone in the UK. In particular, it should offer (for instance through National Savings & Investments) simple, cheap deposit and savings accounts to all, paying interest at Bank Rate. Second, it should offer to guarantee any well-collateralised mortgage (for instance a residential mortgage for less than 60 per cent of the value of the collateral).
At the moment, commercial banks provide these services at a cost, both in terms of worse rates and fees, with their margins inflated by their funders’ knowledge that they are implicitly government-guaranteed.
By cutting out the middle-man in the form of the banks, Gruen argues customers would get a better deal, and private competitors providing finance could focus on the provision of finance where the efficient pricing of risk is essential – most particularly residential finance above 60 per cent of the value of collateral.
Policy Recommendations
The government should allow the Bank of England to provide central banking services directly to anyone who wants them, not just banks. The Bank should offer to fund or guarantee any well-collateralised mortgage (e.g. with less than 60 per cent of the property value outstanding). It should also, through National Savings and Investments, offer simple deposit and savings accounts to anyone who wants them, with no upper limits, paying Bank Rate of interest.
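The 60 per cent loan-to-value cutoff in the recommendations above is simple to state precisely. The threshold comes from the report; the function name and the example numbers below are mine, added only to illustrate the test.

```python
# Toy illustration of the report's 60 per cent loan-to-value cutoff for a
# "well-collateralised" mortgage. The threshold is from the report; the
# function and the example figures are invented for illustration.

def eligible_for_guarantee(outstanding: float, property_value: float,
                           max_ltv: float = 0.60) -> bool:
    """A mortgage counts as well-collateralised when the outstanding
    balance is below max_ltv of the property's value."""
    return outstanding / property_value < max_ltv

# A partially paid-off mortgage: 150k outstanding on a 300k home (LTV 0.5).
print(eligible_for_guarantee(150_000, 300_000))   # True
# A new high-LTV loan: 270k on a 300k home (LTV 0.9) stays with private lenders.
print(eligible_for_guarantee(270_000, 300_000))   # False
```

This is the sense in which Gruen's proposal leaves the risk-sensitive segment (above 60 per cent LTV) to private competitors.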

Tuesday, April 08, 2014

A Model of Secular Stagnation

Gauti Eggertsson and Neil Mehrotra have an interesting new paper:

A Model of Secular Stagnation, by Gauti Eggertsson and Neil Mehrotra: 1 Introduction During the closing phase of the Great Depression in 1938, the President of the American Economic Association, Alvin Hansen, delivered a disturbing message in his Presidential Address to the Association (see Hansen (1939)). He suggested that the Great Depression might just be the start of a new era of ongoing unemployment and economic stagnation without any natural force towards full employment. This idea was termed the “secular stagnation” hypothesis. One of the main driving forces of secular stagnation, according to Hansen, was a decline in the population birth rate and an oversupply of savings that was suppressing aggregate demand. Soon after Hansen’s address, the Second World War led to a massive increase in government spending, effectively ending any concern of insufficient demand. Moreover, the baby boom following WWII drastically changed the population dynamics in the US, thus effectively erasing the problem of excess savings of an aging population that was of principal importance in his secular stagnation hypothesis.
Recently Hansen’s secular stagnation hypothesis has gained increased attention. One obvious motivation is the Japanese malaise that has by now lasted two decades and has many of the same symptoms as the U.S. Great Depression - namely dwindling population growth, a nominal interest rate at zero, and subpar GDP growth. Another reason for renewed interest is that even if the financial panic of 2008 was contained, growth remains weak in the United States and unemployment high. Most prominently, Lawrence Summers raised the prospect that the crisis of 2008 may have ushered in the beginning of secular stagnation in the United States in much the same way as suggested by Alvin Hansen in 1938. Summers suggests that this episode of low demand may even have started well before 2008 but was masked by the housing bubble before the onset of the crisis of 2008. In Summers’ words, we may have found ourselves in a situation in which the natural rate of interest - the short-term real interest rate consistent with full employment - is permanently negative (see Summers (2013)). And this, according to Summers, has profound implications for the conduct of monetary, fiscal and financial stability policy today.
Despite the prominence of Summers’ discussion of the secular stagnation hypothesis and a flurry of commentary that followed it (see e.g. Krugman (2013), Taylor (2014), DeLong (2014) for a few examples), there has not, to the best of our knowledge, been any attempt to formally model this idea, i.e., to write down an explicit model in which unemployment is high for an indefinite amount of time due to a permanent drop in the natural rate of interest. The goal of this paper is to fill this gap. ...[read more]...

In the abstract, they note the policy prescriptions for secular stagnation:

In contrast to earlier work on deleveraging, our model does not feature a strong self-correcting force back to full employment in the long-run, absent policy actions. Successful policy actions include, among others, a permanent increase in inflation and a permanent increase in government spending. We also establish conditions under which an income redistribution can increase demand. Policies such as committing to keep nominal interest rates low or temporary government spending, however, are less powerful than in models with temporary slumps. Our model sheds light on the long persistence of the Japanese crisis, the Great Depression, and the slow recovery out of the Great Recession.

Tuesday, March 04, 2014

'Will MOOCs Lead to the Democratisation of Education?'

Some theoretical results on MOOCs:

Will MOOCs lead to the democratisation of education?, by Joshua Gans: With all the recent discussion of how hard it is for journalists to read academic articles, I thought I’d provide a little service here and ‘translate’ the recent NBER working paper by Daron Acemoglu, David Laibson and John List, “Equalizing Superstars” for a general audience. The paper contains a ‘light’ general equilibrium model that may be difficult for some to parse.
The paper is interested in what the effect of MOOCs or, in general, web-based teaching options would be on educational outcomes around the world, the distribution of those outcomes and the wages of teachers. ...

Thursday, February 13, 2014

Debt and Growth: There is No Magic Threshold

New paper from the IMF:

Debt and Growth: Is There a Magic Threshold?, by Andrea Pescatori, Damiano Sandri, and John Simon [Free Full Text]: Summary: Using a novel empirical approach and an extensive dataset developed by the Fiscal Affairs Department of the IMF, we find no evidence of any particular debt threshold above which medium-term growth prospects are dramatically compromised. Furthermore, we find the debt trajectory can be as important as the debt level in understanding future growth prospects, since countries with high but declining debt appear to grow as fast as countries with lower debt. Notwithstanding this, we find some evidence that higher debt is associated with a higher degree of output volatility.

[Via Bruce Bartlett on Twitter.]

Wednesday, February 12, 2014

'Is Increased Price Flexibility Stabilizing? Redux'

I need to read this:

Is Increased Price Flexibility Stabilizing? Redux, by Saroj Bhattarai, Gauti Eggertsson, and Raphael Schoenle, NBER Working Paper No. 19886, February 2014 [Open Link]: Abstract We study the implications of increased price flexibility on output volatility. In a simple DSGE model, we show analytically that more flexible prices always amplify output volatility for supply shocks and also amplify output volatility for demand shocks if monetary policy does not respond strongly to inflation. More flexible prices often reduce welfare, even under optimal monetary policy if full efficiency cannot be attained. We estimate a medium-scale DSGE model using post-WWII U.S. data. In a counterfactual experiment we find that if prices and wages are fully flexible, the standard deviation of annualized output growth more than doubles.

Friday, February 07, 2014

Latest from the Journal of Economic Perspectives

A few of the articles from the latest Journal of Economic Perspectives:

When Ideas Trump Interests: Preferences, Worldviews, and Policy Innovations, by Dani Rodrik: Ideas are strangely absent from modern models of political economy. In most prevailing theories of policy choice, the dominant role is instead played by "vested interests"—elites, lobbies, and rent-seeking groups which get their way at the expense of the general public. Any model of political economy in which organized interests do not figure prominently is likely to remain vacuous and incomplete. But it does not follow from this that interests are the ultimate determinant of political outcomes. Here I will challenge the notion that there is a well-defined mapping from "interests" to outcomes. This mapping depends on many unstated assumptions about the ideas that political agents have about: 1) what they are maximizing, 2) how the world works, and 3) the set of tools they have at their disposal to further their interests. Importantly, these ideas are subject to both manipulation and innovation, making them part of the political game. There is, in fact, a direct parallel, as I will show, between inventive activity in technology, which economists now routinely make endogenous in their models, and investment in persuasion and policy innovation in the political arena. I focus specifically on models professing to explain economic inefficiency and argue that outcomes in such models are determined as much by the ideas that elites are presumed to have on feasible strategies as by vested interests themselves. A corollary is that new ideas about policy—or policy entrepreneurship—can exert an independent effect on equilibrium outcomes even in the absence of changes in the configuration of political power. I conclude by discussing the sources of new ideas. Full-Text Access | Supplementary Materials

An Economist's Guide to Visualizing Data, by Jonathan A. Schwabish: Once upon a time, a picture was worth a thousand words. But with online news, blogs, and social media, a good picture can now be worth so much more. Economists who want to disseminate their research, both inside and outside the seminar room, should invest some time in thinking about how to construct compelling and effective graphics. Full-Text Access | Supplementary Materials

Wednesday, January 01, 2014

'Minimum Wages and the Distribution of Family Incomes'

Arin Dube has a new working paper entitled "Minimum wages and the distribution of family incomes."

Here is his short summary:

The paper tries to make sense of the existing literature, while providing new (and I would argue better) answers to old questions such as the effect on the poverty rate, and also conduct a more full-fledged distributional analysis of minimum wages and family incomes using newer tools.

Here is the abstract:

I use data from the March Current Population Survey between 1990 and 2012 to evaluate the effect of minimum wages on the distribution of family incomes for non-elderly individuals. I find robust evidence that higher minimum wages moderately reduce the share of individuals with incomes below 50, 75 and 100 percent of the federal poverty line. The elasticity of the poverty rate with respect to the minimum wage ranges between -0.12 and -0.37 across specifications with alternative forms of time-varying controls and lagged effects; most of these estimates are statistically significant at conventional levels. For my preferred (most saturated) specification, the poverty rate elasticity is -0.24, and rises in magnitude to -0.36 when accounting for lags. I also use recentered influence function regressions to estimate unconditional quantile partial effects of minimum wages on family incomes. The estimated minimum wage elasticities are sizable for the bottom quantiles of the equivalized family income distribution. The clearest effects are found at the 10th and 15th quantiles, where estimates from most specifications are statistically significant; minimum wage elasticities for these two family income quantiles range between 0.10 and 0.43 depending on control sets and lags. I also show that the canonical two-way fixed effects model---used most often in the literature---insufficiently accounts for the spatial heterogeneity in minimum wage policies, and fails a number of key falsification tests. Accounting for time-varying regional effects, and state-specific recession effects both suggest a greater impact of the policy on family incomes and poverty, while the addition of state-specific trends does not appear to substantially alter the estimates. I also provide a quantitative summary of the literature, bringing together nearly all existing elasticities of the poverty rate with respect to minimum wages from 12 different papers. 
The range of the estimates in this paper is broadly consistent with most existing evidence, including for some key subgroups, but previous studies often suffer from limitations including insufficiently long sample periods and inadequate controls for state-level heterogeneity, which tend to produce imprecise and erratic results.
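Dube's elasticities translate directly into back-of-the-envelope poverty numbers. The sketch below uses his preferred estimate (-0.24, rising to -0.36 with lags); the baseline poverty rate and the size of the minimum wage increase are hypothetical inputs, not figures from the paper.

```python
# Back-of-the-envelope use of Dube's preferred poverty-rate elasticity
# (-0.24). The 15 percent baseline poverty rate and the 10 percent wage
# hike are illustrative assumptions, not numbers from the paper.

def poverty_rate_change(pct_mw_increase, elasticity=-0.24):
    """Percent change in the poverty rate implied by a constant-elasticity
    reading of a given percent minimum wage increase."""
    return elasticity * pct_mw_increase

baseline_rate = 15.0   # hypothetical poverty rate, percent of individuals
mw_hike = 10.0         # a 10 percent minimum wage increase

pct_change = poverty_rate_change(mw_hike)           # -2.4 percent change
new_rate = baseline_rate * (1 + pct_change / 100)   # about 14.64 percent
print(pct_change, round(new_rate, 2))
```

Reading the elasticity this way is only a local approximation; the paper's recentered-influence-function estimates trace out effects across the whole income distribution rather than at a single threshold.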

Update: Here is the key graph from the paper (click for larger version):


Sunday, December 01, 2013

God Didn’t Make Little Green Arrows

Paul Krugman notes work by my colleague George Evans relating to the recent debate over the stability of GE models:

God Didn’t Make Little Green Arrows: Actually, they’re little blue arrows here. In any case George Evans reminds me of a paper (pdf) he and co-authors published in 2008 about stability and the liquidity trap, which he later used to explain what was wrong with the Kocherlakota notion (now discarded, but still apparently defended by Williamson) that low rates cause deflation.

The issue is the stability of the deflation steady state ("on the importance of little arrows"). This is precisely the issue George studied in his 2008 European Economic Review paper with E. Guse and S. Honkapohja. The following figure from that paper has the relevant little arrows:


This is the two-dimensional figure (click on it for a larger version) showing the phase diagram for inflation and consumption expectations under adaptive learning (in the New Keynesian model, both consumption (or output) expectations and inflation expectations are central). The intended steady state (marked by a star) is locally stable under learning, but the deflation steady state (given by the other intersection of black curves) is not locally stable, and there are nearby divergent paths with falling inflation and falling output. There is also a two-page summary in George's 2009 Annual Review of Economics paper.

The relevant policy issue came up in 2010 in connection with Kocherlakota's comments about interest rates, and I got George to make a video in Sept. 2010 that makes the implied monetary policy point.

I think it would be a step forward if the EER paper helped Williamson and others who have not understood the disequilibrium stability point. The full EER reference is Evans, George; Guse, Eran; and Honkapohja, Seppo, "Liquidity Traps, Learning and Stagnation," European Economic Review, Vol. 52, 2008, 1438–1463.
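The stability point can be illustrated in one dimension. The sketch below is a reduced-form toy in the spirit of the Evans–Guse–Honkapohja analysis, not their model (which also tracks consumption expectations): a Taylor rule with a zero lower bound generates two steady states, and adaptive learning converges to the intended one but diverges below the deflation one. All parameter values and the temporary-equilibrium mapping are my own illustrative assumptions.

```python
# One-dimensional toy of adaptive learning with a Taylor rule and a zero
# lower bound. Illustrative only: the EGH paper works in two dimensions
# (inflation and consumption expectations) with different functional forms.

PI_STAR = 2.0   # inflation target (intended steady state)
R = 2.0         # natural real rate; deflation steady state sits at -R
PHI = 1.5       # Taylor-rule response to expected inflation
KAPPA = 0.5     # sensitivity of inflation to the real-rate gap
GAMMA = 0.2     # adaptive-learning gain

def actual_inflation(pi_e):
    """Temporary-equilibrium inflation given expected inflation pi_e."""
    i = max(0.0, R + PI_STAR + PHI * (pi_e - PI_STAR))  # Taylor rule + ZLB
    return pi_e - KAPPA * (i - pi_e - R)                # real-rate-gap channel

def learn(pi_e0, periods=200):
    """Iterate adaptive expectations pi_e += gamma * (pi - pi_e)."""
    pi_e = pi_e0
    for _ in range(periods):
        pi_e += GAMMA * (actual_inflation(pi_e) - pi_e)
    return pi_e

print(round(learn(1.0), 2))    # starts near the target: converges to 2.0
print(round(learn(-1.5), 2))   # between the steady states: back to 2.0
print(learn(-2.5) < -10)       # below the deflation steady state: diverges
```

The deflation steady state here is the fixed point at -R = -2: expectations starting just above it climb back to the target, while expectations starting below it fall without bound, which is exactly the "little arrows" instability in the phase diagram.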

Friday, November 15, 2013

'Infant Mortality and the President’s Party'

Chris Blattman:

Do Republican Presidents kill babies?:

Across all nine presidential administrations, infant mortality rates were below trend when the President was a Democrat and above trend when the President was a Republican.

This was true for overall, neonatal, and postneonatal mortality, with effects larger for postneonatal compared to neonatal mortality rates.

Regression estimates show that, relative to trend, Republican administrations were characterized by infant mortality rates that were, on average, three percent higher than Democratic administrations.

In proportional terms, effect size is similar for US whites and blacks. US black rates are more than twice as high as white, implying substantially larger absolute effects for blacks.

A new paper titled "US Infant Mortality and the President's Party". I like my title better.

The abstract also says:

Conclusions: We found a robust, quantitatively important association between net of trend US infant mortality rates and the party affiliation of the president. There may be overlooked ways by which macro-dynamics of policy impact micro-dynamics of physiology, suggesting the political system is a component of the underlying mechanism generating health inequality in the United States.

Monday, November 11, 2013

'Why ask Why? Forward Causal Inference and Reverse Causal Questions'

Andrew Gelman and Guido Imbens posted this at the NBER to try to get the attention of economists:

Why ask Why? Forward Causal Inference and Reverse Causal Questions, by Andrew Gelman and Guido Imbens, NBER Working Paper No. 19614, November 2013 NBER: The statistical and econometrics literature on causality is more focused on "effects of causes" than on "causes of effects." That is, in the standard approach it is natural to study the effect of a treatment, but it is not in general possible to define the causes of any particular outcome. This has led some researchers to dismiss the search for causes as "cocktail party chatter" that is outside the realm of science. We argue here that the search for causes can be understood within traditional statistical frameworks as a part of model checking and hypothesis generation. We argue that it can make sense to ask questions about the causes of effects, but the answers to these questions will be in terms of effects of causes.

Thursday, November 07, 2013

New Research in Economics: The Effect of Household Debt Deleveraging on Unemployment – Evidence from Spanish Provinces

Sebastian Jauch and Sebastian Watzka "find that around 1/3 of the increase in Spanish unemployment following the housing boom is due to household mortgage debt deleveraging. This is strongly at odds with the EC estimates of the NAIRU."

Download Jauch-Watzka:

The Effect of Household Debt Deleveraging on Unemployment – Evidence from Spanish Provinces, by Sebastian Jauch and Sebastian Watzka: Introduction Spanish unemployment has risen from a low of 7% in 2007 to its height of 26% in 2012. Around 6.1 million people are currently unemployed in Spain. Unemployment rates are particularly high for young people, with every second young Spaniard looking for a job. Given the enormous economic, psychological and social problems associated with high and long-lasting unemployment, it is of the utmost importance to study the causes of the high increase in Spanish unemployment.
In this paper we therefore take a close look at one of these causes and study the extent to which the increase in Spanish unemployment is due to the effects of Spanish household debt deleveraging. Using household mortgage debt data for 50 Spanish provinces together with detailed data on sectoral provincial unemployment data we estimate that over the period 2007-10 around 1/3 of the newly unemployed, or a total of approximately 860,000 people, have become unemployed due to mortgage debt-related aggregate demand reasons.
The underlying transmission mechanism investigated in this study begins with a deleveraging shock to the balance sheets of individual households. The shock for households is greater if they must direct more effort to restructure their balance sheets. The more debt a household has accumulated relative to its income before the shock occurred, the more deleveraging the household must arrange by increasing savings and reducing spending after the shock to restructure its balance sheet. Given the elasticity of employment with respect to demand, these deleveraging needs will increase unemployment. ...
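The transmission chain described in the excerpt (more pre-shock debt relative to income, therefore more deleveraging, therefore a bigger demand drop, therefore more unemployment) can be written as a tiny schematic. Every number and functional form below is invented for illustration; the paper estimates this relationship from provincial mortgage and unemployment data rather than assuming it.

```python
# Schematic of the deleveraging-to-unemployment chain: a deleveraging shock
# cuts spending in proportion to the pre-shock debt-to-income ratio, and an
# employment-demand elasticity turns the demand drop into job losses.
# All parameters are illustrative assumptions, not the paper's estimates.

def unemployment_increase(debt_to_income, deleverage_share=0.10,
                          employment_demand_elasticity=0.5):
    """Stylized rise in the unemployment rate from a deleveraging shock."""
    demand_drop = deleverage_share * debt_to_income   # spending cut
    return employment_demand_elasticity * demand_drop

# A high-debt province deleverages harder than a low-debt one:
print(round(unemployment_increase(1.5), 3))   # 0.075
print(round(unemployment_increase(0.5), 3))   # 0.025
```

The cross-province comparison is the point: the same shock hits high-debt provinces harder, which is the variation the paper exploits.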

Tuesday, November 05, 2013

Aggregate Supply: Recent Developments and Implications for the Conduct of Monetary Policy

New paper on how the recession has damaged the economy from Dave Reifschneider, William Wascher, and David Wilcox of the Federal Reserve Board:

Aggregate Supply in the United States: Recent Developments and Implications for the Conduct of Monetary Policy, by Dave Reifschneider, William Wascher, and David Wilcox: Abstract: The recent financial crisis and ensuing recession appear to have put the productive capacity of the economy on a lower and shallower trajectory than the one that seemed to be in place prior to 2007. Using a version of an unobserved components model introduced by Fleischman and Roberts (2011), we estimate that potential GDP is currently about 7 percent below the trajectory it appeared to be on prior to 2007. We also examine the recent performance of the labor market. While the available indicators are still inconclusive, some indicators suggest that hysteresis should be a more present concern now than it has been during previous periods of economic recovery in the United States. We go on to argue that a significant portion of the recent damage to the supply side of the economy plausibly was endogenous to the weakness in aggregate demand—contrary to the conventional view that policymakers must simply accommodate themselves to aggregate supply conditions. Endogeneity of supply with respect to demand provides a strong motivation for a vigorous policy response to a weakening in aggregate demand, and we present optimal-control simulations showing how monetary policy might respond to such endogeneity in the absence of other considerations. We then discuss how other considerations--such as increased risks of financial instability or inflation instability--could cause policymakers to exercise restraint in their response to cyclical weakness.

See here too.

Saturday, November 02, 2013

'Improving GDP Measurement: A Measurement-Error Perspective'

Interesting work on obtaining blended estimates of GDP:

Improving GDP Measurement: A Measurement-Error Perspective, by Boragan Aruoba, Francis X. Diebold, Jeremy Nalewaik, Frank Schorfheide, and Dongho Song, First Draft, January 2013 This Draft, May 2, 2013: Abstract: We provide a new and superior measure of U.S. GDP, obtained by applying optimal signal-extraction techniques to the (noisy) expenditure-side and income-side estimates. Its properties -- particularly as regards serial correlation -- differ markedly from those of the standard expenditure-side measure and lead to substantially-revised views regarding the properties of GDP.
1 Introduction Aggregate real output is surely the most fundamental and important concept in macroeconomic theory. Surprisingly, however, significant uncertainty still surrounds its measurement. In the U.S., in particular, two often-divergent GDP estimates exist, a widely-used expenditure-side version, GDPE, and a much less widely-used income-side version, GDPI. Nalewaik (2010) and Fixler and Nalewaik (2009) make clear that, at the very least, GDPI deserves serious attention and may even have properties in certain respects superior to those of GDPE. That is, if forced to choose between GDPE and GDPI, a surprisingly strong case exists for GDPI. But of course one is not forced to choose between GDPE and GDPI, and a GDP estimate based on both GDPE and GDPI may be superior to either one alone. In this paper we propose and implement a framework for obtaining such a blended estimate. ...

The main result on the serial correlation properties is that the blended measure of "GDPM is highly serially correlated across all specifications (ρ≈.6), much more so than the current 'consensus' based on GDPE (ρ≈.3)."
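The core of the blending idea is classic signal extraction: if GDPE and GDPI are both noisy reads on the same true growth rate, a precision-weighted average has lower error variance than either alone. The sketch below is a stylized Monte Carlo version with made-up noise variances; the paper estimates a full measurement-error state-space model rather than assuming the variances.

```python
# Stylized version of the blending idea: combine two noisy GDP measures
# with inverse-variance weights. The measurement-error variances here are
# invented; the paper estimates them within a state-space model.
import random

random.seed(0)
VAR_E, VAR_I = 1.0, 1.5                        # assumed noise variances
W_E = (1 / VAR_E) / (1 / VAR_E + 1 / VAR_I)    # inverse-variance weight on GDPE

gdpe_err, blend_err = [], []
for _ in range(20000):
    g = random.gauss(2.5, 2.0)                 # true growth (illustrative)
    ge = g + random.gauss(0, VAR_E ** 0.5)     # expenditure-side estimate
    gi = g + random.gauss(0, VAR_I ** 0.5)     # income-side estimate
    gm = W_E * ge + (1 - W_E) * gi             # blended estimate
    gdpe_err.append((ge - g) ** 2)
    blend_err.append((gm - g) ** 2)

mse = lambda xs: sum(xs) / len(xs)
print(mse(blend_err) < mse(gdpe_err))   # blending beats GDPE alone: True
```

With these assumed variances the blend's error variance is 1/(1/1.0 + 1/1.5) = 0.6, below either input's; the paper's actual gains, and the serial-correlation result quoted above, come from the richer dynamic model.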

Friday, October 25, 2013

Are Sticky Prices Costly? Evidence From The Stock Market

Another interesting paper that supports the New Keynesian sticky price assumption:

Are Sticky Prices Costly? Evidence From The Stock Market, by Yuriy Gorodnichenko and Michael Weber, NBER: Abstract We show that after monetary policy announcements, the conditional volatility of stock market returns rises more for firms with stickier prices than for firms with more flexible prices. This differential reaction is economically large as well as strikingly robust to a broad array of checks. These results suggest that menu costs -- broadly defined to include physical costs of price adjustment, informational frictions, etc. -- are an important factor for nominal price rigidity. We also show that our empirical results are qualitatively and, under plausible calibrations, quantitatively consistent with New Keynesian macroeconomic models where firms have heterogeneous price stickiness. Since our framework is valid for a wide variety of theoretical models and frictions preventing firms from price adjustment, we provide "model-free" evidence that sticky prices are indeed costly.

Fiscal Multipliers: Liquidity Traps and Currency Unions

First paper at the conference is interesting:

Fiscal Multipliers: Liquidity Traps and Currency Unions, by Emmanuel Farhi and Iván Werning, NBER: We provide explicit solutions for government spending multipliers during a liquidity trap and within a fixed exchange rate regime using standard closed and open-economy models. We confirm the potential for large multipliers during liquidity traps. For a currency union, we show that self-financed multipliers are small, always below unity. However, outside transfers or windfalls can generate larger responses in output, whether or not they are spent by the government. Our solutions are relevant for local and national multipliers, providing insight into the economic mechanisms at work as well as the testable implications of these models.

Discussant: The "Keynesian demand effect can potentially be very large." Here is a bit of the introduction that explains further:

1 Introduction Economists generally agree that macroeconomic stabilization should be handled first and foremost by monetary policy. Yet monetary policy can run into constraints that impair its effectiveness. For example, the economy may find itself in a liquidity trap, where interest rates hit zero, preventing further reductions in the interest rate. Similarly, countries that belong to currency unions, or states within a country, do not have the option of an independent monetary policy. Some economists advocate for fiscal policy to fill this void, increasing government spending to stimulate the economy. Others disagree, and the issue remains deeply controversial, as evidenced by vigorous debates on the magnitude of fiscal multipliers. No doubt, this situation stems partly from the lack of definitive empirical evidence, but, in our view, the absence of clear theoretical benchmarks also plays an important role. Although various recent contributions have substantially furthered our understanding, to date, the implications of standard macroeconomic models have not been fully worked out. This is the goal of our paper.
We solve for the response of the economy to changes in the path for government spending during liquidity traps or within currency unions using standard closed and open-economy monetary models. ...
Our results confirm that fiscal policy can be especially potent during a liquidity trap. The multiplier for output is greater than one. The mechanism for this result is that government spending promotes inflation. With fixed nominal interest rates, this reduces real interest rates which increases current spending. The increase in consumption in turn leads to more inflation, creating a feedback loop. The fiscal multiplier is increasing in the degree of price flexibility, which is intuitive given that the mechanism relies on the response of inflation. We show that backloading spending leads to larger effects; the rationale is that inflation then has more time to affect spending decisions.
In a currency union, by contrast, government spending is less effective at increasing output. We show that consumption is depressed, so that the multiplier is less than one. Moreover, price flexibility diminishes the effectiveness of spending, instead of increasing it. We explain this result using a simple argument that illustrates its robustness. Government spending leads to inflation in domestically produced goods and this loss in competitiveness depresses private spending. Applied to current debates in Europe, this highlights a possible tradeoff: putting off fiscal consolidation may postpone internal devaluations that actually help reactivate private spending.
It may seem surprising that fiscal multipliers are necessarily less than one whenever the exchange rate is fixed, because this contrasts sharply with the effects during liquidity traps. Our analytical approach allows us to uncover the crucial difference in monetary policy: although a fixed exchange rate implies a fixed nominal interest rate, the converse is not true. Indeed, we prove that the liquidity trap analysis implicitly combines a shock to government spending with a one-off devaluation. The positive response of consumption relies entirely on this devaluation. A currency union rules out such devaluations, explaining the negative response of consumption.
In the context of a currency union, our results uncover the importance of transfers from the outside, from other countries or regions. In the short run, when prices haven’t fully adjusted, positive transfers from the rest of the world increase the demand for home goods, stimulating output. We compute “transfer multipliers” that capture the response of the economy to transfers from the outside. We show that these multipliers may be large and depend crucially on the degree of openness of the domestic economy.
Outside transfers are often tied to government spending. In the United States federal military spending allocated to a particular state is financed by the country as a whole. The same is true for exogenous differences in stimulus payments, due to idiosyncratic provisions in the law. Likewise, idiosyncratic portfolio returns accruing to a particular state’s coffers represent a windfall for this state against the rest. When changes in spending are financed by such outside transfers, the associated multipliers are a combination of self-financed multipliers and transfer multipliers. As a result, multipliers may be substantially larger than one.
Finally, we explore non-Ricardian effects from fiscal policy by introducing hand-to-mouth consumers. We think of this as a tractable way of modeling liquidity constraints. In both a liquidity trap and a currency union, government spending now has an additional stimulative effect. It increases the income and consumption of hand-to-mouth agents. This effect is largest when spending is deficit financed; indeed, the effects may in some cases depend entirely on deficits, not spending per se. Overall, although hand-to-mouth consumers introduce an additional effect, most of our conclusions, such as the comparison of fiscal multipliers in a liquidity trap and a currency union, are unaffected. ...
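The liquidity-trap feedback loop in the excerpt (spending raises inflation, which at a fixed nominal rate lowers the real rate and crowds in consumption, which raises inflation again) can be rendered as a toy fixed-point iteration. The parameters and reduced form below are illustrative assumptions of mine; Farhi and Werning derive exact closed-form multipliers.

```python
# Toy fixed-point rendition of the liquidity-trap multiplier loop: output
# responds to spending plus consumption that is crowded in as inflation
# lowers the real rate at a fixed nominal rate. Illustrative parameters,
# not the paper's closed-form solution.

def lt_multiplier(kappa, sigma=0.5, tol=1e-10):
    """Iterate y = g + sigma*kappa*y (with g = 1) to convergence.
    kappa: slope of inflation in output (price flexibility).
    sigma: interest sensitivity of consumption."""
    y = 0.0
    while True:
        y_new = 1.0 + sigma * kappa * y   # spending + induced consumption
        if abs(y_new - y) < tol:
            return y_new
        y = y_new

print(round(lt_multiplier(kappa=0.2), 2))   # 1.11: above one
print(round(lt_multiplier(kappa=0.8), 2))   # 1.67: flexible prices, bigger
```

The closed form of this toy is 1/(1 - sigma*kappa), so the multiplier exceeds one and rises with price flexibility kappa, matching the qualitative liquidity-trap result in the excerpt; the currency-union case reverses the sign of the inflation channel and is not captured here.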

Thursday, October 10, 2013

Have Blog, Will Travel

I am here for the next two days:

38th Annual Federal Reserve Bank of St. Louis Fall Conference

Thursday, October 10, 2013

8:45 – 9:00 am Opening Remarks
James Bullard, President, Federal Reserve Bank of St. Louis

Session I - Financial Markets 1

9:00 – 10:15 am "Trade Dynamics in the Market for Federal Funds"
Presenter:  Ricardo Lagos, New York University
Coauthor:  Gara Afonso, Federal Reserve Bank of New York
Discussant:  Huberto Ennis, Federal Reserve Bank of Richmond

10:45 am – 12:00 pm "Banks' Risk Exposures"
Presenter:  Martin Schneider, Stanford University
Coauthors:  Juliane Begenau, Stanford University and Monika Piazzesi, Stanford University
Discussant:  Hanno Lustig, University of California-Los Angeles

Session II: Monetary Policy and Macro Dynamics

1:00 – 2:15 pm "Unemployment and Business Cycles"
Presenter:  Martin S. Eichenbaum, Northwestern University
Coauthors:  Lawrence J. Christiano, Northwestern University and Mathias Trabandt, Board of Governors of the Federal Reserve System
Discussant:  Jaroslav Borovicka, New York University

2:45 – 4:00 pm "Conventional and Unconventional Monetary Policy in a Model with Endogenous Collateral Constraints"
Presenter:  Michael Woodford, Columbia University
Coauthors:  Aloísio Araújo, Getulio Vargas Foundation and Susan Schommer, Instituto Nacional de Matemática Pura e Aplicada
Discussant:  Stephen Williamson, Washington University

4:00 – 5:15 pm "Leverage Restrictions in a Business Cycle Model"
Presenter:  Lawrence J. Christiano, Northwestern University
Coauthor:  Daisuke Ikeda, Bank of Japan
Discussant:  Benjamin Moll, Princeton University

Friday, October 11, 2013

Session III: Financial Markets 2

9:00 – 10:15 am "Measuring the Financial Soundness of U.S. Firms, 1926–2012"
Presenter:  Andrew G. Atkeson, University of California-Los Angeles
Coauthor:  Andrea L. Eisfeldt, University of California-Los Angeles and Pierre-Olivier Weill, University of California-Los Angeles
Discussant:  Gian Luca Clementi, New York University

10:45 am – 12:00 pm "The I Theory of Money"
Presenter:  Markus K. Brunnermeier, Princeton University
Coauthor:  Yuliy Sannikov, Princeton University
Discussant:  Ed Nosal, Federal Reserve Bank of Chicago

Session IV: Household Lifecycle Behavior

1:00 – 2:15 pm "Is There 'Too Much' Inequality in Health Spending Across Income Groups?"
Presenter:  Larry E. Jones, University of Minnesota
Coauthors:  Laurence Ales, Carnegie Mellon University and Roozbeh Hosseini, Arizona State University
Discussant:  Selahattin İmrohoroğlu, University of Southern California

2:15 – 3:30 pm "Retirement, Home Production and Labor Supply Elasticities"
Presenter:  Richard Rogerson, Princeton University
Coauthor:  Johanna Wallenius, Stockholm School of Economics
Discussant:  Nancy Stokey, University of Chicago

Monday, October 07, 2013

'Uncertainty Shocks are Aggregate Demand Shocks'

More new research:

Uncertainty Shocks are Aggregate Demand Shocks, by Sylvain Leduc and Zheng Liu: Abstract We present empirical evidence and a theoretical argument that uncertainty shocks act like a negative aggregate demand shock, which raises unemployment and lowers inflation. We measure uncertainty using survey data from the United States and the United Kingdom. We estimate the macroeconomic effects of uncertainty shocks in a vector autoregression (VAR) model, exploiting the relative timing of the surveys and macroeconomic data releases for identification. Our estimation reveals that uncertainty shocks accounted for increases in unemployment of at least one percentage point in the Great Recession and recovery, but did not contribute much to the 1981-82 recession. We present a DSGE model to show that, to understand the observed macroeconomic effects of uncertainty shocks, it is essential to have both labor search frictions and nominal rigidities.
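As a rough illustration of the VAR machinery involved (everything here is a toy: the series are simulated, and a recursive Cholesky ordering stands in for the paper's survey-timing identification), one can fit a VAR(1) by OLS and trace out responses to an uncertainty shock:

```python
import numpy as np

# Toy sketch, not the authors' estimation: simulate three stand-in series
# ordered [uncertainty, unemployment, inflation], fit a VAR(1) by OLS, and
# compute impulse responses to an orthogonalized uncertainty shock via a
# Cholesky factorization (uncertainty may move the others on impact, but
# not vice versa).
rng = np.random.default_rng(0)
T, k = 500, 3
A_true = np.array([[0.5, 0.0, 0.0],
                   [0.3, 0.6, 0.0],
                   [-0.2, 0.1, 0.5]])  # hypothetical dynamics
Y = np.zeros((T, k))
for t in range(1, T):
    Y[t] = A_true @ Y[t - 1] + rng.normal(size=k)

X, Z = Y[:-1], Y[1:]                       # regress Y_t on Y_{t-1}
A_hat = np.linalg.lstsq(X, Z, rcond=None)[0].T
resid = Z - X @ A_hat.T
Sigma = resid.T @ resid / (T - 1 - k)      # residual covariance
P = np.linalg.cholesky(Sigma)              # lower triangular: recursive order

horizons = 10
irf = np.zeros((horizons + 1, k))
shock = P[:, 0]                            # one-std-dev uncertainty shock
for h in range(horizons + 1):
    irf[h] = np.linalg.matrix_power(A_hat, h) @ shock
print(irf.shape)
```

In the paper the identification comes from the relative timing of surveys and data releases rather than a Cholesky ordering; the sketch only shows the mechanics of orthogonalized impulse responses.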

New Research in Economics: The Return and Risk of Pursuing a BA

This is from Frank Levy at MIT:

I am attaching a paper co-authored with two former students that uses California higher ed data to make stylized calculations of the return and risk of pursuing a BA. The paper makes two main points.
Most studies of the rate of return to college use a best-case scenario in which students earn a degree with certainty in four years. More realistic calculations that account for students who take more than four years and students who drop out without a degree, etc. result in an average rate of return that is lower than it was in 2000 but still exceeds the interest rate on unsubsidized Stafford student loans – i.e. college remains a good investment by the normal criteria.
Most studies present an average rate of return without considering the investment’s risk. Over the last decade, rising tuition and deteriorating earnings for new college graduates (particularly at the bottom of the distribution) have increased the risk of pursuing a BA – e.g. the risk that a graduate at age 30 will have student loan payments that exceed 15% of their income. This growing risk is one explanation for increased skepticism about the value of a college degree despite the apparently high rate of return. It also underlines the importance of students becoming aware of the government’s income-contingent loan repayment plans.
The paper is posted on SSRN.

Friday, October 04, 2013

Minimum Wages and Job Growth: a Statistical Artifact

Arin Dube:

Minimum Wages and Job Growth: a Statistical Artifact, by Arin Dube: In a recent paper, Jonathan Meer and Jeremy West argue that it takes time for employment to adjust in response to a minimum wage hike, making it more difficult to detect an impact by looking at employment levels. In contrast, they argue, impact is easier to discern when considering employment growth. They find that a 10 percent increase in minimum wage is associated with as much as 0.5 percentage point lower aggregate employment growth. These estimates are very large, as John Schmitt explains in a recent post, and far outside the range in the existing literature. But are they right?
As I show in a new paper, the short answer is: no. The negative association between job growth and minimum wages is in the wrong place: it shows up in a sector like manufacturing that has few minimum wage workers, but is absent in low-wage sectors like food services and retail. In other words, it is likely a statistical artifact, and not a causal relationship...

Friday, September 27, 2013

Have Blog, Will Travel

I am here today:

Finance and the Wealth of Nations Workshop
Federal Reserve Bank of San Francisco
& The Institute of New Economic Thinking

9:00AM - 9:45AM: David Scharfstein and Robin Greenwood (Harvard Business School), “The Growth of Finance”, Discussant: Bradford DeLong (UC Berkeley)

9:45AM-10:30AM: Ariell Reshef (Virginia) and Thomas Philippon (NYU-Stern), “An International Look at the Growth of Modern Finance”, Discussant: Charles Jones (Stanford GSB)

10:45AM -11:30AM: Andrea Eisfeldt (UCLA-Anderson), Andrew Atkeson (UCLA) and Pierre-Olivier Weill (UCLA), “The Financial Soundness of U.S. Firms 1926–2011: Financial Frictions and the Business Cycle”, Discussant: Jonathan Rose (Federal Reserve Board) 

11:30AM -12:15PM: Ross Levine (UC Berkeley-Haas), Yona Rubinstein (LSE), “Liberty for More: Finance and Educational Opportunities”, Discussant: Gregory Clark (UC Davis)

1:30PM-2:15PM: Atif Mian (Princeton), Amir Sufi (U. Chicago-Booth), “The Effect Of Interest Rate And Collateral Value Shocks On Household Spending: Evidence from Mortgage Refinancing”, Discussant: Reuven Glick (SF Fed)

2:15PM-3:00PM: Maurice Obstfeld (UC Berkeley), “Finance at Center Stage: Some Lessons of the Euro Crisis”, Discussant: Giovanni dell'Ariccia (IMF)

3:00PM-3:45PM: Stephen G. Cecchetti and Enisse Kharroubi (BIS), “Why Does Financial Sector Growth Crowd Out Real Economic Growth?”, Discussant: Barry Eichengreen (UC Berkeley) 

4:00PM-4:45PM: Thorsten Beck (Tilburg), “Financial Innovation: The Bright and the Dark Sides”, Discussant: Sylvain Leduc (SF Fed)

4:45PM-5:30PM: Alan M. Taylor (UC Davis), Òscar Jordà (SF Fed/UC Davis), Moritz Schularick (Bonn), “Sovereigns versus Banks: Crises, Causes and Consequences”, Discussant: Aaron Tornell (UCLA)

6:15PM: Keynote Speaker, Introduction: John Williams (SF Fed, President), Lord Adair Turner (INET, Senior Fellow; former Chairman of the UK Financial Services Authority), "Credit, Money and Leverage"

Thursday, September 12, 2013

New Research in Economics: Rational Bubbles

New research on rational bubbles from George Waters:

Dear Mark,

I’d like to take you up on your offer to publicize research. I’ve spent a good chunk of my time (along with Bill Parke) over the last decade developing an asset price model with heterogeneous expectations, where agents are allowed to adopt a forecast based on a rational bubble.

The idea of a rational bubble has been around for quite a while, but there has been little effort to explain how investors would coordinate on such a forecast when there is a perfectly good alternative forecast based on fundamentals. In our model agents are not assumed to use either forecast but are allowed to switch between forecasting strategies based on past performance, according to an evolutionary game theory dynamic.
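The switching idea can be sketched generically (this is a logit-style performance rule of my own for illustration, not necessarily the authors' exact evolutionary dynamic):

```python
import math

# Generic sketch of performance-based strategy switching: the share of
# agents using each forecast rule is updated with a replicator/logit-style
# rule on recent mean-squared forecast errors. Lower error means higher
# fitness, so the better-performing rule gains adherents.
def update_shares(shares, errors, intensity=2.0):
    fitness = [math.exp(-intensity * e) for e in errors]
    weighted = [s * f for s, f in zip(shares, fitness)]
    total = sum(weighted)
    return [w / total for w in weighted]

shares = [0.5, 0.5]    # [fundamental, bubble] forecasters, hypothetical
errors = [0.10, 0.40]  # hypothetical recent squared forecast errors
shares = update_shares(shares, errors)
print(shares)
```

The `intensity` parameter plays the role of how "aggressive" agents are about switching, which the letter identifies as a key ingredient in bubble formation.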

The primary theoretical point is to provide conditions under which agents coordinate on the fundamental forecast, in accordance with the strong version of the efficient markets hypothesis. However, it is quite possible that agents do not always coordinate on the fundamental forecast, and there are periods when a significant fraction of agents adopt a bubble forecast. This has obvious implications for models that assume a unique rational expectation.

A more practical goal is to model the endogenous formation and collapse of bubbles. Bubbles form when there is a fortuitous correlation between some extraneous information and the fundamentals, and agents are sufficiently aggressive about switching to better performing strategies. Bubbles always collapse due to the presence of a small fraction of agents who do not abandon fundamentals, and the presence of a reflective forecast, a weighted average of the other two forecasts, that is the rational forecast in the presence of heterogeneity.

There are strong empirical implications. The asset price is not forecastable, so the weak version of the efficient markets hypothesis is satisfied. Simulated data from the model shows excess persistence and variance in the asset price and ARCH effects and long memory in the returns.

There is much more work to be done to connect the approach to the literature on the empirical detection of bubbles, and to develop models with dynamic switching between heterogeneous strategies in more sophisticated macro models.

A theoretical examination of the model is forthcoming in Macroeconomic Dynamics.

A more user-friendly exposition of the model and the empirical implications is here.

An older published paper (Journal of Economic Dynamics and Control 31(7)) focuses on ARCH effects and long memory.

Dr. George Waters
Associate Professor of Economics
Illinois State University

Sunday, September 01, 2013

'Limited Time Offer! Temporary Sales and Price Rigidities'

Are prices rigid? (For background and a discussion of previous evidence on price rigidity at both aggregated and disaggregated levels, see this post.) This is via Carola Binder:

Limited Time Offer! Temporary Sales and Price Rigidities: Even though prices change frequently, this does not necessarily mean that prices are very flexible, according to a new paper by Eric Anderson, Emi Nakamura, Duncan Simester, and Jón Steinsson. In "Informational Rigidities and the Stickiness of Temporary Sales," these authors note that it is important to distinguish temporary sales from regular price changes when analyzing the frequency of price adjustment and the response of prices to macroeconomic shocks.
"The literature on price rigidity can be divided into a literature on "sticky prices" and a literature on "sticky information" (which gives rise to sticky plans). A key question in interpreting the extremely high frequencies of price change observed in retail price data is whether these frequent price changes facilitate rapid responses to changing economic conditions, or whether some of these price changes are part of “sticky plans” that are determined substantially in advance and therefore not responsive to changing conditions. ..."
They provide some interesting institutional features of temporary sales and promotions...
They conclude that regular (non-sale) prices exhibit stickiness, while temporary sale prices follow "sticky plans" that are relatively unresponsive in the short run to macroeconomic shocks:
"Our analysis suggests that regular prices are sticky prices that change infrequently but are responsive to macroeconomic shocks, such as the rapid run-up and decline of oil prices. In contrast, temporary sales follow sticky plans. These plans include price discounts of varying depth and frequency across products. But, the plans themselves are relatively unresponsive in the near term to macroeconomic shocks. We believe that this characterization of regular and sale prices as sticky prices versus sticky plans substantially advances an ongoing debate about the extent of retail price fluctuations and offers deeper insight into how retail prices adjust in response to macroeconomic shocks."

Monday, August 19, 2013

'Making Do With Less: Working Harder During Recessions'

New paper:

Making Do With Less: Working Harder During Recessions, by Edward P. Lazear, Kathryn L. Shaw, Christopher Stanton, NBER Working Paper No. 19328 Issued in August 2013: There are two obvious possibilities that can account for the rise in productivity during recent recessions. The first is that the decline in the workforce was not random, and that the average worker was of higher quality during the recession than in the preceding period. The second is that each worker produced more while holding worker quality constant. We call the second effect, “making do with less,” that is, getting more effort from fewer workers. Using data spanning June 2006 to May 2010 on individual worker productivity from a large firm, it is possible to measure the increase in productivity due to effort and sorting. For this firm, the second effect—that workers’ effort increases—dominates the first effect—that the composition of the workforce differs over the business cycle.

Friday, July 26, 2013

How Anti-Poverty Programs Go Viral

This is a summary of research by Esther Duflo, Abhijit Banerjee, Arun Chandrasekhar, and Matthew Jackson on the spread of information about government programs through social networks:

How anti-poverty programs go viral, by Peter Dizikes, MIT News Office: Anti-poverty researchers and policymakers often wrestle with a basic problem: How can they get people to participate in beneficial programs? Now a new empirical study co-authored by two MIT development economists shows how much more popular such programs can be when socially well-connected citizens are the first to know about them.
The economists developed a new measure of social influence that they call “diffusion centrality.” Examining the spread of microfinance programs in rural India, the researchers found that participation in the programs increases by about 11 percentage points when well-connected local residents are the first to gain access to them.
“According to our model, when someone with high diffusion centrality receives a piece of information, it will spread faster through the social network,” says Esther Duflo, the Abdul Latif Jameel Professor of Poverty Alleviation at MIT. “It could thus be a guide for an organization that tries to [place] a piece of information in a network.”
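The measure itself is straightforward to compute. A minimal sketch of diffusion centrality as the paper defines it (sum the first T powers of q times the adjacency matrix, then take row sums; the toy network below is my own example):

```python
import numpy as np

# Diffusion centrality: with adjacency matrix A, passing probability q,
# and T communication rounds, a node's centrality is its row sum of
# sum_{t=1}^{T} (qA)^t, i.e. the expected number of times information
# seeded at that node is heard by others.
def diffusion_centrality(A, q, T):
    A = np.asarray(A, dtype=float)
    M = q * A
    S = np.zeros_like(M)
    P = np.eye(len(A))
    for _ in range(T):
        P = P @ M   # (qA)^t
        S += P
    return S.sum(axis=1)

# Toy 4-node line network 0-1-2-3: interior nodes make better seeds
# than the endpoints.
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]])
dc = diffusion_centrality(A, q=0.5, T=3)
print(dc.round(3))
```

This matches the intuition in the quote: seeding information at a high-diffusion-centrality node spreads it faster through the network.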
The researchers specifically wanted to study how knowledge about a program spreads by word of mouth, MIT professor Abhijit Banerjee says, because “while there was a body of elegant theory on the relation between what the network looks like and the speed of transmission of information, there was little empirical work on the subject.”
The paper, titled “The Diffusion of Microfinance,” is published today in the journal Science. ...
Microfinance is the term for small-scale lending, popularized in the 1990s, that can help relatively poor people in developing countries gain access to credit they would not otherwise have. The concept has been the subject of extensive political debate; academic researchers are still exploring its effects across a range of economic and geographic settings.
“Microfinance is the type of product which is very interesting to study,” Duflo says, “because in many cases it won’t be well known, and hence there is a role for information diffusion.” Moreover, she notes, “It is also the kind of product on which people could have strongly held … opinions.” So, she says, understanding the relationship between social structure and adoption could be particularly important.
Other scholars believe the findings are valuable. Lori Beaman, an economist at Northwestern University, says the paper “significantly moves forward our understanding of how social networks influence people’s decision-making,” and suggests that the work could spur other on-the-ground research projects that study community networks in action.
“I think this work will lead to more innovative research on how social networks can be used more effectively in promoting poverty alleviation programs in poor countries,” adds Beaman... “Other areas would include agricultural technology adoption … vaccinations for children, [and] the use of bed nets [to prevent malaria], to name just a few.”  ...

Thursday, June 06, 2013

Orphanides and Wieland: Complexity and Monetary Policy

A paper I need to read:

Complexity and Monetary Policy, by Athanasios Orphanides and Volker Wieland, CFS Working Paper: Abstract The complexity resulting from intertwined uncertainties regarding model misspecification and mismeasurement of the state of the economy defines the monetary policy landscape. Using the euro area as laboratory this paper explores the design of robust policy guides aiming to maintain stability in the economy while recognizing this complexity. We document substantial output gap mismeasurement and make use of a new model data base to capture the evolution of model specification. A simple interest rate rule is employed to interpret ECB policy since 1999. An evaluation of alternative policy rules across 11 models of the euro area confirms the fragility of policy analysis optimized for any specific model and shows the merits of model averaging in policy design. Interestingly, a simple difference rule with the same coefficients on inflation and output growth as the one used to interpret ECB policy is quite robust as long as it responds to current outcomes of these variables.

Wednesday, May 29, 2013

'Inflation in the Great Recession and New Keynesian Models'

DSGE models are "surprisingly accurate":

Inflation in the Great Recession and New Keynesian Models, by Marco Del Negro, Marc P. Giannoni, and Frank Schorfheide: It has been argued that existing DSGE models cannot properly account for the evolution of key macroeconomic variables during and following the recent Great Recession, and that models in which inflation depends on economic slack cannot explain the recent muted behavior of inflation, given the sharp drop in output that occurred in 2008-09. In this paper, we use a standard DSGE model available prior to the recent crisis and estimated with data up to the third quarter of 2008 to explain the behavior of key macroeconomic variables since the crisis. We show that as soon as the financial stress jumped in the fourth quarter of 2008, the model successfully predicts a sharp contraction in economic activity along with a modest and more protracted decline in inflation. The model does so even though inflation remains very dependent on the evolution of both economic activity and monetary policy. We conclude that while the model considered does not capture all short-term fluctuations in key macroeconomic variables, it has proven surprisingly accurate during the recent crisis and the subsequent recovery. [pdf]

Saturday, May 18, 2013

New Research in Economics: Self-interest vs. Greed and the Limitations of the Invisible Hand

This is from Matt Clements, Associate Professor and Chair of the Economics Department at St. Edward’s University:

Dear Professor Thoma,
Allow me to add to the flood of responses you have no doubt received to your offer to help publicize your readers’ research. The paper is called "Self-interest vs. Greed and the Limitations of the Invisible Hand," forthcoming in the American Journal of Economics and Sociology (pdf of the final version). The point of the paper is that greed, as opposed to enlightened self-interest, can be destructive. Markets always operate within some framework of laws and enforcement, and the claim that greed is good implicitly assumes that the legal framework is essentially perfect. To the extent that laws are suboptimal and enforcement is imperfect, greed can easily enrich some market participants at the expense of total surplus. All of this seemed sufficiently obvious to me that at first I wondered if the paper was even worth writing, but the referees were surprisingly difficult to convince.

Thursday, May 16, 2013

New Research in Economics: Terrorism and the Macroeconomy: Evidence from Pakistan

This is from Sultan Mehmood. The article appears in the May edition of Defence and Peace Economics, which the author describes as "a highly specialized journal on conflict":

Terrorism and the Macroeconomy: Evidence from Pakistan, by Sultan Mehmood, Defence and Peace Economics, May 2013: Summary: The study evaluates the macroeconomic impact of terrorism in Pakistan by utilizing terrorism data for around 40 years. Standard time-series methodology allows us to distinguish between short and long run effects, and it also avoids the aggregation problems in cross-country studies. The study is also one of the few that focuses on evaluating the impact of terrorism on a developing country. The results show that cumulatively terrorism has cost Pakistan around 33.02% of its real national income over the sample period.
Motivation: Studies on the impact of terrorism on the economy have focused almost exclusively on developed countries (see e.g. Eckstein and Tsiddon, 2004). This is surprising because developing countries are not only hit hardest by terrorism but are also more vulnerable to external shocks. Terrorism in Pakistan, greater in magnitude than that in Israel, Greece, Turkey, Spain, and the USA combined in terms of incidents and death count, has consistently made news headlines across the world. Yet terrorism in Pakistan has received relatively little academic attention.
The case of Pakistan is unique for studying the impact of terrorism on the economy for a number of reasons. Firstly, Pakistan has a long and intense history of terrorism which allows one to capture the effect on the economy in the long run. Secondly, growth retarding effects of terrorism are hypothesized to be more pronounced in developing rather than developed countries (Frey et al., 2007). Thirdly, the Pakistani economy is exceptionally vulnerable to external shocks with 12 IMF programmes during 1990-2007 (IMF, 2010, 2011). Lastly, a case study of terrorism for a developing or least-developed country is yet to be done. Scholars of the Copenhagen Consensus studying terrorism note the ‘need for additional case studies, especially of developing countries’ (Enders and Sandler, 2006, p. 31). This research attempts to fill this void.
Main Results: The results of the econometric investigation suggest that terrorism has cost Pakistan around 33.02% of its real national income over the sample time period of 1973–2008, with the adverse impact mainly stemming from a fall in domestic investment and lost workers’ remittances from abroad. This averages to a per annum loss of around 1% of real GDP per capita growth. Moreover, estimates from a Vector Error Correction Model (VECM) show that terrorism impacts the economy primarily through medium- and long-run channels. The article also finds that the negative effect of terrorism lasts for at least two years for most of the macroeconomic variables studied, with the adverse effect on worker remittances, a hitherto ignored factor, lasting for five years. The results are robust to different lag length structures, policy variables, structural breaks and stability tests. Furthermore, it is shown that they are unlikely to be driven by omitted variables, or [Granger type] reverse causality.
Hence, the article finds evidence that terrorism, particularly in emerging economies, might pose significant macroeconomic costs to the economy.

New Research in Economics: Robust Stability of Monetary Policy Rules under Adaptive Learning

I have had several responses to my offer to post write-ups of new research that I'll be posting over the next few days (thanks!), but I thought I'd start with a forthcoming paper from a former graduate student here at the University of Oregon, Eric Gaus:

Robust Stability of Monetary Policy Rules under Adaptive Learning, by Eric Gaus, forthcoming, Southern Economic Journal: Adaptive learning has been used to assess the viability of a variety of monetary policy rules. If agents using simple econometric forecasts "learn" the rational expectations solution of a theoretical model, then researchers conclude the monetary policy rule is a viable alternative. For example, Duffy and Xiao (2007) find that if monetary policy makers minimize a loss function of inflation, interest rates, and the output gap, then agents in a simple three-equation model of the macroeconomy learn the rational expectations solution. On the other hand, Evans and Honkapohja (2009) demonstrate that this may not always be the case. The key difference between the two papers is an assumption about what information the agents of the model have access to. Duffy and Xiao (2007) assume that monetary policy makers have access to contemporaneous variables, that is, they adjust interest rates in response to current inflation and output. Evans and Honkapohja (2009) instead assume that agents can only form expectations of contemporaneous variables. Another difference is that in Duffy and Xiao (2007) agents use all the past data they have access to, whereas in Evans and Honkapohja (2009) agents use a fixed window of data.
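The all-past-data versus fixed-window distinction can be illustrated with a stylized least-squares example (a scalar AR(1), not the paper's three-equation model; all numbers are invented):

```python
import numpy as np

# Stylized sketch of the two data-use rules contrasted above: estimating
# an AR(1) coefficient by least squares from all past data (the analogue
# of decreasing-gain learning) versus from a fixed window of the most
# recent observations (the analogue of fixed-window learning).
rng = np.random.default_rng(1)
T, true_b = 300, 0.8
x = np.zeros(T)
for t in range(1, T):
    x[t] = true_b * x[t - 1] + rng.normal()

def ols_ar1(series):
    # regress series[t] on series[t-1]
    y, ylag = series[1:], series[:-1]
    return float(ylag @ y) / float(ylag @ ylag)

all_data_est = ols_ar1(x)                      # full-history estimate
window = 50
fixed_window_est = ols_ar1(x[-(window + 1):])  # rolling-window estimate
print(round(all_data_est, 2), round(fixed_window_est, 2))
```

The fixed-window estimate is noisier and keeps fluctuating as the window rolls forward, which is the channel through which the paper's switching mechanism can generate endogenous deviations from rational expectations.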
This paper examines several different monetary policy rules under a learning mechanism that changes how much data agents are using. It turns out that as long as the monetary policy makers are able to see contemporaneous endogenous variables (output and inflation) then the Duffy and Xiao (2007) results hold. However, if agents and policy makers use expectations of current variables then many of the policy rules are not "robustly stable" in the terminology of Evans and Honkapohja (2009).
A final result in the paper is that the switching learning mechanism can create unpredictable temporary deviations from rational expectations. This is a rather startling result since the source of the deviations is completely endogenous. The deviations appear in a model with no structural breaks, no multiple equilibria, and no intention of generating such deviations. This result suggests that policymakers should be concerned with the potential that expectations, and expectations alone, can create exotic behavior that temporarily strays from the rational expectations equilibrium.

Wednesday, May 15, 2013

Help Me Publicize Your Research

The previous post reminds me of an offer I've been meaning to make to try to help to publicize academic research:

If you have a paper that is about to be published in an economics journal (or was recently published), send me a summary of the research explaining the findings, the significance of the work, and so on, and I'd be happy to post the write-up here. It can be micro, macro, econometrics, any topic at all, though I'm hoping for something that goes beyond a mere echo of the abstract. I also want to avoid research not yet accepted for publication, so I don't have to make a judgment on the quality of the research -- I don't always have the time to read papers carefully, and they may not be in my area of expertise.

Homeowners Do Not Increase Consumption When Their Housing Prices Increase?

New and contrary results on the wealth effect for housing:

Homeowners do not increase consumption despite their property rising in value, EurekAlert: Although the value of our property might rise, we do not increase our consumption. This is the conclusion of economists from the University of Copenhagen and the University of Oxford in new research that runs contrary to the widely held assumption among economists that a rise in house prices will naturally be followed by a rise in consumption. The results of the study are published in The Economic Journal.
"We argue that leading economists should not wholly be focused on monitoring the housing market. Economists are closely watching the developments on the housing market with the expectation that house prices and household consumption tend to move in tandem, but this is not necessarily the case," says Professor of Economics at University of Copenhagen, Søren Leth-Petersen.
Søren Leth-Petersen has, alongside Professor Martin Browning of the University of Oxford and Associate Professor Mette Gørtz of the University of Copenhagen, tested this widespread assumption of a 'wealth effect' and concluded that it has no significant impact.
Søren Leth-Petersen explains that when economists invoke the 'wealth effect', the presumption is that older homeowners will adjust their consumption the most when house prices change, while younger homeowners will adjust theirs the least. According to this research, however, most homeowners do not feel richer as their housing wealth rises.
"Our research shows that homeowners aged 45 and over, do not increase their consumption significantly when the value of their property goes up, and this goes against the theory of 'wealth effect'. Thus, we are able to reject the theory as the connecting link between rising house prices and increased consumption," explains Søren Leth-Petersen. ...
The research shows that homeowners aged 45 and over did not react significantly to the rise in house prices. However, the younger homeowners, who are typically short of finances, took the opportunity to take out additional consumption loans when given the chance. ...

Tuesday, April 16, 2013

'How Much Unemployment Was Caused by Reinhart and Rogoff's Arithmetic Mistake?'

The work of Reinhart and Rogoff was a major reason for the push for austerity at a time when expansionary policy was called for; that is, their work supported the bad idea that austerity during a recession can actually be stimulative. It isn't, as events in Europe have shown conclusively.

To be fair, as I discussed here (in "Austerity Can Wait for Sunnier Days") after watching Reinhart give a talk on this topic at an INET conference, she didn't assert that contractionary policy was somehow expansionary (i.e. she did not claim the confidence fairy would more than offset the negative short-run effects of austerity). What she asserted is that pain now -- austerity -- can avoid even more pain down the road in the form of lower economic growth.

Here's the problem. She is right that austerity causes pain in the short run. But according to a review of her work with Rogoff discussed below, the lower growth from debt levels above 90 percent that austerity is supposed to avoid appears to be largely the result of errors in the research. In fact, there is no substantial growth penalty from high debt levels, and hence not much gain from short-run austerity.

Here's Dean Baker with a rundown on the new work (see also Mike Konczal who helped to shed light on this research):

How Much Unemployment Was Caused by Reinhart and Rogoff's Arithmetic Mistake?, by Dean Baker: That's the question millions will be asking when they see the new paper by my friends at the University of Massachusetts, Thomas Herndon, Michael Ash, and Robert Pollin. Herndon, Ash, and Pollin (HAP) corrected the spreadsheets of Carmen Reinhart and Ken Rogoff. They show the correct numbers tell a very different story about the relationship between debt and GDP growth than the one that Reinhart and Rogoff have been hawking.
Just to remind folks, Reinhart and Rogoff (R&R) are the authors of the widely acclaimed book on the history of financial crises, This Time is Different. They have also done several papers derived from this research, the main conclusion of which is that high ratios of debt to GDP lead to long periods of slow growth. Their story line is that 90 percent is a cutoff line, with countries with debt-to-GDP ratios above this level seeing markedly slower growth than countries that have debt-to-GDP ratios below this level. The moral is to make sure the debt-to-GDP ratio does not get above 90 percent.
There are all sorts of good reasons for questioning this logic. First, there is good reason for believing causation goes the other way. Countries are likely to have high debt-to-GDP ratios because they are having serious economic problems.
Second, as Josh Bivens and John Irons have pointed out, the story of the bad growth in high debt years in the United States is driven by the demobilization after World War II. In other words, these were not bad economic times, the years of high debt in the United States had slow growth because millions of women opted to leave the paid labor force.
Third, the whole notion of public debt turns out to be ill-defined. ...
But HAP tells us that we need not concern ourselves with any arguments this complicated. The basic R&R story was simply the result of them getting their own numbers wrong.
After being unable to reproduce R&R's results with publicly available data, HAP were able to obtain the spreadsheets that R&R had used for their calculations. It turns out that the initial results were driven by simple computational and transcription errors. The most important of these was excluding four years of growth data from New Zealand in which its debt was above the 90 percent debt-to-GDP threshold; correcting this one mistake alone adds 1.5 percentage points to the average growth rate for the high-debt countries. This eliminates most of the falloff in growth that R&R find from high debt levels. (HAP find several other important errors in the R&R paper, but the missing New Zealand years are the biggest part of the story.)
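The arithmetic point is simple to see with a toy example (the numbers below are invented for illustration, not R&R's or HAP's data): excluding a few high-growth country-years from a bin mechanically drags down that bin's average growth rate.

```python
# Toy illustration of how excluding a handful of observations can flip a
# group average. All growth rates here are made up.
high_debt_growth = [2.8, 3.1, 2.5, 2.9, -7.6, 1.2, 0.8]  # hypothetical bin
excluded = high_debt_growth[:4]  # e.g. four omitted high-growth years

full_avg = sum(high_debt_growth) / len(high_debt_growth)
trimmed = [g for g in high_debt_growth if g not in excluded]
trimmed_avg = sum(trimmed) / len(trimmed)

print(round(full_avg, 2), round(trimmed_avg, 2))
```

With the four high-growth observations dropped, the average for the "high-debt" bin turns sharply negative, even though nothing about the underlying relationship changed.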
This is a big deal because politicians around the world have used this finding from R&R to justify austerity measures that have slowed growth and raised unemployment. In the United States many politicians have pointed to R&R's work as justification for deficit reduction even though the economy is far below full employment by any reasonable measure. In Europe, R&R's work and its derivatives have been used to justify austerity policies that have pushed the unemployment rate over 10 percent for the euro zone as a whole and above 20 percent in Greece and Spain. In other words, this is a mistake that has had enormous consequences.
In fairness, there has been other research that makes similar claims, including more recent work by Reinhart and Rogoff. But it was the initial R&R papers that created the framework for most of the subsequent policy debate. And HAP have shown that the key finding that debt slows growth was driven overwhelmingly by the exclusion of four years of data from New Zealand.
If facts mattered in economic policy debates, this should be cause for a major reassessment of the deficit reduction policies being pursued in the United States and elsewhere. It should also make reporters a bit slower to accept such sweeping claims at face value.
(Those interested in playing with the data itself can find it at the website for the Political Economic Research Institute.)

Update: Reinhart-Rogoff Response to Critique - WSJ.

Monday, March 18, 2013

Trickle-Down Consumption

Robert Frank, who has been arguing for such effects, will like the results in this paper from the NBER:

Trickle-Down Consumption, by Marianne Bertrand and Adair Morse, NBER Working Paper No. 18883 Issued in March 2013 [open link]: Have rising income and consumption at the top of income distribution since the early 1980s induced households in the lower tiers of the distribution to consume a larger share of their income? Using state-year variation in income level and consumption in the top first quintile or decile of the income distribution, we find evidence for such “trickle-down consumption.” The magnitude of the effect suggests that middle income households would have saved between 2.6 and 3.2 percent more by the mid-2000s had incomes at the top grown at the same rate as median income. Additional tests argue against permanent income, upwardly-biased expectations of future income, home equity effects and upward price pressures as the sole explanations for this finding. Instead, we show that middle income households’ consumption of more income elastic and more visible goods and services appears particularly responsive to top income levels, consistent with supply-driven demand and status-driven explanations for our primary finding. Non-rich households exposed to higher top income levels self-report more financial duress; moreover, higher top income levels are predictive of more personal bankruptcy filings. Finally, focusing on housing credit legislation, we suggest that the political process may have internalized and facilitated such trickle-down consumption.

Here's a nice discussion of the work from Chrystia Freeland (and why it will make Robert Frank happy): Trickle-down consumption.

Friday, March 15, 2013

Journal News (BE Journal of Theoretical Economics)

Resignations at the BE Journal of Theoretical Economics

Friday, March 08, 2013

Measuring the Effect of the Zero Lower Bound on Medium- and Longer-Term Interest Rates

Watching John Williams give this paper:

Measuring the Effect of the Zero Lower Bound on Medium- and Longer-Term Interest Rates, by Eric T. Swanson and John C. Williams, Federal Reserve Bank of San Francisco, January 2013: Abstract The federal funds rate has been at the zero lower bound for over four years, since December 2008. According to many macroeconomic models, this should have greatly reduced the effectiveness of monetary policy and increased the efficacy of fiscal policy. However, standard macroeconomic theory also implies that private-sector decisions depend on the entire path of expected future short-term interest rates, not just the current level of the overnight rate. Thus, interest rates with a year or more to maturity are arguably more relevant for the economy, and it is unclear to what extent those yields have been constrained. In this paper, we measure the effects of the zero lower bound on interest rates of any maturity by estimating the time-varying high-frequency sensitivity of those interest rates to macroeconomic announcements relative to a benchmark period in which the zero bound was not a concern. We find that yields on Treasury securities with a year or more to maturity were surprisingly responsive to news throughout 2008–10, suggesting that monetary and fiscal policy were likely to have been about as effective as usual during this period. Only beginning in late 2011 does the sensitivity of these yields to news fall closer to zero. We offer two explanations for our findings: First, until late 2011, market participants expected the funds rate to lift off from zero within about four quarters, minimizing the effects of the zero bound on medium- and longer-term yields. Second, the Fed’s unconventional policy actions seem to have helped offset the effects of the zero bound on medium- and longer-term rates.
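The paper's identification idea can be sketched in a few lines. The code below is a toy version with synthetic data (not the authors' code or data): estimate, regime by regime, the OLS sensitivity of yield changes to macroeconomic announcement surprises. A sensitivity near its normal-times benchmark suggests the yield is unconstrained; a sensitivity near zero is the signature of a yield pinned by the zero bound.

```python
# Toy sketch of the high-frequency sensitivity regression (synthetic data,
# not the authors' code): regress yield changes on announcement surprises.
import numpy as np

rng = np.random.default_rng(0)

def sensitivity(surprises, yield_changes):
    """OLS slope of yield changes on announcement surprises."""
    x = surprises - surprises.mean()
    y = yield_changes - yield_changes.mean()
    return (x @ y) / (x @ x)

# Simulate two regimes: unconstrained (true slope = 1.0) and
# zero-bound-constrained (true slope = 0.05), each with small noise.
surprises = rng.normal(size=50)
unconstrained = 1.00 * surprises + rng.normal(scale=0.1, size=50)
constrained = 0.05 * surprises + rng.normal(scale=0.1, size=50)

print(sensitivity(surprises, unconstrained))  # should be close to 1
print(sensitivity(surprises, constrained))    # should be close to 0
```

In the paper this comparison is run on actual Treasury yields around macro announcements, with the pre-2008 period serving as the benchmark; the finding is that medium- and longer-term yields stayed close to their benchmark sensitivity until late 2011.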

Tuesday, March 05, 2013

'Are Sticky Prices Costly? Evidence From The Stock Market'

There has been a debate in macroeconomics over whether sticky prices -- the key feature of New Keynesian models -- are actually as sticky as assumed, and how large the costs associated with price stickiness actually are. This paper finds "evidence that sticky prices are indeed costly":

Are Sticky Prices Costly? Evidence From The Stock Market, by Yuriy Gorodnichenko and Michael Weber, NBER Working Paper No. 18860, February 2013 [open link]: We propose a simple framework to assess the costs of nominal price adjustment using stock market returns. We document that, after monetary policy announcements, the conditional volatility rises more for firms with stickier prices than for firms with more flexible prices. This differential reaction is economically large as well as strikingly robust to a broad array of checks. These results suggest that menu costs---broadly defined to include physical costs of price adjustment, informational frictions, etc.---are an important factor for nominal price rigidity. We also show that our empirical results are qualitatively and, under plausible calibrations, quantitatively consistent with New Keynesian macroeconomic models where firms have heterogeneous price stickiness. Since our approach is valid for a wide variety of theoretical models and frictions preventing firms from price adjustment, we provide "model-free" evidence that sticky prices are indeed costly.
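The core empirical comparison is easy to sketch. Below is a toy version with simulated announcement-window returns (synthetic data, not the authors' code or dataset): split firms by a price-stickiness proxy and compare return volatility around a policy announcement. In the simulation, the sticky-price group is simply drawn with a larger dispersion, mimicking the paper's finding:

```python
# Toy sketch of the volatility comparison (synthetic data, not the authors' code):
# sticky-price firms are simulated to react more strongly to the announcement.
import numpy as np

rng = np.random.default_rng(1)

def post_announcement_vol(returns):
    """Sample standard deviation of returns in the announcement window."""
    return returns.std(ddof=1)

# Simulated announcement-window returns (percent) for 200 firms per group.
sticky = rng.normal(scale=2.0, size=200)     # sticky-price firms: larger swings
flexible = rng.normal(scale=1.0, size=200)   # flexible-price firms: smaller swings

print(post_announcement_vol(sticky) > post_announcement_vol(flexible))  # True under this simulation
```

The paper's actual exercise is richer — it estimates conditional volatility around FOMC announcements and controls for a broad set of firm characteristics — but the sticky-versus-flexible volatility gap is the object being measured.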

Saturday, March 02, 2013

Booms and Systemic Banking Crises

Everyone at the conference seemed to like this model of endogenous banking crises (me included -- this is the non-technical summary, the paper itself is fairly technical):

Booms and Systemic Banking Crises, by Frederic Boissay, Fabrice Collard, and Frank Smets: ... Non-Technical Summary Recent empirical research on systemic banking crises (henceforth, SBCs) has highlighted the existence of similar patterns across diverse episodes. SBCs are rare events. Recessions that follow SBC episodes are deeper and longer lasting than other recessions. And, more importantly for the purpose of this paper, SBCs follow credit intensive booms; "banking crises are credit booms gone wrong" (Schularick and Taylor, 2012, p. 1032). Rare, large, adverse financial shocks could possibly account for the first two properties. But they do not seem in line with the fact that the occurrence of an SBC is not random but rather closely linked to credit conditions. So, while most of the existing macro-economic literature on financial crises has focused on understanding and modeling the propagation and the amplification of adverse random shocks, the presence of the third stylized fact mentioned above calls for an alternative approach.
In this paper we develop a simple macroeconomic model that accounts for the above three stylized facts. The primary cause of systemic banking crises in the model is the accumulation of assets by households in anticipation of future adverse shocks. The typical run of events leading to a financial crisis is as follows. A sequence of favorable, non-permanent supply shocks hits the economy. The resulting increase in the productivity of capital leads to a demand-driven expansion of credit that pushes the corporate loan rate above steady state. As productivity goes back to trend, firms reduce their demand for credit, whereas households continue to accumulate assets, thus feeding the supply of credit by banks. The credit boom then turns supply-driven and the corporate loan rate goes down, falling below steady state. By giving banks incentives to take more risks or misbehave, too low a corporate loan rate contributes to eroding trust within the banking sector precisely at a time when banks increase in size. Ultimately, the credit boom lowers the resilience of the banking sector to shocks, making systemic crises more likely.
We calibrate the model on the business cycles in the US (post WWII) and the financial cycles in fourteen OECD countries (1870-2008), and assess its quantitative properties. The model reproduces the stylized facts associated with SBCs remarkably well. Most of the time the model behaves like a standard financial accelerator model, but once in a while -- on average every forty years -- there is a banking crisis. The larger the credit boom, (i) the higher the probability of an SBC, (ii) the sooner the SBC, and (iii) -- once the SBC breaks out -- the deeper and the longer the recession. In our simulations, the recessions associated with SBCs are significantly deeper (with a 45% larger output loss) than average recessions. Overall, our results validate the role of supply-driven credit booms leading to credit busts. This result is of particular importance from a policy making perspective as it implies that systemic banking crises are predictable. We indeed use the model to compute the k-step ahead probability of an SBC at any point in time. Fed with actual US data over the period 1960-2011, the model yields remarkably realistic results. For example, the one-year ahead probability of a crisis is essentially zero in the 1960s and 1970s. It jumps up twice during the sample period: in 1982-3, just before the Savings and Loan crisis, and in 2007-9. Although very stylized, our model thus also provides a simple tool to detect financial imbalances and predict future crises.