One more quick one from the airport -- Richard Thaler attempts to nudge people away from the idea that the president can control gas prices (and he calls for an increase in the gas tax):
Why Gas Prices Are Out of Any President’s Control, by Richard Thaler, Commentary, NY Times: Everyone knows it’s dangerous to ingest gasoline or to inhale its fumes. But I am starting to believe that merely thinking about the price of gasoline can damage cognitive processing. Thus I may be risking some of my precious few remaining brain cells by writing about that topic.
Here is a one-item test to see whether you are guilty of cloudy thinking about gas prices: Do you believe that they are something a president can control? Many Americans believe that the answer is yes, but any respectable economist will tell you that the answer is no.
Consider a recent poll of a panel of economists conducted by the University of Chicago Booth School of Business, where I teach. ... The 41 panel members were asked whether they agreed with the following statement: “Changes in U.S. gasoline prices over the past 10 years have predominantly been due to market factors rather than U.S. federal economic or energy policies.”
Not a single member of the panel disagreed with the statement.
Here is why: Oil is a global market in which America is a big consumer but a small supplier. ...[continue reading]...
Bullish It, by knzn: ...Smith’s blog leads me to think about the issue of macroeconomics as a field. It seems (especially from the comment thread) that the Old Keynesians and the New Monetarists are at each other’s throats (but, interestingly, the newly christened Market Monetarists – who have some claim to being the legitimate intellectual heirs of the Old Monetarists – basically seem to be on the same side as the Old Keynesians on the major issues here; and the New Keynesians can break for either side depending on whether they’re more Keynesian or more New). Obviously I’m more sympathetic to the Old Keynesians than the New Monetarists, otherwise maybe my pseudonym would be “dsge” instead of “knzn.”
Here’s my take: to begin with, economics is basically bulls**t. I mean, it’s necessary bulls**t, sometimes even useful bulls**t, but I’m extremely skeptical of people who think economics is a science or that it could be a science. We have to make policy decisions (and investment decisions and personal consumption decisions etc.), and we have to have some basis for making them. We could just use intuition, and we often do, but it’s helpful to use logical thought and empirical data also, and systematic study using fields like economics can help us to clarify our intuition, our logical arguments, and our interpretation of the empirical data. The same way that bulls**t discussions that don’t make any pretense at being science can help.
Economics is bulls**t because it relies on the premise that human beings behave in a systematic way, and they don’t. Once you have done enough research to convince yourself that they behave in a certain way, they will change and start behaving in another way. Particularly if they read your research and realize that you’re trying to manipulate them by expecting them to continue behaving the way they have. But even if they don’t read your research, they may change the way they behave just because the zeitgeist changes – cultural sunspots, if you will.
The last paragraph may vaguely remind you of the Lucas critique. Lucas basically said that macroeconomics (as it was being practiced at the time) was bulls**t, but he held out the hope that it could receive micro-foundations that wouldn’t be bulls**t. The problem with Lucas’ argument, though, is that microeconomics is also bulls**t. And Noah Smith, writing some 36 years after the Lucas critique and observing its unwholesome results, takes it one step further by saying, if I may paraphrase, “Yes, the microeconomics upon which modern macro has now been founded is indeed bulls**t, but if we do the micro right, then we can come up with non-bulls**t macro.”
Yeah, I doubt it. Maybe we can come up with slightly better macro than what we’ve got now, but the underlying micro is never going to be right. Experimental results involving human subjects are inevitably subject to the micro version of the Lucas critique: once the results become well-known, they become part of a new environment that determines a new set of behavior. And the zeitgeist will screw with them also. And so on. And in any case, even if the results were robust, I’m skeptical that we can really build them into a macro model or that it would be worth the trouble even if we could. Economics will always be bulls**t.
Now there’s a case for doing rigorous bulls**t, at least as a potentially useful exercise. That’s what I think DSGE modeling is: it’s a potentially useful exercise in rigorous bulls**t. And I don’t begrudge the work of people like Steve Williamson: I think there's some rigorous bulls**t there that may be worth talking about. But in general, when it comes to bulls**t, there is not a monotonic relationship between rigor and usefulness. And to put all your eggs in the rigorous bulls**t basket – not only that, but in one particular type of rigorous bulls**t basket, because rigor does not live by rational equilibrium alone – is something that not even Pudd’nhead Wilson could advocate.
So I’m going to stick with sloppy Old Keynesian models as my main mode of macroeconomic analysis. They’re bulls**t. They’re not rigorous bulls**t. But as bulls**t goes, they’re pretty useful. A lot more useful than unaided intuition. And they’re easy enough to understand that we can have a reasonable idea of where their unrealistic assumptions are likely to lead us astray. Of course all economic models have unrealistic assumptions, but hopefully our intuition allows us to correct for that condition when applying the models to the real world. If the model is too complicated for the typical economist to understand how the assumptions generate the conclusions, then the unrealism becomes a real problem.
When you need an answer fast to a question that the newer models don't address sufficiently -- and there are many important questions that fall into this category -- and when you don't have time to build a new model before needing to answer, a situation policymakers face constantly, the Old Keynesian IS-LM/MP model can fill the void. It is very easy to use for most questions, in part because it has been explored so thoroughly over the decades. I suspect knzn faces this situation often in his job in finance: he needs an answer today, wants a model for guidance, and doesn't have time to build a full-blown DSGE model, simulate it, and so on.
But if this approach is adopted, I think it's important not to forget the lessons of the more modern models. For example, the old and new IS curves differ in how they handle expectations of the future: the new model accounts for them explicitly, while the old models don't. If changes in expectations about the future are arguably unimportant, and other important differences between the models are similarly unimportant, then the old IS-LM/MP model can provide a good approximation. But when these expectations matter, using the old models can cause you to miss important feedback effects from the expected future to the actual present.
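As a rough illustration, in standard textbook notation (mine, not knzn's or the post's): the old IS curve relates current output only to the current real interest rate, while the new Keynesian IS curve adds expected future output and inflation:

$$y_t = -\sigma\,(i_t - \pi_t) + u_t \quad \text{(old)}, \qquad y_t = E_t\,y_{t+1} - \sigma\,(i_t - E_t\,\pi_{t+1}) + u_t \quad \text{(new)},$$

so in the new curve, anything that shifts expectations of future output or inflation feeds back into demand today -- the feedback channel the old model misses.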
The best of both worlds is, I think, better than either alone. The art is knowing what is "best" in each of the two models.
Discrepancies Between National Income and GDP, by Dean Baker: Binyamin Appelbaum has a NYT blog post suggesting that the economy may be growing more rapidly than the GDP data imply, based on the fact that national income has grown more rapidly in recent quarters. ...
Appelbaum's ... points to a new paper that suggests that we should be taking an average of GDP growth and income growth as our actual measure of economic growth. If we go this route, then it implies that the recovery has been somewhat stronger (and the recession steeper) than the standard measure of GDP growth.
There is an alternative story. David Rosnick and I analyzed the movement of the statistical discrepancy and found a strong inverse correlation between the size of the statistical discrepancy and capital gains in the stock market and housing. This meant, for example, that there was a large negative statistical discrepancy in 1999 and 2000 at the peak of the stock bubble (i.e., income exceeded output), which disappeared after the bubble burst.
The same thing happened in the peak years of the housing bubble, 2004-2007. In that case also, the large gap between the income side measure and the output side measure disappeared after the bubble burst.
The logic is simple. Some amount of capital gains will get misclassified in the national accounts as ordinary income. (Capital gains should not count as income for GDP purposes.) While this may always be true, when we have more capital gains, the amount of capital gains misclassified in this way will be greater.
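To fix notation (a sketch; the symbols are mine, not Baker's): the statistical discrepancy is the gap between the output-side and income-side measures,

$$SD_t = GDP_t - GDI_t,$$

and the alternative measure Appelbaum cites is the simple average of the two growth rates, $\bar{g}_t = \tfrac{1}{2}(g_t^{GDP} + g_t^{GDI})$. On Baker's account, misclassified capital gains inflate measured income when asset prices boom, pushing $SD_t$ sharply negative and making the income side (and hence the average) a misleading guide in bubble periods.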
This story fits the data pretty well. If our analysis is correct, then we are better off sticking with our old friend GDP as the best measure of economic growth.
I've been trying to get the Federal Reserve banks to engage more with the public through blogs, with economics bloggers in particular. We'll see how that goes, but it's encouraging to see that they are starting to converse and debate among themselves:
These facts, according to the authors, provide support for the hypothesis that problems in the labor market cannot be blamed on the degree of mismatch between displaced construction workers and job vacancies in other sectors.
In this post, we present an alternative view of the fate of unemployed construction workers...
Slow and Steady, by Tim Duy: Looking at the spending component of this morning's Personal Income and Outlays report for February, it still pays to focus on the path of spending rather than to become terribly hopeful or despondent about the twists and turns along that path:
The 0.5% gain in February compensated for some earlier weakness in the numbers, while the overall trend holds - spending is rising about 0.18 percent per month compared to 0.24 percent prior to the recession. Spending was supported by a drop in the saving rate, down to 3.7% from 4.3% the previous month. This likely reflects borrowing for new auto purchases - note the stronger trend in durable goods spending:
The acceleration in auto sales has clearly supported this trend since the middle of last year. Apparently, what's good for Detroit is still good for America. The importance of autos in sustaining spending raises the question of what will occur when pent-up demand is satisfied. Obviously, auto sales will stop contributing positively to growth as sales level off at some point in what I would expect to be the not too distant future. This is especially the case considering the anemic pace of personal income growth:
Hopefully, income growth will accelerate as the labor market improves. Otherwise, households will need to take on additional debt or run down saving rates to hold the current trend in place.
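As a back-of-the-envelope check on the trend figures above, here is a minimal Python sketch compounding the two monthly rates quoted in the post into annual rates (the 0.18% and 0.24% inputs are the only data used):

```python
# Compound a monthly percent growth rate into an annual rate.
def annualized(monthly_pct):
    """Convert a monthly percent growth rate to an annualized percent rate."""
    return ((1 + monthly_pct / 100) ** 12 - 1) * 100

print(round(annualized(0.18), 1))  # current trend: about 2.2% per year
print(round(annualized(0.24), 1))  # pre-recession trend: about 2.9% per year
```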
Bottom Line: Consumer spending continues to rise, although its sustainability is still called into question because of the reliance on pent-up demand and falling saving rates to support underlying trends. That said, for all the ups and downs in the monthly data, the trend has generally been upward, at a pace that is disappointing compared to pre-recession trends. Slow and steady has been the best bet.
Inflation: Still Nothing to See Here, By Tim Duy: The February Personal Income and Outlays report came out this morning, and with it a fresh read on the Federal Reserve's preferred inflation measure, the PCE price index. On a year-over-year basis, headline inflation is trending down to the 2% target, while core is settling in just below that target.
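For concreteness, here is the arithmetic behind the two readings discussed below -- the year-over-year rate and the three-month annualized trend -- as a minimal Python sketch using made-up index numbers, not actual PCE data:

```python
# Two standard inflation readings from a monthly price index.
def yoy_inflation(index):
    """Percent change from 12 months earlier."""
    return (index[-1] / index[-13] - 1) * 100

def three_month_annualized(index):
    """3-month percent change, compounded to an annual rate."""
    return ((index[-1] / index[-4]) ** 4 - 1) * 100

# 14 months of hypothetical index levels rising about 0.17% per month.
index = [100 * 1.0017 ** t for t in range(14)]
print(round(yoy_inflation(index), 2))           # ~2.06
print(round(three_month_annualized(index), 2))  # ~2.06
```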
As a reminder, the Fed targets headline inflation over the longer run, but watches core as a signal of where headline is headed. Headline is trending down to core, as expected. The Fed was right to dismiss last year's energy-induced headline increase as a temporary phenomenon. Are there any near-term trends to be concerned about? The three-month core trend edged down a notch to just above 2%:
Still less than the rise experienced in the first part of 2011. What about the path of prices? Still tracking along a trend below the one that prevailed prior to the recession:
Opportunistic disinflation at work.
Bottom Line: Inflation remains contained - by itself, price trends provide no reason for the Fed to turn hawkish. Moreover, there is nothing here to stop Federal Reserve Chairman Ben Bernanke from easing policy should the US recovery falter.
Friday, March 30, the Kauffman Foundation is hosting the fourth annual Economics Bloggers Forum in Kansas City, MO. Check back here at growthology.org for a live stream of the event starting around 8:30 AM. There will be presentations from some of your favorite bloggers. An agenda follows below. [agenda]
The Supreme Court is undermining the public's confidence in its ability to stand above politics:
Broccoli and Bad Faith, by Paul Krugman, Commentary, NY Times: Nobody knows what the Supreme Court will decide with regard to the Affordable Care Act. But ... it seems quite possible that the court will strike down the “mandate” — the requirement that individuals purchase health insurance — and maybe the whole law. Removing the mandate would make the law much less workable, while striking down the whole thing would mean denying health coverage to 30 million or more Americans.
Given the stakes, one might have expected all the court’s members to be very careful... In reality, however,... antireform justices appeared to embrace any argument, no matter how flimsy, that they could use to kill reform.
Let’s start with the already famous exchange in which Justice Antonin Scalia compared the purchase of health insurance to the purchase of broccoli... That comparison horrified health care experts ... because health insurance is nothing like broccoli.
Why? When people choose not to buy broccoli, they don’t make broccoli unavailable to those who want it. But when people don’t buy health insurance until they get sick — which is what happens in the absence of a mandate — the resulting worsening of the risk pool makes insurance more expensive, and often unaffordable, for those who remain. As a result, unregulated health insurance basically doesn’t work, and never has.
There are at least two ways to address this reality... One is to tax everyone ... and use the money raised to provide health coverage. That’s what Medicare and Medicaid do. The other is to require that everyone buy insurance, while aiding those for whom this is a financial hardship.
Are these fundamentally different approaches? ... Here’s what Charles Fried — who was Ronald Reagan’s solicitor general — said..: “I’ve never understood why regulating by making people go buy something is somehow more intrusive than regulating by making them pay taxes and then giving it to them.” ... (By the way, another pet conservative project — private accounts to replace Social Security — relies on, yes, mandatory contributions from individuals.)
So has there been a real change in legal thinking here? Mr. Fried thinks that it’s just politics — and other discussions in the hearings strongly support that perception. ...
As I said, we don’t know how this will go. But it’s hard not to feel a sense of foreboding — and to worry that the nation’s already badly damaged faith in the Supreme Court’s ability to stand above politics is about to take another severe hit.
Skills Mismatch, Construction Workers, and the Labor Market, by Richard Crump and Ayşegül Şahin: Recessions and recoveries typically have been times of substantial reallocation in the economy and the labor market, and the current cycle does not appear to be an exception. The speed and smoothness of reallocation depend in part on the structure of the labor market, particularly the degree of mismatch between the characteristics of available workers and newly available jobs. Such mismatches could occur because of differences in skills between workers and jobs (skills mismatch) or because of differences in the location of the available jobs and available workers (geographic mismatch). In this post, we focus on skills mismatch to assess the extent to which the slow pace of the labor market recovery from the Great Recession can be attributed to such problems. If skills mismatch is much more severe than usual, we would expect the unemployment rate to remain higher for longer and the workers subject to such mismatch to have worse labor market outcomes.
We concentrate particularly on construction workers, who, many have thought, are prone to a high degree of skills mismatch because of the housing boom and bust. Contrary to this view, we find that (1) general measures of mismatch, after rising sharply in the recession, are now near their pre-recession level as they continue to display a pronounced cyclical pattern; and (2) construction workers are not experiencing relatively worse labor market outcomes. ...[continue reading]...
These two points support the argument I've been making that many of the structural factors that people are seeing are likely to be temporary and hence should not constrain monetary policy (i.e. to the extent that natural rates have changed, much of the change is temporary).
My daughter Amy runs communications for Nathan Fletcher, a candidate for mayor of San Diego (formally, she's the Deputy Campaign Manager, Communications -- she played a similar role in Carly Fiorina's Senate campaign). Today, Fletcher announced that he is leaving the GOP.
We need two rational, competitive political parties. If this and similar actions from other moderates can help to bring Republicans back to sanity, that would be good for all of us.
[Let me add something: One of the reasons I started blogging just over seven years ago -- just after Bush was reelected --
was that, in my view, the Democratic party's voice had been taken over by the far left, and that was hurting the party with the more moderate voters it needed to win elections. When I heard the people representing Democrats in the media talk about economic issues, I often wanted to cringe. They weren't helping. Worse, they were very poor at countering the market fundamentalism that the other side used so effectively. So I wanted to try to add one voice, however small -- and it was as small as they get at that point -- to the debate. Free markets are an easy story to tell. Whatever the issue, the answer is the same:
get government out of the way and all will be well with the world. Market failure -- the main reason I advocated government intervention (I even had a series of posts on "Market Failure in Everything") -- is a harder sell, and I wanted to help. I figured that if everyone waited for someone else to do these things, they wouldn't get done, so one day, on a bit of a whim, I started a blog.
It turns out that maybe I'm not as moderate as I thought, and some of the people I thought were nuts might have had a few things to say worth listening to. But that's another story. What I'm wondering is, if the Republicans lose the presidential election, will the more reality-based voices within the Republican party begin to exert themselves far more than they have to date? I certainly hope so.]
The Shadow of Depression, by Brad DeLong, Commentary, Project Syndicate: Four times in the past century, a large chunk of the industrial world has fallen into deep and long depressions characterized by persistent high unemployment: the United States in the 1930’s, industrialized Western Europe in the 1930’s, Western Europe again in the 1980’s, and Japan in the 1990’s. Two of these downturns – Western Europe in the 1980’s and Japan in the 1990’s – cast a long and dark shadow on future economic performance.
In both cases, if either Europe or Japan returned – or, indeed, ever returns – to something like the pre-downturn trend of economic growth, it took (or will take) decades. In a third case, Europe at the end of the 1930’s, we do not know what would have happened had Europe not become a battlefield following Nazi Germany’s invasion of Poland.
In only one instance was the long-run growth trend left undisturbed: US production and employment after World War II were not significantly affected by the macroeconomic impact of the Great Depression. Of course, in the absence of mobilization for WWII, it is possible and even likely that the Great Depression would have cast a shadow on post-1940 US economic growth. That is certainly how things looked, with high levels of structural unemployment and a below-trend capital stock, at the end of the 1930’s, before mobilization and the European and Pacific wars began in earnest. ...[continue reading]...
People who lose jobs, even if they eventually find new ones, suffer lasting damage to their earnings potential, their health, and the prospects of their children. And the longer it takes to find a new job, the deeper the damage appears to be. ...
Healthcare Jujitsu, by Robert Reich: Not surprisingly,... Supreme Court argument over the so-called “individual mandate” requiring everyone to buy health insurance revolved around epistemological niceties...
Behind this judicial foreplay is the brute political fact that if the Court decides the individual mandate is an unconstitutional extension of federal authority, the entire law starts unraveling.
But with a bit of political jujitsu, the President could turn any such defeat into a victory for a single-payer healthcare system – Medicare for all. Here’s how.
The dilemma at the heart of the new law is that it continues to depend on private health insurers, who have to make a profit... Yet the only way private insurers can afford to cover everyone with pre-existing health problems, as the new law requires, is to have every American buy health insurance – including young and healthier people who are unlikely to rack up large healthcare costs.
This dilemma is the product of political compromise. You’ll remember the Administration couldn’t get the votes for a single-payer system such as Medicare for all. It hardly tried. Not a single Republican would even agree to a bill giving Americans the option of buying into it. ...
Republicans have mastered the art of political jujitsu. Their strategy has been to demonize government and seek to privatize everything that might otherwise be a public program financed by tax dollars (see Paul Ryan’s plan for turning Medicare into vouchers). Then they go to court and argue that any mandatory purchase is unconstitutional because it exceeds the government’s authority.
Obama and the Democrats should do the reverse. If the Supreme Court strikes down the individual mandate in the new health law, private insurers will swarm Capitol Hill demanding that the law be amended to remove the requirement that they cover people with pre-existing conditions.
When this happens, Obama and the Democrats should say they’re willing to remove that requirement – but only if Medicare is available to all, financed by payroll taxes. If they did this, the public would be behind them — as would the Supreme Court.
There are other ways to forge a "political compromise" besides this one. I support a single-payer solution, but I can't see how we get there from here without big changes in the political environment.
According to this, for now and into the foreseeable future, "jobless recoveries will be the norm":
Disentangling the channels of the 2007-2009 recession, by Jim Hamilton: Harvard Professor James Stock and Princeton Professor Mark Watson presented a very interesting paper last week at the Spring 2012 Conference for the Brookings Papers on Economic Activity. Their paper studied similarities and differences between the 2007-2009 recession and other U.S. business cycles.
Stock and Watson characterized the comovements over 1959:Q1-2007:Q3 of 198 different U.S. macroeconomic variables...
Their first question was whether the observed U.S. macroeconomic data continued to track those factors in the same way during the most recent recession and recovery as they had historically. Stock and Watson's answer was, for the most part, yes. ...
But if the Great Recession can be interpreted as normal responses to abnormally large shocks, what about the anemic recovery? Stock and Watson attribute this to a slowdown in trend growth rates... Again quoting from Stock and Watson's paper:
The explanation for this declining trend growth rate which we find the most compelling rests on changes in underlying demographic factors, primarily the plateau over the past decade in the female labor force participation rate (after rising sharply during the 1970s through 1990s) and the aging of the U.S. workforce. Because the net change in mean productivity growth over this period is small, this slower trend growth in employment corresponds directly to a slowdown in trend GDP growth. These demographic changes imply continued low or even declining trend growth rates in employment, which in turn imply that future recessions will be deeper, and will have slower recoveries, than historically has been the case. In other words, jobless recoveries will be the norm.
So why are we talking about reducing rather than enhancing social support for the jobless?
This is a bit on the wonkish side, but since I've talked a lot about the difficulties that heterogeneous agents pose in macroeconomics, particularly for aggregation, I thought I should note this review of models with heterogeneous agents:
Macroeconomics with Heterogeneity: A Practical Guide, by Fatih Guvenen, Economic Quarterly, FRB Richmond: This article reviews macroeconomic models with heterogeneous households. A key question for the relevance of these models concerns the degree to which markets are complete. This is because the existence of complete markets imposes restrictions on (i) how much heterogeneity matters for aggregate phenomena and (ii) the types of cross-sectional distributions that can be obtained. The degree of market incompleteness, in turn, depends on two factors: (i) the richness of insurance opportunities provided by the economic environment and (ii) the nature and magnitude of idiosyncratic risks to be insured. First, I review a broad collection of empirical evidence—from econometric tests of "full insurance," to quantitative and empirical analyses of the permanent income ("self-insurance") model that examine how it fits the facts about life-cycle allocations, to studies that try to directly measure where economies place between these two benchmarks ("partial insurance"). The empirical evidence I survey reveals significant uncertainty in the profession regarding the magnitudes of idiosyncratic risks, as well as whether or not these risks have increased since the 1970s. An important difficulty stems from the fact that inequality often arises from a mixture of idiosyncratic risk and fixed (or predictable) heterogeneity, making the two challenging to disentangle. Second, I discuss applications of incomplete markets models to trends in wealth, consumption, and earnings inequality both over the life cycle and over time, where this challenge is evident. Third, I discuss "approximate" aggregation—the finding that some incomplete markets models generate aggregate implications very similar to representative-agent models. What approximate aggregation does and does not imply is illustrated through several examples. Finally, I discuss some computational issues relevant for solving and calibrating such models and I provide a simple yet fully parallelizable global optimization algorithm that can be used to calibrate heterogeneous agent models.
I want to return to the argument about the need for an individual mandate. A post earlier today talks about adverse selection problems in the health insurance market. These problems are driven by the fact that individuals know more about their health status than insurance companies do. But there is another reason for an insurance mandate as well: moral hazard (and avoiding externalities).
We are, I hope, a compassionate society, one that would not let an individual suffer severe health problems, perhaps even death, if treatment is available. In an emergency, we generally give the care that is needed and ask questions later.
This allows relatively healthy people to go without health insurance, secure in the knowledge that if they get hit with a truly catastrophic and expensive-to-treat illness, society will take care of them. If we could make people pay the full cost of this wager that they won't need insurance, i.e. if society could turn its back and say you made your choice, now live (or die) with it, a mandate wouldn't be needed. But we can't (and I wouldn't want to live in a society that could).
When it comes to Social Security we recognize that people can game the system in this way -- contribute nothing during their lives and rely on the fact that society will provide for them when they are old -- and we force them to contribute. That way, they build up their own retirement funds with a long series of small contributions and, at least in part, pay their own way. They have no choice but to do so. If this didn't happen, other members of society would have to pay this portion of the bill.
I don't see anything wrong with asking people to pay the expected value of their health care -- a mandate to get insurance to cover the catastrophic things that society would cover in any case -- to avoid this type of gaming of the system. Yes, it's true that many healthy people will pay, remain healthy, and seem to get nothing. But that's the wrong way to look at it. They have insurance whether they pay for it or not. Society will not let them die of a standard, treatable illness so insurance services are present. In fact, it's the knowledge that society is providing these services that motivates many people to take a chance and go without. So people are getting something, insurance services, in any case and those services are present whether or not you get sick. Just like fire insurance, the presence of insurance coverage has value to households even if they never use it. All society is doing with a mandate is asking people to pay for the health insurance services they receive rather than relying on others to pay the bill for them.
All in the Family: The Close Connection Between Nominal-GDP Targeting and the Taylor Rule, by Evan F. Koenig: Abstract: The classic Taylor rule for adjusting the stance of monetary policy is formally a special case of nominal-gross-domestic-product (GDP) targeting. Suitably implemented, moreover, nominal-GDP targeting satisfies the definition of a "flexible inflation targeting" policy rule. However, nominal-GDP targeting would require more discipline from policymakers than some analysts think is realistic.
I've been asking about this for some time now. I had viewed nominal GDP targeting as a special case of the Taylor rule (one where the coefficients are set just right), but it's the other way around -- the Taylor rule is the special case:
Note that the Taylor rule is a special case of nominal-GDP targeting... The chief difference between the two policy approaches is that under nominal-GDP targeting, policymakers look at a longer history of price changes than they do under the Taylor rule when deciding on the appropriate policy setting. Secondarily, the estimate of potential output that enters the nominal-GDP-targeting rule is less sensitive to short-term supply shocks than is the estimate that enters the Taylor rule.
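A rough way to see the relationship (my notation, a sketch rather than Koenig's formal result): write an NGDP-targeting rule as

$$i_t = r^* + \pi^* + \phi\,(n_t - n_t^*), \qquad n_t - n_t^* = (p_t - p_t^*) + (y_t - y_t^*),$$

where $n_t$ is log nominal GDP and $n_t^*$ its target path. The rule responds to the cumulative gap between the price level and its target path plus the output gap; a Taylor rule replaces the cumulative price-level gap $p_t - p_t^*$ with the past year's inflation gap $\pi_t - \pi^*$, which is why nominal-GDP targeting looks at a longer history of price changes.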
The last point about temporary supply shocks is important, as I tried to emphasize here (the post talks about why policymakers should not respond to temporary supply shocks under a Taylor rule; I didn't mention nominal GDP targeting as a solution).
Finally:
One might think that nominal-GDP targeting's ability to work around the zero-bound constraint would appeal to monetary policy doves and that its tighter control of inflation expectations would appeal to monetary policy hawks. Why hasn't nominal-GDP targeting received more widespread support? The main issue is credibility.[14] Some analysts are concerned that future FOMCs may fail to follow through on promises of accommodation, while others fear that future FOMCs may back away from nominal-GDP targeting should it call for tighter policy than the current approach. To the extent that the public shares the former concern, an announced shift to nominal-GDP targeting would do little to accelerate the economy's recovery. To the extent that the public shares the latter concern, an announced shift to nominal-GDP targeting might be seen as a relaxation of the Federal Reserve's commitment to price stability rather than an enhancement to that commitment.
This is a post I did a while back for CBS (Nov. 2009):
There's a similarity between used cars and health care. And once you understand the economics of used cars, you may look at health care in a new light.
Let's start with used cars. "The Market for Lemons" by George Akerlof is a famous paper in economics demonstrating how markets can break down when buyers and sellers are differentially informed. For example, suppose that there are 1,001 used cars worth from $0 to $1,000, i.e. one car is worth $0, one is worth $1, the next is worth $2, and so on up to a car valued at $1,000. Assume that the car owners can assess the value of the cars they are selling accurately, but buyers can't discern any difference in quality from examining the cars. That is, sellers are better informed than buyers about the car's quality.
In such a market, a buyer would expect to receive a car of average quality, and the price would settle at $500 (the exact price doesn't matter; all that's required is that the market sets some price below $1,000). But at a price of $500, all the sellers with cars valued from $501 to $1,000 would withdraw their cars from the market, since the price of $500 is less than their cars are worth.
At this point, the only cars left on the market are valued between $0 and $500, and with buyers once again expecting to receive a car of average quality, the price would fall to $250. At this price, all the people with cars valued from $251 to $500 would take their cars off the market, and the cars left on the market would now be valued between $0 and $250.
The process repeats itself, the price drops to $125, more cars drop out, and this continues until there is just one car on the market selling for $0. That is, the market for used cars breaks down.
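The unraveling is mechanical enough to simulate. Here is a minimal Python sketch of the process just described; the values and the naive "price equals average quality" pricing rule are the toy assumptions from the example, not a general model:

```python
# Lemons-market unraveling: 1,001 cars valued $0..$1,000; buyers can't
# observe quality, so the price settles at the average value of the cars
# still offered for sale, and higher-value sellers withdraw each round.
values = list(range(1001))  # one car at each dollar value from $0 to $1,000

while True:
    price = sum(values) / len(values)              # buyers pay average quality
    remaining = [v for v in values if v <= price]  # higher-value sellers exit
    if remaining == values:                        # no one else drops out
        break
    values = remaining

print(f"price converges to ${price:.2f}, {len(values)} car(s) remain")
```

Running it reproduces the story in the text: the price steps down through $500, $250, $125, and so on, until a single $0 car is left.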
The technical term for this is an "adverse selection" problem, and there are many ways to solve it. The buyer can hire a mechanic to determine the value of a car before the purchase; the sellers can offer insurance against the car breaking down; the sellers might have a desire to maintain a reputation for quality (dealers selling cars that fall apart shortly after purchase will lose their reputations and go out of business), and so forth.
What does this have to do with health care? The adverse selection problem is one of the reasons we need an individual mandate for health care insurance (i.e. a requirement that everyone must purchase insurance that is part of the proposed health care reform package).
To explain how the adverse selection problem arises in these markets, note that people purchasing health insurance generally have better information about their health status than the people selling the insurance. If insurance is offered in this market at somewhere near the average cost of care for the group, people will use the superior information they have about their own health status to determine if this is a good deal for them. All of the people expecting to pay less for health care than the price the companies are asking for the insurance will drop out of the market (the young and healthy for the most part; all that is actually needed is that some people are willing to take a chance and go without insurance). With the relatively healthy people dropping out of the insurance pool, the price of insurance must go up, and when it does, more people drop out, the price goes up again, and the result is just like in the used car example above: The market breaks down and nobody (or hardly anybody) can purchase insurance.
But since we do not want people financially ruined or unable to get care when they are struck with a costly health problem, we need health insurance, and that insurance must be distributed over a wide variety of people so that the average cost of care will be affordable. One way to ensure that the pool is broad-based is to require that anyone who might need health care -- i.e. everyone -- purchase health insurance. (For a further discussion of these issues, see here.) In the past, the broad-based pools needed to make insurance work were obtained through a large tax break to induce firms to provide insurance to their employees, combined with a requirement that if the insurance is offered, it must be available to all employees. But the steady erosion in the employer-based system is one of the motivations for reforming the health care system.
Without an individual mandate, the health insurance market is likely to break down due to the adverse selection problem, but such a mandate can place a considerable burden on some households. Thus, while the individual mandate is necessary to make these markets work, it is also necessary to provide subsidies to lower- and middle-class households who wouldn't be able to purchase the insurance without such help.
First, some facts. (i) You can only purchase the Harry Potter e-books from Pottermore. Go to Amazon ... and you are directed to the Pottermore site. You then go through a process of linking your Amazon account but then can download the book straight on to your device.
(ii) You purchase once and you can get the book on any device. And I mean any. Kindle, iPad (through iBooks), Google Play (whatever that is) and Sony who appear to have provided the technical grunt to get this working. There is no other major book that is available this way. Actually, probably no other paid book available this way.
(iii) What about DRM? That is hard to parse. Here is what I know. I ... downloaded ... direct to my computer (in ePub format) and it appears that with that version I can put it on as many readers as I like. The site says I am limited to 8 downloads but once I have that ePub version there does not seem to be any limits.
So what does this mean? The whole concern over eBooks was potential device lock-in. We are worried about being tied to Amazon or Apple or what have you forever. This same thing is preventing entry or inroads by others such as Sony. The Rowling initiative breaks through all of that. It is device independent for the first time in this industry. One can only imagine the negotiations that occurred that allowed that to be possible — particularly with Amazon. Also, I can’t imagine that Amazon or Apple are getting their 30% cuts in this deal but let’s wait and see on that.
The point is that once one author ... can prove all this possible, there is the potential for floodgates to be opened. It will be interesting to see where this leads.
I wrote this before Bernanke's speech on the labor market on Monday. He says, echoing the topic of the column:
Is the current high level of long-term unemployment primarily the result of cyclical factors, such as insufficient aggregate demand, or of structural changes, such as a worsening mismatch between workers' skills and employers' requirements? ... I will argue today that ... the continued weakness in aggregate demand is likely the predominant factor.
So maybe the structural-impediment, inflation-hawk types at the Fed will be vanquished after all. We shall see. [See Tim Duy's comments as well.]
Bernanke, Bullard, and QE3, by Tim Duy: This morning Federal Reserve Chairman Ben Bernanke gave a speech that was apparently identified as proof that QE3 is still in the cards. He argues that while labor markets have shown improvement in recent months, conditions are far from normal. Moreover, he sees the problem of long-term unemployment as largely cyclical, and delivers what many believed to be the money quote:
I will argue today that, while both cyclical and structural forces have doubtless contributed to the increase in long-term unemployment, the continued weakness in aggregate demand is likely the predominant factor. Consequently, the Federal Reserve's accommodative monetary policies, by providing support for demand and for the recovery, should help, over time, to reduce long-term unemployment as well.
In my opinion, to interpret this as a call for additional quantitative easing is a bit of a stretch. It sounds like simply a confirmation that Bernanke believes the current policy stance is appropriate and that the existence of long-term unemployment should not be viewed as a reason to believe that we are closing in on a resource constraint that would necessitate a tightening of the policy stance. I was drawn to a much more nuanced section:
However, to the extent that the decline in the unemployment rate since last summer has brought unemployment back more into line with the level of aggregate demand, then further significant improvements in unemployment will likely require faster economic growth than we experienced during the past year. It will be especially important to evaluate incoming information to assess whether the recovery is picking up as improvements in the labor market feed through to consumer and business confidence; or, conversely, whether the headwinds that have impeded the recovery to date continue to restrain the pace at which the labor market and economic activity normalize.
In essence, Bernanke suggests that the recent rapid improvement in unemployment largely reflects a reversal of the out-sized deterioration experienced during the recession. As such, we should now expect a slower pace of improvement given current growth forecasts. Under such conditions, I believe, Bernanke would push for another round of QE - although that still raises the question of why he doesn't push for more now given the existing forecasts. But he hasn't, so we can only infer that he thinks the costs of additional easing outweigh the benefits.
He leaves open the possibility, however, that labor markets will continue to improve at the recent pace, in which case I think QE3 is off the table. And that is where Federal Reserve President James Bullard steps into the picture. He said pretty much the same thing in a CNBC interview:
"I think QE3 would require the economy to deteriorate somewhat from where it is right now," Bullard said. "The basic story on the U.S. economy is that we've had good news over the last six months or so, especially compared to the recession scenario that was being painted in the August-September time period of last year."
But now, with the Committee on pause, it may be a good time to take stock of whether we may be at a turning point. Many of the further policy actions the Committee might consider at this juncture would have effects extending out for several years. As the U.S. economy continues to rebound and repair, those policy actions may create an overcommitment to ultra-easy monetary policy. The ultra-easy policy has been appropriate until now, but it will not always be appropriate.
The FOMC has often been criticized historically for overstaying policy stances that might have made sense at one juncture but are no longer appropriate as macroeconomic conditions change. This occurs in part because of the lags in the effects of policy, the difficulty in interpreting real-time data, much of which is subsequently revised, and the sheer uncertainty of macroeconomic developments. With numerous monetary policy actions still on the table, and others still affecting the economy with a lag, it may be especially difficult to remove policy accommodation at the appropriate pace and at the appropriate time. One may want to approach such a situation with caution.
This seems to suggest that he is in fact entertaining the possibility that the turning point for policy will occur sooner than expected. My view is that the crux of any disagreement between Bullard and Bernanke is the timing of any tightening. Both would push for additional easing should conditions deteriorate. But Bernanke is willing to leave existing policy in place well into the future, whereas Bullard is looking forward to pulling the trigger on tighter policy sooner rather than later.
This, by the way, is a debate Bernanke would win in the absence of clear indications that tightening is necessary.
Incredibly, in his CNBC interview, Bullard strays into the world of Japanese monetary policy:
"I think one of the biggest mistakes is continue to throw us much more in the way of monetary injections into the economy and with that, you get a much higher increase in commodity prices and potentially produce less global consumption across the world, which slows economic activity down," Bullard said. "I'm afraid that's the real danger just now - that we've maintained too loose of a policy right across the global economy and what results is inflation and reduction in real spending power."
I get this, I really do. But I have come to the conclusion that the Federal Reserve should not consider the reaction functions of other central banks when setting policy. Simply put, this is not the Fed's problem. To the extent that other nations import the Fed's monetary policy, they do so by choice. Bullard continues:
Bullard says he would like to see the Federal Reserve resume a "more normal monetary policy as soon as possible" because the current policy has detrimental effects on the economy.
"It (the policy) punishes savers, for instance, in the economy, it does send a pessimistic signal about the economy and I think that can hurt investment prospects in the U.S.," Bullard said. "But we need to provide the right amount of support for the recovery as we do that, and we need to keep an eye on inflation."
Bottom Line: I do not think Bernanke's speech is a signal that QE3 is guaranteed. But it is a signal that QE3 is definitely not off the table. It is entirely data dependent. The current flow of data does not support additional action. I don't think it would take much of a deterioration to prompt additional action, so if you have a bearish view of the US economy, expect QE3. But if you have a bullish view, don't expect a rapid policy reversal. Bernanke isn't ready to go there.
The skeptics have decided that evidence isn't really evidence -- it's a grand conspiracy of thousands to fool the public -- so no amount of evidence will matter. Nevertheless, this is worth noting:
Global Warming Close to Becoming Irreversible, by Nina Chestney, Scientific American: The world is close to reaching tipping points that will make it irreversibly hotter ... scientists warned on Monday. ... As emissions grow,... the world is close to reaching thresholds beyond which the effects on the global climate will be irreversible ...
For ice sheets - huge refrigerators that slow down the warming of the planet - the tipping point has probably already been passed...
Most climate estimates agree the Amazon rainforest will get drier as the planet warms. Mass tree deaths caused by drought have raised fears it is on the verge of a tipping point, when it will stop absorbing emissions and add to them instead. Around 1.6 billion tons of carbon were lost in 2005 from the rainforest and 2.2 billion tons in 2010, which has undone about 10 years of carbon sink activity...
One of the most worrying and unknown thresholds is the Siberian permafrost, which stores frozen carbon in the soil away from the atmosphere. ... In a worst case scenario, 30 to 63 billion tons of carbon a year could be released by 2040, rising to 232 to 380 billion tons by 2100. This compares to around 10 billion tons of carbon released by fossil fuel use each year.
Increased CO2 in the atmosphere has also turned oceans more acidic as they absorb it. In the past 200 years, ocean acidification has happened at a speed not seen for around 60 million years...
Markets or shareholders?, by Niraj Dawar, INSEAD: There is a fine line between professing free-market capitalism and teaching the subversion of those markets, and it is crossed in business-school classrooms every day. ...
We like to believe in the ideal of free markets because competition, we are convinced, is good for the economy. Competition forces sellers to keep the interests of the buyers at the heart of what they do; competition marginalizes and eliminates inefficient players; and competition for customers and resources spurs innovation... In short, these ideal markets lead to an efficient allocation of the economy’s resources, making us all better off in the long term.
If there is one principle that informs business school curricula, it is the belief in the efficiency and inherent goodness of free markets.
But there is another principle that contends for the title, and that is the belief that the goal of a business organization is the maximization of shareholder value. ... This is a worthy goal... Businesses that aim to maximize shareholder value in competitive markets will use the economy’s resources efficiently.
In a real economy – one that is not your textbook picture-perfect market – the maximization of shareholder value is most efficiently achieved by exploiting market imperfections..., companies get into the business of creating and maintaining regulatory wrinkles so that they can continue to exploit them... Firms that push for government protection in the form of trade barriers, longer patent life, or more global application of patents are attempting to keep competitors out. This type of lobbying for protection and favorable regulation undermines markets in many industries in many countries, including telecoms, banking, airlines, energy, infrastructure, pharmaceuticals, etc. ...
And business schools often end up supporting the erection of regulatory barriers to entry. In other words, at the same time as we profess a reverence for the markets, we’re teaching the subversion of freer markets. ...
Restoring society’s eroding faith in capitalism is not something that will happen overnight. Alleviating popular skepticism of business schools and their graduates may take even longer. But a good place for business schools to start is with some soul searching about where their allegiance resides: with efficient markets in the service of society, or with the creation of market inefficiencies in the service of oligopolies?
(Amusing as it may be to watch, the theater of having MBAs take oaths and participate in ring ceremonies is not going to restore society’s faith in business schools).
This problem won't be solved from within, i.e. by hoping that businesses will suddenly drop behaviors that lead to increased profits. It's the institutions surrounding markets that must adjust.
I did my best to defend New Keynesianism against the New Monetarist assault, but it's lonely being a New Keynesian in St. Louis (as Randy Wright let me know at every opportunity). But it was a fun visit -- thanks David!:
Bloggers in St. Louis, by David Andolfatto: Another eventful week at work (last week). Two coauthors in town (Fabrizio Mattesini and Randy Wright) and three seminars (Nicolas Trachter, Mario Crucini, and Fatih Guvenen). Well, four seminars, I guess. Narayana Kocherlakota was in town to deliver the Hyman P. Minsky lecture at Washington University. Oh, and Mark Thoma also gave an interesting seminar on how bloggers have helped (and harmed) the nature of economic discussions/debates. Fascinating stuff all around.
Mark was visiting the St. Louis Fed all last week (at my invitation). Of course, I knew that keeping him away from Steve Williamson was going to be a problem. And I was right. Here they are at the Kocherlakota event, with me trying to break up their fight:
My two favorite bloggers coming to blows
Things calmed down after I agreed to buy them both a beer; see here:
The increasing "corporatization of our political life" has far reaching consequences:
Lobbyists, Guns and Money, by Paul Krugman, Commentary, NY Times: Florida’s now-infamous Stand Your Ground law, which lets you shoot someone you consider threatening..., sounds crazy — and it is. And it’s tempting to dismiss this law as the work of ignorant yahoos. But similar laws have been pushed across the nation, not by ignorant yahoos but by big corporations.
Specifically, language virtually identical to Florida’s law is featured in a template supplied to legislators in other states by the American Legislative Exchange Council...
What is ALEC? Despite claims that it’s nonpartisan, it’s very much a movement-conservative organization, funded by the usual suspects: the Kochs, Exxon Mobil, and so on. Unlike other such groups, however, it doesn’t just influence laws, it literally writes them, supplying fully drafted bills to state legislators. ...
Many ALEC-drafted bills pursue standard conservative goals: union-busting, undermining environmental protection, tax breaks for corporations and the wealthy. ALEC seems, however, to have a special interest in privatization — ...turning ... public services, from schools to prisons, over to for-profit corporations. And some of the most prominent beneficiaries of privatization ... are ... very much involved with the organization.
What this tells us ... is that ALEC’s claim to stand for limited government ... is deeply misleading. To a large extent the organization seeks not limited government but privatized government, in which corporations get their profits from ... taxpayer dollars steered their way by friendly politicians. In short, ALEC ... is about expanding crony capitalism. ...
Yet that’s not all; you have to think about the interests of the penal-industrial complex — prison operators, bail-bond companies and more. ... This complex has a financial stake in anything that sends more people into the courts and the prisons, whether it’s exaggerated fear of racial minorities or Arizona’s draconian immigration law, a law that followed an ALEC template...
Think about that: we seem to be turning into a country where crony capitalism doesn’t just waste taxpayer money but warps criminal justice, in which growing incarceration reflects not the need to protect law-abiding citizens but the profits corporations can reap from a larger prison population.
Now, ALEC isn’t single-handedly responsible for the corporatization of our political life... But shining a light on ALEC and its supporters — a roster that includes many companies, from AT&T and Coca-Cola to UPS, that have so far managed to avoid being publicly associated with the hard-right agenda — is one good way to highlight what’s going on. And that kind of knowledge is what we need to start taking our country back.
Mr Shirakawa’s first point is that loose monetary policy mitigates the pain as balance sheets are repaired, but reduces the incentive to repair them quickly -- not just for the private sector but for governments as well. However, he also suggests that the effectiveness of loose policy may fall over time, as households that weren't damaged by the crisis bring forward such spending as they want to.
The first sentence sounds like a rehashing of the "liquidationist" approach. We should let the economy collapse rather than provide support during balance sheet adjustment. The second part suggests that there is only so much spending that can be brought forward via low interest rates. But I think this is not really a novel idea, as we pretty much know that the effectiveness of monetary policy fades at the zero bound:
In this case, I drew the LM curve as a (dotted) horizontal line at a positive real interest rate, suggesting an economy at the zero nominal bound with deflation. Yes, the effectiveness of low interest rates waned as the zero bound was approached. At this point, if the BoJ wanted to induce additional spending, they would need to make a credible commitment to a higher inflation target. In other words, the effectiveness of monetary policy did not fade unexpectedly - it is exactly what you would expect given the zero bound problem.
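The arithmetic behind that point, in standard notation (not specific to the post): with the nominal rate $i$ and expected inflation $\pi^e$, the real interest rate satisfies

$$r = i - \pi^e \;\ge\; -\pi^e \quad \text{when } i \ge 0,$$

so with expected deflation ($\pi^e < 0$) the real rate is stuck above zero no matter how low the nominal rate goes, and only a credible commitment to higher expected inflation can push it down further.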
The second concern is that a low interest rate environment is hurting potential growth. This time, Shirakawa:
“If low interest rates induce investment projects that are only profitable at such interest rate levels, this could have an adverse impact on productivity and growth potential of the economy by making resource allocation inefficient. While central banks have typically conducted monetary policy by treating a potential growth rate as exogenously given, when the economy is under prolonged shocks arising from balance-sheet repair, we may have to take into account the risk that a continuation of low interest rates will affect the productivity of the overall economy and lower the potential growth rate endogenously.”
I don't think this makes any sense at all (neither does Harding). I could see a problem if low-productivity projects were funded instead of high-productivity projects, but presumably the latter would still be funded first in any event. In other words, I don't see that the low interest rate environment by itself would alter the composition of investment. And if we cut off funding for the less productive investments, the capital stock would grow more slowly, and that would certainly reduce potential growth. Note that this is separate from the worry that government investment is displacing private investment; government investment is compensating for the lack of private investment. If anything, Japan needs lower real interest rates to support higher levels of private investment and lessen its dependence on fiscal deficits. That, too, is a result of the zero bound problem.
The third concern is aptly handled by Harding:
Point three is the more standard argument that flattening the yield curve too far for too long will undermine the profitability of the financial sector.
Again, while I am sympathetic to the financial market consequences of low interest rate environments, the central bank is really just following the economy down. In the absence of sufficient economic activity to pull longer term rates up, if the Bank of Japan raised rates they would simply be inverting the yield curve - and I don't see that as positive for the financial sector.
The final concern is almost laughable:
“Even though such a rise in commodity prices is affected by globally accommodative monetary conditions, individual central banks recognise that the fluctuation in commodity prices is an exogenous supply shock and focus on core inflation rates which exclude the prices of energy-related items and food. The resulting reluctance of individual central banks to counter rising commodity prices, when aggregated globally, could further boost these prices. From a global perspective, such a situation represents nothing more than a case where a hypothetical “World Central Bank” fails to satisfy the Taylor principle, which ensures the stability of global headline inflation. While it is understandable that the central banks would pursue the stability of their own economies in the conduct of monetary policy, it is increasingly important to take into account the international spillovers and feedback effects on their own economies.”
First, Shirakawa is making the error of thinking the Federal Reserve targets core inflation. They do not, and they have made that clear. They target headline inflation, but use core inflation as a guide to the direction of headline inflation. Shirakawa implies that while core inflation is tame, headline is running wild - and that simply is not true:
The path of headline inflation is actually on a lower trend compared to before the recession. Moreover, please explain how headline inflation is causing such a problem for the Bank of Japan:
Finally, as Harding notes, if you don't like US monetary policy, don't import it.
Bottom Line: I would be cautious about taking lessons from Shirakawa. The concerns about the low interest rate environment raise a still-unanswered question: What is the alternative for monetary policy? To hike short-term interest rates? It sounds like Shirakawa's "concerns" are more excuses for an ongoing monetary policy failure on the part of the Bank of Japan.
We were challenged on the proper estimate of the multiplier μ and challenged (quite rightly) on our guesses of the hysteresis parameter η, the share of a current downturn that is the shadow cast on future potential output. We were challenged by those saying that America's debt capacity should not be used now but should be kept dry and ready for some future crisis in which using it could do much more good than it would now. And we were challenged by those who think that the U.S. is on the edge not just of losing the exorbitant privilege that allows it to borrow enormous amounts at negative real interest rates, but of seeing a complete revolution in interest rates that pushes the rates at which the Treasury can borrow up into high single or double digits.
But on our basic arithmetic we were not challenged: it is that fiscal policy in a depressed economy is self-financing as long as

r − g ≤ ημτ / (1 − μτ)

where r is the real Treasury borrowing rate--take the nominal Treasury borrowing rate and subtract 2%--g is the growth rate, which is 2.5%; τ is the fraction of GDP that shows up as increased tax revenues and reduced social-insurance transfers, which is 1/3; μ is the (debatable) standard Keynesian multiplier when monetary policy is at the zero lower bound; and η is the (largely unknown) hysteresis shadow a long, deep depression casts on future potential output.
As we, at least, see it, one can be highly confident that the depressed-economy standard Keynesian multiplier at the zero lower bound, μ, is greater than 1.0, and substantially confident that the hysteresis shadow, η, cast by a long, deep depression is greater than 0.05.
The arithmetic means that over the past four years fiscal policy has been self-financing, and it would remain self-financing unless the real Treasury borrowing rate were rapidly going to exceed 5%--unless the nominal Treasury rate were rapidly going to burst through 7% heading upwards. It has simply never been at such levels save for a short span of years during and immediately after the Volcker disinflation.
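To make the arithmetic concrete, here is a minimal Python sketch--illustrative only, using exactly the values quoted above--that recovers the 5% real-rate threshold; since μ and η enter as lower bounds, any higher values only raise that threshold.

```python
# Self-financing condition for fiscal expansion in a depressed economy:
#   r - g <= (eta * mu * tau) / (1 - mu * tau)
# All parameter values are the ones quoted in the text above.

def real_rate_threshold(mu, eta, tau, g):
    """Highest real Treasury borrowing rate at which a debt-financed
    fiscal expansion still pays for itself."""
    return g + (eta * mu * tau) / (1 - mu * tau)

mu = 1.0      # Keynesian multiplier at the zero lower bound (lower bound)
eta = 0.05    # hysteresis shadow on future potential output (lower bound)
tau = 1 / 3   # share of GDP recouped as taxes and reduced transfers
g = 0.025     # real growth rate

r_max = real_rate_threshold(mu, eta, tau, g)
print(f"Self-financing as long as r <= {r_max:.1%}")  # -> 5.0%
```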
Thus even if you think that the United States is on the edge of some kind of fiscal crisis and may be about to transit from a low interest rate "confidence in government" to a high interest rate "panic" equilibrium, increases in debt-service loads relative to GDP from fiscal austerity are more likely to shock the system into the bad equilibrium than are policies of fiscal expansion which over the past four years would--exceptionally and extraordinarily--have, for once, by the arithmetic, paid for themselves.
What caused the financial crisis, low interest rate policies of central banks or global imbalances, i.e. the savings glut? This research finds that central banks are not to blame:
The global financial crisis – What caused the build-up?, by Erlend W Nier and Ouarda Merrouche, Vox EU: With all eyes currently on the latest twists being played out in the Eurozone, the global financial crisis, now in its fifth year, appears alive and well. But there is still little agreement on the underlying macroeconomic causes of the build-up of financial imbalances that unwound so dramatically since the summer of 2007.
Some blame central bankers for keeping policy rates ‘too low for too long’ in the early part of the last decade (eg Taylor 2007 and White 2009). According to this view, in the US, the Federal Reserve had cut rates sharply in response to the collapse of the stock market boom of the 1990s and then kept rates low into 2004, thereby sowing the seeds of the subsequent boom and bust. The European version of this argument is that low real short-term rates implied by the ECB’s one-size-fits-all monetary policy fuelled rapid credit extension and house-price bubbles in peripheral countries, such as Ireland and Spain, that would have benefitted from tighter monetary policy (eg Ahearne et al 2008).
Others (eg Bernanke 2010 and King 2010) point to rapidly increasing global financial flows in the early part of the decade, which resulted in part from growing current-account imbalances (‘global imbalances’). According to this explanation, a number of countries, including the US, Spain, Ireland, Portugal, and Greece ran unusually wide current-account deficits that were funded by the surplus countries (emerging Asia and Germany), thereby setting off strong cross-border capital flows (Suominen 2010). Proponents of this view argue that it was these flows that led to falling long-term yields in recipient countries, stimulating demand for credit in these countries even as policymakers started to tighten short-term rates.
Who is right? As a first pass, Figures 1–3 document:
Policy rates that hovered below Taylor rates for much of the early part of the decade, on average for the OECD,
The increasing dispersion of current-account imbalances, and
The striking compression of the spread between long rates and short rates, across the OECD, especially since 2004.
Figure 1. Average OECD country monetary-policy stance (1999–2007)
Source: Ahrend et al (2008).
Note: This chart plots the average monetary-policy stance across OECD countries during 1999–2007. The monetary-policy stance is measured as the policy rate deviation from the Taylor rule benchmark.
Figure 2. Dispersion of current-account balances among OECD countries (1999–2007)
Source: Authors’ calculations using the IMF IFS statistics.
Note: This chart plots the cross-sectional dispersion of current accounts across OECD countries, as measured by its standard deviation.
Figure 3. Average spread between long-term and short-term rates, OECD countries (1999–2007)
Source: Authors’ calculations from the OECD Source Database.
Note: This chart plots the average spread between long-term and short-term rates across OECD countries. The long-term rate is the ten-year government bond rate. The short-term rate is the three-month rate.
While these charts suggest that both explanations might have merit, more rigorous examination is needed to clarify the extent to which monetary policy on the one hand and growing global imbalances on the other might have played a role in fuelling the build-up of financial sector imbalances ahead of the crisis.
Our research (Merrouche and Nier 2011) does just that. Specifically, it examines whether differences in the time path of these variables across countries result in differences in the time path of various measures of financial imbalances in OECD countries over the 1999–2007 period.
Ours is the most comprehensive analysis to date, and it employs a range of empirical measures to ensure the robustness of our findings.
Our main, and standard, measure of the monetary-policy stance is the deviation of the policy rate from the rate suggested by a contemporaneous Taylor rule. In addition, we look at the length of time that the policy rate has stayed below the Taylor rate, to capture the phenomenon of rates that were ‘too low for too long’, as well as simpler measures, such as the real short-term rate.
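As a rough illustration of how such a stance measure might be constructed--the article does not spell out its exact Taylor rule specification, so the coefficients, the neutral rate, and the data in this sketch are all assumptions:

```python
import pandas as pd

def taylor_rate(inflation, output_gap, r_star=2.0, pi_target=2.0):
    # Textbook Taylor (1993) benchmark; the paper's exact
    # specification is not given, so these coefficients are assumed.
    return r_star + inflation + 0.5 * (inflation - pi_target) + 0.5 * output_gap

# Hypothetical quarterly data for one country (percent).
df = pd.DataFrame({
    "policy_rate": [1.00, 1.00, 1.25, 1.50],
    "inflation":   [2.5, 2.7, 2.9, 3.0],
    "output_gap":  [0.5, 0.8, 1.0, 1.2],
})

# Stance measure: deviation of the policy rate from the Taylor benchmark.
df["taylor_gap"] = df["policy_rate"] - taylor_rate(df["inflation"], df["output_gap"])

# 'Too low for too long': length of the current run of quarters below the rule.
below = df["taylor_gap"] < 0
df["quarters_below"] = below.astype(int).groupby((~below).cumsum()).cumsum()

print(df)
```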
Our primary measure of the global imbalances hypothesis is a country’s current account as a share of GDP. This measure is used as a proxy for net capital flows, where a current-account deficit corresponds to net capital inflows. We also employ the spread between long rates and short rates as an alternative measure. Indeed we show empirically that (lagged) capital inflows are the main driver of this spread.
Since there is no universally accepted empirical measure of ‘financial imbalances’, we employ a range of measures. First, we use the ratio of banking-sector credit to core deposits. This captures well an increased reliance by intermediaries on volatile wholesale funding to expand their balance sheet ahead of the crisis, as documented in Figure 4. Moreover, recent independent research by Caprio et al documents that the ratio of credit to deposits is highly predictive of whether a country suffered a systemic crisis in 2008. In addition, we use a range of alternative indicators, including the ratio of credit to GDP, the ratio of household debt to GDP, and house-price appreciation.
Figure 4. Ratio of bank credit to bank deposits, average across OECD countries (1999–2007)
Source: Authors’ calculations from the World Bank Financial Development and Structure Database (2008).
Note: This chart plots the average ratio of bank credit to deposits across OECD countries.
Finally, since macroeconomic developments are likely to have interacted with weak supervision of the financial sector ahead of the crisis, we employ measures of the strength of supervision and regulation, sourced from a World Bank database. This allows for a powerful test, since we would expect a strong interaction between the strength of supervision and potential macroeconomic drivers only where the macroeconomic factor was causal for the build-up.
This empirical analysis yields the following key results:
We find that cross-country differences in net capital inflows can account for differences between countries in the build-up of financial imbalances, as measured by the ratio of banking-sector credit to core deposits. By contrast we do not find that differences in the monetary-policy stance had an effect on the build-up of financial imbalances when capital flows are accounted for.
We further document that the compression of the spread between long rates and short rates was an important mechanism through which rising global imbalances had an effect on the balance-sheet expansion sourced in wholesale funding markets. Where long rates declined more relative to short rates, banks appear to have taken on extra leverage.
The strength of the build-up was not driven by macroeconomic factors alone. It was less pronounced where supervisory and resolution powers were relatively strong. Indeed we find that strong supervisory powers have tended to dampen the effect of capital inflows and falling long-term rates on the expansion of bank balance sheets ahead of the crisis.
Our main findings on the relative importance of external imbalances versus monetary policy carry through when we employ alternative measures of financial imbalances, including the ratio of credit to GDP, the ratio of household debt to GDP, and the increase in house prices over the period. In each case, the strength of net capital inflows, rather than the monetary-policy stance, re-emerges as the key determinant of differences in the growth of financial imbalances across OECD countries over the pre-crisis period.
Overall, our findings lend strong support to the conjecture that “[c]apital flows provided the fuel which the developed world’s inadequately designed and regulated financial system then ignited to produce the firestorm that engulfed us all” (King 2010).
Our findings also have important implications for policy. They underscore the importance of a concerted effort to encourage a rebalancing of the global economy, as is being discussed by the G20. They also point to the need for stronger prudential control of the financial system. In particular, inadequate prudential policies failed to address systemic vulnerabilities associated with excessive use of wholesale funding. Finally, we document that the period ahead of the crisis coincided with low policy rates globally. However, we provide comprehensive evidence that sizable differences in the path of monetary policy across countries did not appear to affect the strength of the build-up of financial imbalances. This cautions against a major re-orientation of monetary-policy frameworks in response to the crisis.
Bernanke, Ben S (2010), “Monetary Policy and the Housing Bubble”, Remarks at the Annual Meeting of the American Economic Association, 3 January, Atlanta, Georgia.
Taylor, John B (2007), “Housing and Monetary Policy”, Federal Reserve Bank of Kansas City, 2007 Symposium.
White, William R (2009), “Should Monetary Policy ‘Lean or Clean’?”, Globalization and Monetary Policy Institute Working Paper, Federal Reserve Bank of Dallas.
...The target of all versions of fair trade is “free trade,” and the most damaging attacks on FAIRTRADE have come from free traders. In Unfair Trade, a pamphlet published in 2008 by the Adam Smith Institute, Mark Sidwell argues that FAIRTRADE keeps uncompetitive farmers on the land, holding back diversification and mechanization. According to Sidwell, the FAIRTRADE scheme turns developing countries into low-profit, labor-intensive agrarian ghettos, denying future generations the chance of a better life.
This is without considering the effect that FAIRTRADE has on the poorest people in these countries – not farmers but casual laborers – who are excluded from the scheme by its expensive regulations and labor standards. In other words, FAIRTRADE protects farmers against their rivals and against agricultural laborers.
Consumers, Sidwell argues, are also being duped. Only a tiny proportion – as little as 1% – of the premium that we pay for a FAIRTRADE chocolate bar will ever make it to cocoa producers. Nor is FAIRTRADE necessarily a guarantee of quality: because producers get a minimum price for fair-trade goods, they sell the best of their crop on the open market.
But, despite its shaky economics, the fair-trade movement should not be despised. While cynics say that its only achievement is to make consumers feel better about their purchases – rather like buying indulgences in the old Catholic Church – this is to sell fair trade short. In fact, the movement represents a spark of protest against mindless consumerism, grass-roots resistance against an impersonal logic, and an expression of communal activism.
That justification will not convince economists, who prefer a drier sort of reasoning. But it is not out of place to remind ourselves that economists and bureaucrats need not always have things their own way.
Not sure how much time I'll have -- I'm traveling today and have to meet a deadline along the way -- so let me turn the conversation over to someone who might know a bit about this topic, Paul Krugman:
What Should Trade Negotiators Negotiate About? A Review Essay, by Paul Krugman: If economists ruled the world, there would be no need for a World Trade Organization. The economist's case for free trade is essentially a unilateral case - that is, it says that a country serves its own interests by pursuing free trade regardless of what other countries may do. Or as Frederic Bastiat put it, it makes no more sense to be protectionist because other countries have tariffs than it would to block up our harbors because other countries have rocky coasts. So if our theories really held sway, there would be no need for trade treaties: global free trade would emerge spontaneously from the unrestricted pursuit of national interest. (Students of international trade theory know that there is actually a theoretical caveat to this statement: large countries have an incentive to limit imports - and exports - to improve their terms of trade, even if it is in their collective interest to refrain from doing so. This "optimal tariff" argument, however, plays almost no role in real-world disputes over trade policy.)
Fortunately or unfortunately, however, the world is not ruled by economists. The compelling economic case for unilateral free trade carries hardly any weight among people who really matter. If we nonetheless have a fairly liberal world trading system, it is only because countries have been persuaded to open their markets in return for comparable market-opening on the part of their trading partners. Never mind that the "concessions" trade negotiators are so proud of wresting from other nations are almost always actions these nations should have taken in their own interest anyway; in practice countries seem willing to do themselves good only if others promise to do the same.
But in that case why should the tits we demand in return for our tats consist only of trade liberalization? Why not demand that other countries match us, not only in what they do at the border, but in internal policies? This question has been asked with increasing force in the last few years. In particular, environmental advocates and supporters of the labor movement have sought with growing intensity to expand the obligations of WTO members beyond the conventional rules on trade policy, making adherence to international environmental and labor standards part of the required package; meanwhile, business groups have sought to require a "level playing field" in terms of competition policy and domestic taxation. Depending on your point of view, the idea that there must be global harmonization of standards on employment, environment, and taxation is either the logical next step in global trade negotiations or a dangerous overstepping of boundaries that threatens to undermine all the progress we have made so far.
In 1992 Columbia's Jagdish Bhagwati (one of the world's leading international trade economists) and Robert E. Hudec (an experienced trade lawyer and former official now teaching at Minnesota) brought together an impressive group of legal and economic experts in a three-year research project intended to address the new demands for an enlarged scope of trade negotiations. Fair Trade and Harmonization: Prerequisites for Free Trade? (Cambridge MA: MIT Press, 1996) is the result of that project. This massive two-volume collection of papers is unavoidably a bit repetitious. One also wonders why only economists and lawyers were involved - what happened to the political scientists? (More on that later). But the volumes contain a number of first-rate papers and offer a valuable overview of the debate.
In this essay I will not try to offer a comprehensive review of the papers; in particular I will give short shrift to those on competition and tax policy. Nor will I try to deal with the quite different question of how much coordination of technical standards - e.g. health regulations on food (remember the Eurosausage!), or safety regulations on consumer durables - is essential if countries are to achieve "deep integration". Instead, I will try to sort through what seem to be the main issues raised by new demands for international labor and environmental standards.
The economics and politics of free trade
In a way, the most interesting paper in the Bhagwati-Hudec volumes is interesting precisely because the author seems not to understand the logic of the economic case for free trade - and in his incomprehension reveals the dilemmas that practical free traders face. Brian Alexander Langille, a Canadian lawyer, points out correctly that domestic policies such as subsidies and regulations may influence a country's international trade just as surely as explicit trade policies such as tariffs and import quotas. Why then, he asks, should trade negotiations stop with policies explicitly applied at the border? He seems to view this as a deep problem with economic theory, referring repeatedly to the "rabbit hole" into which free traders have fallen.
But the problem free traders face is not that their theory has dropped them into Wonderland, but that political pragmatism requires them to imagine themselves on the wrong side of the looking glass. There is no inconsistency or ambiguity in the economic case for free trade; but policy-oriented economists must deal with a world that does not understand or accept that case. Anyone who has tried to make sense of international trade negotiations eventually realizes that they can only be understood by realizing that they are a game scored according to mercantilist rules, in which an increase in exports - no matter how expensive to produce in terms of other opportunities foregone - is a victory, and an increase in imports - no matter how many resources it releases for other uses - is a defeat. The implicit mercantilist theory that underlies trade negotiations does not make sense on any level, indeed is inconsistent with simple adding-up constraints; but it nonetheless governs actual policy. The economist who wants to influence that policy, as opposed to merely jeering at its foolishness, must not forget that the economic theory underlying trade negotiations is nonsense - but he must also be willing to think as the negotiators think, accepting for the sake of argument their view of the world.
What Langille fails to understand, then, is that serious free-traders have never accepted as valid economics the demand that our trade liberalization be matched by comparable market-opening abroad; and so they are not being inconsistent in rejecting demands for an extension of such reciprocity to domestic standards. If economists are sometimes indulgent toward the mercantilist language of trade negotiations, it is not because they have accepted its intellectual legitimacy but either because they have grown weary of saying the obvious or because they have found that in practice this particular set of bad ideas has led to pretty good results.
One way to answer the demand for harmonization of standards, then, is to go back to basics. The fundamental logic of free trade can be stated a number of different ways, but one particularly useful version - the one that James Mill stated even before Ricardo - is to say that international trade is really just a production technique, a way to produce importables indirectly by first producing exportables, then exchanging them. There will be gains to be had from this technique as long as world relative prices differ from domestic opportunity costs - regardless of the source of that difference. That is, it does not matter from the point of view of the national gains from trade whether other countries have different relative prices because they have different resources, different technologies, different tastes, different labor laws, or different environmental standards. All that matters is that they be different - then we can gain from trading with them.
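A minimal numeric sketch of Mill's point, with made-up numbers, may help:

```python
# Trade as a production technique, with made-up numbers: at home, one
# airplane costs 100 units of clothing in forgone production; on world
# markets, one airplane exchanges for 150 units of clothing.

domestic_cost = 100   # clothing units forgone per airplane made at home
world_price = 150     # clothing units per airplane on world markets

airplanes_exported = 10

clothing_via_trade = airplanes_exported * world_price        # 1,500 units
clothing_made_at_home = airplanes_exported * domestic_cost   # 1,000 units

print("Gain from 'producing' clothing via trade:",
      clothing_via_trade - clothing_made_at_home)  # 500 units

# Note: nothing here depends on WHY the world price is 150 -- superior
# efficiency, low wages, or lax standards. Only the difference between
# the world price and domestic opportunity cost matters for the gains.
```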
This way of looking at things, among its other virtues, offers an en passant refutation of the instinctive feeling of most non-economists that a country that imposes strong environmental or labor standards will necessarily experience difficulties when it trades with other countries that are not equally high-minded. The point is that all that matters for the gains from trade are the prices at which you trade - it makes absolutely no difference what forces lie behind those prices. Suppose your country has been cheerfully exporting airplanes and importing clothing in return, believing that the comparative advantage of your trading partners in clothing is "fairly" earned through exceptional productive efficiency. Then one day an investigative journalist, hot in pursuit of Kathie Lee Gifford, reveals that the clothing is actually produced in 60-cent-an-hour sweatshops that foul the local air and water. (If they hurt the global environment, say by damaging the ozone layer, that is another matter - but that is not the issue). You may be outraged; but the beneficial trade you thought you had yesterday has not become any less economically beneficial to your country now that you know that it is based on these objectionable practices. Perhaps you want to impose your standards on these matters, but this has nothing to do with trade per se - and there are worse things in the world than low wages and local pollution to excite our moral indignation.
This back-to-basics case for rejecting calls for harmonization of standards is elaborated in two of the papers in Volume 1 of Bhagwati-Hudec: a discussion of environmental standards by Bhagwati and T.N. Srinivasan, and a discussion of labor standards by Drusilla Brown, Alan Deardorff, and Robert Stern. In each case the central theme is that neither the ability of a country to impose such standards nor its benefits from so doing depend in any important way on whether other countries do the same; so why not leave countries free to choose?
Bhagwati and Srinivasan also raise two other arguments on behalf of a laissez-faire approach to standards, arguments echoed by several other authors in the volume. The first is that nations may legitimately have different ideas about what is a reasonable standard. (The authors quote one environmentalist who asserts that "geopolitical boundaries should not override the word of God who directed Noah" to preserve all species, then drily note that "as two Hindus ... we find this moral argument culture-specific"). Moreover, even nations that share the same values will typically choose different standards if they have different incomes: advanced-country standards for environmental quality and labor relations may look like expensive luxuries to a very poor nation. Second, to the extent that nations for whatever reason choose different environmental standards, this difference, like any difference in preferences, actually offers not a reason to shun international trade but an extra opportunity to gain from such trade. It is very difficult to be more explicit about this without being misrepresented as an enemy of the environment - an excerpt from the entirely sensible memo along these lines that Lawrence Summers signed but did not write at the World Bank a few years ago is reprinted in my copy of The 776 Stupidest Things Ever Said - so it is left as an exercise for readers.
The back-to-basics argument against harmonization of standards, then, is completely consistent and persuasive. And yet it is also somehow unsatisfying. Perhaps the problem is that we know all too well how little success economists have had in convincing policymakers of the case for unilateral free trade. Why, then, should we imagine that restating that case yet again will be an effective argument against the advocates of international harmonization of standards? Confronted with the failure of the public to buy the classical case for free trade, and unwilling simply to preach the truth to each other, trade economists have traditionally followed one of two paths. Some try to give the skeptics the benefit of the doubt, attempting to find coherent models that make sense of their concerns. Others try to make sense not of the skeptics' ideas but of their motives, attempting to seek guidance from models of political economy. The same two paths are followed in these volumes, with several papers following each approach.
Second-best considerations and the "race to the bottom"
The general theory of the second best tells us that if incentives are distorted in some markets, and for some reason these distortions cannot be directly addressed, policies in other markets should in principle take the distortions into account. For example, environmental economists have become sensitized to the likely interactions between pollution fees - designed to correct one distortion of incentives - and other taxes, which have nothing to do with environmental issues but which, because they distort incentives to work, save, and invest, may crucially affect the welfare evaluation of any given environmental policy.
There is a long history of protectionist arguments along second-best lines. (Among Jagdish Bhagwati's seminal contributions to international trade theory was, in fact, his work showing that many critiques of free trade are really second-best arguments - and that the first-best response rarely involves protection). Here's an easy one: suppose that an industry generates negative environmental externalities that are not properly priced, and that international trade leads to an expansion of that industry in your country. Then that trade may indeed reduce national welfare (although of course trade may equally well have the opposite effect: it may cause your country to move out of "dirty" into "clean" industries, and thereby lead to large welfare gains). However, the advocates of international environmental and labor standards seem to be offering a more subtle argument. They seem to be claiming that an environmental (or labor) policy that would raise welfare in a closed economy - or that would raise world welfare if implemented by all countries simultaneously - will reduce national welfare if implemented unilaterally. Thus the independent actions of national governments in the absence of international standards on these issues can lead to a "race to the bottom", with global standards far too lax.
What sort of model might justify this fear? In an extremely clear paper in Volume 1, John D. Wilson gives the issue his (second) best shot, showing that international competition for capital - in a world in which the social return to capital exceeds its private return, for example due to capital taxation - could do the trick. Other things being the same, tighter environmental or labor regulation will presumably decrease the rate of return on investments, and thus any country which has a pre-existing tendency to attract too little capital will have an incentive to avoid such regulations; whereas a collective, international decision to impose higher standards would not lead to capital flight, since the capital would have nowhere to go.
Is this a clinching argument? Not necessarily. For one thing, like all second-best arguments it is very sensitive to tweaking of its assumptions. As Wilson points out, capital importation may have adverse as well as positive effects, especially from the point of view of an environment-conscious country. In that case a positive rate of taxation is appropriate - and if the actual rate of taxation is too low, countries may adopt excessively strong environmental standards in a "race to the top". If this seems implausible, Wilson reminds us of the NIMBY (not in my backyard) phenomenon in which no local jurisdiction is willing to be the site for facilities the public collectively needs to locate somewhere.
Even if you regard a race to the bottom as more likely than one to the top, there is still the question of whether such second-best arguments are really very important. This is doubtful, especially where environmental standards are concerned. The alleged impact of such standards on firms' location decisions looms large in the demands of activists who want these standards harmonized. But the chapter by Arik Levinson, surveying the evidence, finds little reason to think that international differences in these standards actually have much effect on the global allocation of capital.
So while it is possible to devise second-best models that offer some justification for demands for harmonization of standards, these models - on the evidence of this collection, at any rate - do not seem particularly convincing. The classical case for laissez-faire on national economic policies is surely not precisely right, but it does not seem wrong enough to warrant the heat now being generated over the issue of harmonization. Simply pointing this out, however, while important, does not make the phenomenon go away. So it is at least equally important to try to understand the political impulse behind demands for harmonization, and in particular to ask whether the political economy of standard-setting offers some indirect rationale for insisting on harmonization of such standards.
The political economy of standards
Consider - as Brown, Deardorff, and Stern do - a single industry, small enough to be analyzed using partial equilibrium, in which a country is considering imposing a new environmental or labor regulation that will raise production costs. As they point out, if the costs of the regulation are less than the social costs imposed by the industry in its absence, then it is worth doing regardless of whether other countries follow suit. But the distribution of gains between producers and consumers does depend on whether the action is unilateral or coordinated. If one country imposes a costly regulation while others do not, the world price will remain unchanged and all of the burden will fall on producers; if many countries impose the regulation, world prices will rise and some of the burden will be shifted to consumers.
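A small sketch with made-up linear world supply and demand curves makes the incidence point concrete:

```python
# Who bears the cost of a regulation that raises unit production cost by c?
# With linear world supply (P = a + b*Q) and demand (P = d - e*Q), a cost
# increase adopted by ALL producers shifts supply up by c and raises the
# world price by c * e / (b + e). If only one small country regulates,
# the world price is unchanged and its producers absorb the full cost.
# The slopes below are made up for illustration.

b, e = 1.0, 1.0   # slopes of world supply and demand (assumed)
c = 10.0          # unit cost of the new standard

consumer_share = c * e / (b + e)   # price rise when adoption is coordinated
producer_share = c - consumer_share

print(f"Coordinated: consumers bear {consumer_share:.1f}, "
      f"producers bear {producer_share:.1f} of the {c:.0f} cost")
print(f"Unilateral (small country): producers bear the full {c:.0f}")
```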
So what? Well, it is a fact of life, presumably rooted in the public-goods character of political action, that trade policy tends to place a much higher weight on producers than on consumers. So even though the national welfare case for the regulation is not weakened at all by the fact that the good is traded, the practical political calculus of getting the regulation implemented could quite possibly depend on whether other countries agree to do the same. This suggests an alternative version of the "race to the bottom" story. The problem, one might argue, is not that countries have an incentive to set standards too low in a trading world. Rather, it is that politicians, who respond to the demands of special-interest groups, have such an incentive. And one might argue that this failure of the political market, rather than distortions in goods or factor markets, is what justifies demands for international harmonization of standards.
An environmentalist or defender of workers' rights might also make a related argument. He or she might say "You know that countries aren't in a zero-sum competition, and I know that they aren't, but the public and the politicians think they are - and industry lobbies consistently use that misconception as an argument against standards that we ought to have. So we need to set those standards internationally in order to neutralize that bogus but effective political ploy". It is very difficult for trade economists to reject this line of argument on principle. After all, it is very close to the reason why free-traders who know that the economic case for liberal trade is essentially unilateral are nonetheless usually staunch defenders of the GATT: trade negotiations may be based on a false theory, but by setting exporters as counterweights to producers facing import competition they nonetheless are politically crucial to maintaining more or less free trade. That is, the true purpose of international negotiations is arguably not to protect us from unfair foreign competition, but to protect us from ourselves. (When the United States recently imposed utterly indefensible restrictions on Mexican tomato exports, an Administration official remarked off the record that Florida has a lot of electoral votes while Mexico has none. The economically correct rebuttal to this sort of thing is to point out that the other 49 states contain a lot of pizza lovers; the politically effective answer is to subject US-Mexican trade to a set of rules and arbitration procedures in which the Mexicans do too have a vote).
While one cannot dismiss such political-economy arguments as foolish, however, the problem is to know where to stop. Here is where it would have been useful to hear from some political scientists, who might be able to tell us more about when international negotiations over standards are likely to improve domestic policies, and when they are likely simply to serve as a cover for protectionist motives. But while I would have liked to see an analysis from that point of view, much of the legal analysis that occupies Volume 2 of the Bhagwati-Hudec books does shed light on the problem.
Standards and the rule of law
Economists pronounce on legal matters at their peril: law, even international trade law, is a discipline all its own, with a jargon just as impenetrable to us as ours is to them. Let me therefore tread cautiously in interpreting the arguments here. As I understand it, the problem involved in defining the limits of fair trade is not too different from that of defining the limits of free speech. Take it as a given that countries can do things that are perceived to be economically harmful to other countries - it does not necessarily matter whether this perception is correct. Which of these things can realistically be prohibited, and which should be tolerated? The answer is a matter of degree. The fellow at the next table who insists on talking loudly to his partner about marketing is annoying, but one cannot reasonably ask the law to do anything about him; the person who shouts "Fire" in a crowded theater is something else again.
Where does one draw the line in international economic relations? The prevailing principle of international law derives from the 17th-century Peace of Westphalia, which ended the Thirty Years' War by establishing the rule that states may do whatever they like (such as imposing the sovereign's religion) within their borders - only external relations are the proper concern of the international community. By this principle labor law, or environmental policies that do not spill across borders, should be off limits.
Now in practice we do not always honor the principle of the hard-shell Westphalian state. We are sometimes willing to impose sanctions or even invade to protect human rights. Even in trade negotiations it is an understood principle that if a country de facto undoes its trade concessions with domestic policies - for example, offsetting a tariff cut with an equal production subsidy - it is considered to have failed to honor its agreement. But while borders are fuzzier in legal practice than they are on a map, the structure of trade negotiations is still basically Westphalian. The demand for harmonization of standards is, in effect, a demand that this should change.
We have seen that the strictly economic case for that demand is fairly weak, but there may be a stronger case on grounds of political economy. But what do the legal experts say? The general answer, as I understand it, is that they don't think it is a good idea. A lucid chapter by Frieder Roessler grants that the political argument for harmonization has some force, but concludes that to give in to it would open up too wide a range of potential complaints, much the same as would happen if I were allowed to sue people whose words annoy rather than actually slander me. Other authors, such as Virginia Leary and Robert Hudec himself, seem to have a similar point of view, suggesting only that nations might want to enter into specific environmental and labor agreements that would then be enforced by the same institutions that enforce trade agreements. (One essay, however, a piece by Daniel Gifford and Mitsuo Matsushita on competition policy, seems more economistic than the economists: it argues that the international acceptability of competition policies should be judged on whether they seem likely, or at least motivated by the desire, to enhance efficiency).
To an economist, at least, the legal case here seems fairly similar to the economic case for trade negotiations. We have a purist principle: unilateral free trade, the Westphalian state. We recognize based on experience that it is useful to compromise that principle a bit, so that we work with mercantilists rather than simply castigating them and allow a bit of international meddling in internal affairs. But while a bit of pragmatism is allowed, the principle remains there; and it is not a good idea to stray too far. On the evidence of these volumes, then, the demand for harmonization is by and large ill-founded both in economics and in law; realistic political economy requires that we give it some credence, but not too much. Unfortunately, that will surely not make the issue go away. Expect many more, equally massive volumes to come.
David Altig, research director at the Atlanta Fed, looks at how close labor markets are to being "normalized":
Why we debate, by David Altig: It's been a while since we featured one of my favorite charts—a "bubble graph" comparing average monthly job changes during this recovery with average changes during the previous recovery, sector by sector.
If you try, it isn't too hard to see in this chart a picture of a labor market that is very close to "normalized," excepting a few sectors that are experiencing longer-term structural issues. First, most sectors—that is, most of the bubbles in the chart—lie above the horizontal zero axis, meaning that they are now in positive growth territory for this recovery. Second, most sector bubbles are aligning along the 45-degree line, meaning jobs in these areas are expanding (or in the case of the information sector, contracting) at about the same pace as they were before the "Great Recession." Third, the exceptions are exactly what we would expect—employment in the construction, financial activities, and government sectors continues to fall, and the manufacturing sector (a job-shedder for quite some time) is growing slightly.
For the skeptics, I offer below a familiar chart, which traces the level of total employment pre- and post-December 2007, compared with the average path of pre- and post-recession employment for the previous five downturns:
We are now more than 16 quarters past the beginning of the recession that began in the fourth quarter of 2007, and total employment is still 4 percent lower than it was at the beginning of the downturn. In the previous five recessions, by the time 16 quarters had passed, employment had increased by about 6 percent. Even in the worst case, indicated by the lower edge of the gray shaded area, employment growth was flat—and that observation is qualified by the fact that the recovery from the 1980 recession was interrupted by the 1981–82 recession.
This unhappy comparison is not driven by the construction, financial activities, and government sectors. In the area of professional and business services, which has logged the largest average monthly employment gains in the current recovery, the number of jobs still sits 2.7 percent below the level at the outset of the last recession, as the chart below shows.
Total private-sector jobs in education and health services, which never actually contracted during the recession, nonetheless remain abnormally low in historical context.
In these charts lies the crux of some very basic disagreements about the appropriate course of policy. The last three graphs draw a clear picture of labor markets that are underperforming by historical standards—a position that I take to be the conventional wisdom. An argument against following that conventional wisdom centers on the question of whether historical standards represent the appropriate yardstick today. In other words, is the correct reference point the level of employment or the pace of improvement in the labor market from a permanently lower level? For the proponents of the latter view, the bubble chart might very well look like a return to normal, despite the fact that employment has not returned to prerecession levels.
One way to adjudicate the debate, in theory, is to rely on the trajectory of inflation. If there remains a significant amount of slack in labor markets, as the conventional interpretation of things suggests, there ought to be consistent downward pressure on prices. But the case for consistent downward pressure on prices is not so obvious: measured inflation appears to be moving in the direction of the Federal Open Market Committee's 2 percent long-term objective.
Also, the Atlanta Fed's own monthly survey of business inflation expectations, which surveys a panel of businesses from our Reserve Bank district, indicates that this inflation number (shown in our March release from earlier this week) is in line with what private-sector decision makers anticipate:
"Survey respondents indicated that, on average, they expect unit costs to rise 2.0 percent over the next 12 months. That number is up from 1.9 percent in February and comparable to recent year-ahead inflation forecasts of private economists. Firms also reported that their unit costs had risen 1.8 percent compared to this time last year, which is unchanged from their assessment in February. Inflation uncertainty, as measured by the average respondent's variance, declined from 2.8 percent in February to 2.4 percent in March, the lowest variance since the survey was launched in October 2011."
Does that settle it? Not quite. There may not be much evidence of building disinflationary pressure, but neither is there building evidence of an inflationary push that you would expect to see if the economy were bumping up against capacity constraints. Obviously, the story isn't over yet.
Here are some graphs from my presentation yesterday to the St. Louis Fed (the talk was trying to convince them to start a blog along the lines of what David Altig did in Cleveland, so the main theme was not the graphs below). The graphs show what happens to GDP after a financial crisis. In some cases the effects seem permanent, in others they appear temporary. What I'd like to do next is figure out if there are any systematic differences between the countries that experience permanent versus temporary effects that can be used to understand why they have such different outcomes. Is it the type of shock? The policy response? Institutional differences? And so on (source of graphs - the vertical blue line marks the start of the crisis):
US after the Great Depression
Hong Kong
Colombia
Spain
Sweden
South Korea
Philippines
Indonesia
Argentina
Japan
Norway
Thailand
Malaysia
Finland
One more note. If you had looked at this graph (from The Economist, the one on the left), you would likely conclude that the fall in GDP for Sweden is permanent:
Sweden and Korea
That looks a lot like the US right now. But if you extend the graph for a few years, the picture changes dramatically:
Mitt Romney is "helping to further a dangerous trend":
Paranoia Strikes Deeper, by Paul Krugman, Commentary, NY Times: Stop, hey, what’s that sound? Actually, it’s the noise a great political party makes when it loses what’s left of its mind. And it happened — where else? — on Fox News on Sunday, when Mitt Romney bought fully into the claim that gas prices are high thanks to an Obama administration plot.
This claim isn’t just nuts; it’s a sort of craziness triple play — a lie wrapped in an absurdity swaddled in paranoia. ...
First, the lie: No, President Obama did not say, as many Republicans now claim, that he wanted higher gasoline prices. ... The claim ... is a lie, pure and simple. And it’s a lie wrapped in an absurdity, because the president ... doesn’t control gasoline prices, or even have much influence over those prices. ...
Finally, there’s the paranoia, the belief that liberals in general, and Obama administration officials in particular, are trying to make driving unaffordable as part of a nefarious plot against the American way of life. And, no, I’m not exaggerating. This is what you hear even from thoroughly mainstream conservatives. ...
And it’s not just gas prices..., the conspiracy theories are proliferating so fast it’s hard to keep up. Thus, large numbers of Republicans — ...important political figures... — firmly believe that global warming is a gigantic hoax ... involving thousands of scientists... Meanwhile, others are attributing the recent improvement in economic news to a dastardly plot to withhold stimulus funds, releasing them just before the 2012 election. And let’s not even get into health reform.
Why is this happening? ... Naturally, people who constantly hear about the evil that liberals do are ready and willing to believe that everything bad is the result of a dastardly liberal plot. And these are the people who vote in Republican primaries. But what about the broader electorate?
If and when he wins the nomination, Mr. Romney will try, as a hapless adviser put it, to shake his Etch A Sketch — that is, to erase the record of his pandering to the crazy right and convince voters that he’s actually a moderate. And maybe he can pull it off.
But let’s hope that he can’t, because the kind of pandering he has engaged in during his quest for the nomination matters. Whatever Mr. Romney may personally believe, the fact is that by endorsing the right’s paranoid fantasies, he is helping to further a dangerous trend in America’s political life. And he should be held accountable for his actions.
I've been trying to figure out whether the Fed's declaration that it would maintain exceptionally low rates through late 2014 represents a conditional or unconditional statement. That is, if the economy improves faster than expected, will the Fed raise rates prior to that time? Or will it honor this as a firm commitment that is independent of the actual evolution of the economy?
The statement clearly leaves wiggle room -- if the Fed wants out of the commitment, the language is there. But I have the impression that the public views it as a firm, unconditional commitment, and if the Fed backs away it will be seen as breaking a promise (i.e., lose credibility).
Apparently, I'm not the only one who is unsure about this. This is from Jeffrey R. Campbell, Charles L. Evans, Jonas D.M. Fisher, and Alejandro Justiniano (Charles Evans is the president of the Chicago Fed). They look at the effectiveness and viability of the two types of forward guidance, and conclude that a firm commitment with an escape clause specified as a specific rule (e.g., won't raise rates until unemployment falls below 7% or inflation expectations rise above 3%) can work well:
Macroeconomic Effects of FOMC Forward Guidance, by Jeffrey R. Campbell, Charles L. Evans, Jonas D.M. Fisher, and Alejandro Justiniano, March 14, 2012, Conference Draft: 1 Introduction Since the onset of the financial crisis, Great Recession and modest recovery, the Federal Reserve has employed new language and tools to communicate the likely nature of future monetary policy accommodation. The most prominent developments have manifested themselves in the formal statement that follows each meeting of the Federal Open Market Committee (FOMC). In December 2008 it said "the Committee anticipates that weak economic conditions are likely to warrant exceptionally low levels of the federal funds rate for some time." In March 2009, when the first round of large scale purchases of Treasury securities was announced, "extended period" replaced "some time." In the face of a modest recovery, the August 2011 FOMC statement gave specificity to "extended period" by anticipating exceptionally low rates "at least as long as mid-2013." The January 2012 FOMC statement lengthened the anticipated period of exceptionally low rates even further to "late 2014." These communications are referred to as forward guidance.
The nature of this most recent forward guidance is the subject of substantial debate. Is "late 2014" an unconditional promise to keep the funds rate at the zero lower bound (ZLB) beyond the time policy would normally involve raising the federal funds rate? ... Alternatively, is "late 2014" simply conditional guidance based upon the sluggish economic activity and low inflation expected through this period? ...
Our paper sheds light on these issues and the potential role of forward guidance in the current policy environment. Motivated by the competing interpretations of "late 2014," we distinguish between two kinds of forward guidance. Odyssean forward guidance changes private expectations by publicly committing the FOMC to future deviations from its underlying policy rule. Circumstances will tempt the FOMC to renege on these promises precisely because the policy rule describes its preferred behavior. Hence this kind of forward guidance resembles Odysseus commanding his sailors to tie him to the ship's mast so that he can enjoy the Sirens' music.
All other forward guidance is Delphic in the sense that it merely forecasts the future. Delphic forward guidance encompasses statements that describe only the economic outlook and typical monetary policy stance. Such forward guidance about the economic outlook influences expectations of future policy rates only by changing market participants' views about likely outcomes of variables that enter the FOMC's policy rule. ...
The monetary policies elucidated by Krugman (1999), Eggertsson and Woodford (2003) and Werning (2012) rely on Odyssean forward guidance, and these have inspired several policy proposals for providing more accommodation at the ZLB. The more aggressive policy alternatives proposed include Evans's (2012) state-contingent price-level targeting, nominal income-targeting as advocated by Romer (2011), and conditional economic thresholds for exiting the ZLB proposed by Evans (2011). These proposals' benefits depend on the effectiveness of FOMC communications in influencing expectations. Fortunately, there exists historical precedent with which we can assess whether FOMC forward guidance has actually had an impact. The FOMC has been using forward guidance implicitly through speeches or explicitly through formal FOMC statements since at least the mid-1990s. Language of one form or another describing the expected future stance of policy has been a fixture of FOMC statement language since May 1999. The first part of this paper uses data from this period as well as from the crisis period to answer two key questions. Do markets listen? When they do listen, do they hear the oracle of Delphi forecasting the future or Odysseus binding himself to the mast?
Our examination of whether markets are listening to forward guidance builds on prior work... We find results that are similar to, if not even stronger than, those of Gurkaynak et al. (2005). That is, we confirm that during and after the crisis, FOMC statements have had significant effects on long-term Treasuries and also on corporate bonds, and that these effects appear to be driven by forward guidance.
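For intuition, a stylized version of this kind of event study might look like the sketch below. The file names and column label are hypothetical, and the paper's actual method (following Gurkaynak et al. 2005) extracts factors from a panel of futures contracts rather than differencing a single one:

```python
import pandas as pd

# Hypothetical inputs: a daily series of futures-implied expected policy
# rates, and a list of FOMC statement dates.
futures = pd.read_csv("ff_futures_daily.csv", parse_dates=["date"],
                      index_col="date")
fomc_days = pd.read_csv("fomc_statement_dates.csv",
                        parse_dates=["date"])["date"]

# One-day change in the 6-month-ahead implied rate, kept only on
# statement days: the 'surprise' component of each FOMC communication.
changes = futures["implied_rate_6m"].diff()
surprises = changes[changes.index.isin(fomc_days)]

# If markets listen, these statement-day changes should be unusually
# dispersed relative to ordinary trading days.
print(surprises.describe())
print(changes[~changes.index.isin(fomc_days)].describe())
```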
Studying federal funds futures rates on the days FOMC statements are released identifies forward guidance, but does not disentangle its Odyssean and Delphic components. ... To answer our second key question, we develop a framework for measuring forward guidance, based on a traditional interest rate rule, that identifies only Odyssean forward guidance. ... We highlight two results here. First, the FOMC telegraphs most of its deviations from the interest rate rule at least one quarter in advance. Second, Odyssean forward guidance successfully signaled that monetary accommodation would be provided much more quickly than usual, and taken back more quickly, during the 2001 recession and its aftermath. Overall, our empirical work provides evidence that the public has at least some experience with Odyssean forward guidance, so monetary policies that rely upon it should not appear entirely novel.
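As a stylized illustration of the identification idea (not the authors' estimated model), one can think of Odyssean guidance as deviations from a Taylor-type rule that the public learns about in advance:

```python
# A stylized sketch, not the authors' estimated model: Odyssean guidance
# modeled as deviations from a Taylor-type rule that were announced in
# earlier quarters, so the public sees them coming.
def policy_rate(r_star, pi, pi_target, output_gap, announced_deviations):
    """Taylor (1993)-style rule plus previously announced deviations."""
    rule = r_star + pi + 0.5 * (pi - pi_target) + 0.5 * output_gap
    return max(0.0, rule + sum(announced_deviations))  # respect the ZLB

# Example: the rule alone would call for 1.75 percent, but guidance
# announced a quarter earlier commits the FOMC to 100 bp of extra
# accommodation, so the rate is 0.75 percent.
print(policy_rate(r_star=2.0, pi=1.5, pi_target=2.0,
                  output_gap=-3.0, announced_deviations=[-1.0]))
```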
The second part of the present paper investigates the consequences of the Odyssean forward guidance put in place with the "late 2014" statement language. On the one hand, this language resembles the policy recommendations of Eggertsson and Woodford (2003) and could be the right policy for an economy struggling to emerge from a liquidity trap. On the other hand, there are legitimate concerns that this forward guidance places the FOMC's mandated price stability goal at risk. We consider the plausibility of these clashing views by forecasting the path of the economy with the present forward guidance and subjecting that forecast to two upside risks: higher inflation expectations and faster deleveraging. ...
Evans (2011) has proposed conditioning the FOMC's forward guidance on outcomes of unemployment and inflation expectations. His proposal involves the FOMC announcing specific conditions under which it will begin lifting its policy rate above zero: either unemployment falling below 7 percent or expected inflation over the medium term rising above 3 percent. We refer to this as the 7/3 threshold rule. It is designed to maintain low rates even as the economy begins expanding on its own (as prescribed by Eggertsson and Woodford (2003)), while providing safeguards against unexpected developments that may put the FOMCs price stability mandate in jeopardy. Our policy analysis suggests that such conditioning, if credible, could be helpful in limiting the inflationary consequences of a surge in aggregate demand arising from an early end to the post-crisis deleveraging.
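A minimal sketch of the 7/3 threshold logic, under the liftoff conditions exactly as stated above:

```python
# A minimal sketch of the 7/3 threshold rule described above: hold the
# policy rate at zero until unemployment falls below 7 percent or
# medium-term expected inflation rises above 3 percent.
def hold_at_zero(unemployment_rate, expected_inflation):
    """True while the 7/3 conditions say to keep the funds rate at zero."""
    return unemployment_rate >= 7.0 and expected_inflation <= 3.0

for u, pi_e in [(8.5, 2.0), (7.2, 3.4), (6.8, 2.1)]:
    action = "hold at zero" if hold_at_zero(u, pi_e) else "begin liftoff"
    print(f"u = {u}%, expected inflation = {pi_e}%: {action}")
```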
But... It might not prove that popular with Republican voters. ...
In a March 3-6 YouGov poll, respondents were asked whether they wanted to increase spending, decrease spending, or keep spending the same in 16 different policy areas. In only one case, foreign aid, did a majority (72%) of Americans want to cut spending. And some of the most costly programs -- like Social Security, Medicare, and Medicaid -- attracted some of the least opposition.
What about when we isolate the views of the Republican base...? Consider the figure below, which compares responses among all respondents and Republican primary voters:
Only 17% of Republican primary voters wanted to cut spending on health research. Only 20% wanted to cut Social Security or Medicare. In fact, only 32% wanted to cut Medicaid and only 36% wanted to cut aid to the poor. Majorities of GOP primary voters were willing to cut only four things: unemployment benefits, spending on housing, spending on the environment, and foreign aid.
The same basic story emerges among supporters of different Republican presidential candidates [graph]. ...
Even though Ryan's budget may be dead on arrival, don't expect Republican voters to mourn its passing.
If you asked the same GOP voters about cutting the benefits of other people, benefits they believe are financed almost entirely from the taxes they pay, I expect the answer would be different.
In the week ending March 17, the advance figure for seasonally adjusted initial claims was 348,000, a decrease of 5,000 from the previous week's revised figure of 353,000. The 4-week moving average was 355,000, a decrease of 1,250 from the previous week's revised average of 356,250.
The previous week was revised up to 353,000 from 351,000. ...
The ongoing decline in initial weekly claims is good news. Even in "good times" weekly claims are usually just above 300 thousand, and claims are getting there.
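As a quick check of the report's arithmetic: the 4-week average fell by 1,250, so the week that rolled out of the window must have been 4 × 1,250 = 5,000 above the new week's figure:

```python
# Backing out the week that rolled off the 4-week window from the
# figures reported above.
new_week = 348_000
avg_change = 355_000 - 356_250           # the average fell by 1,250
dropped_week = new_week - 4 * avg_change
print(dropped_week)                      # 353,000, the week rolling off
```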
Amartya Sen's commitments, by Dan Little: A recent post examined the Akerlof and Kranton formalization of identity within a rational choice framework. It is worth considering how this approach compares with Amartya Sen's arguments about "commitments" in "Rational Fools" (link).
Sen's essay is a critique of the theory of narrow economic rationality to the extent that it is thought to realistically describe real human deliberative decision-making. He chooses Edgeworth as a clear expositor of the narrow theory: "the first principle of Economics is that every agent is actuated only by self interest" (Sen 317, quoting Mathematical Psychics). Sen notes that real choices don't reflect the maximizing logic associated with rational choice theory: "Choice may reflect a compromise among a variety of considerations of which personal welfare may be just one" (324). Here he argues for the importance of "commitments" in our deliberations about reasons for action. Acting on the basis of commitment is choosing to do something that leads to an outcome that we don't subjectively prefer; it is acting in a way that reflects the fact that our actions are not solely driven by egoistic choice. "Commitments" are other-regarding considerations that come into the choices that individuals make.
Sen distinguishes between sympathy and commitment:
The former corresponds to the case in which the concern for others directly affects one's own welfare. If the knowledge of torture of others makes you sick, it is a case of sympathy; if it does not make you feel personally worse off, but you think it is wrong and you are ready to do something to stop it, it is a case of commitment. (326)
The characteristic of commitment with which I am most concerned here is the fact that it drives a wedge between personal choice and personal welfare, and much of traditional economic theory relies on the identity of the two. (329)
Sen thinks that John Harsanyi made an advance on the narrow conception of rationality by introducing discussion of two separate preference orderings that are motivational for real decision-makers: ethical preferences and subjective preferences. (This is in "Cardinal Welfare, Individualistic Ethics, and Interpersonal Comparisons of Utility".) But Sen rightly points out that this construction doesn't give us a basis for choosing when the two orderings dictate incompatible choices. Sen attempts to formalize the idea of a commitment as a second-order preference ordering: a ranking of rankings. "We need to consider rankings of preference rankings to express our moral judgments" (337).
Can one preference ordering do all these things? A person thus described may be "rational" in the limited sense of revealing no inconsistencies in his choice behavior, but if he has no use for these distinctions between quite different concepts, he must be a bit of a fool. The purely economic man is indeed close to being a social moron. Economic theory has been much preoccupied with this rational fool decked in the glory of his one all-purpose preference ordering. To make room for the different concepts related to his behavior we need a more elaborate structure. (335-336)
Here is an example. "I wish I liked vegetarian foods more" is an example of a second-order preference ranking: it indicates a rational preference for the first-order ranking in which the vegetarian option comes ahead of the lamb option over the ranking in which these options are reversed. And Sen's point is an important one: the second-order ranking can be behaviorally influential. I may choose the vegetarian option, not because I prefer it, but because I prefer the world arrangement in which I go for the vegetarian option. Or in other words, one's principles or commitments may trump one's first-order preferences.
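For concreteness, here is a minimal encoding of a "ranking of rankings" using the vegetarian example. The representation is illustrative, not Sen's formalism:

```python
# An illustrative encoding of a second-order preference ordering. Choice
# follows the first-order ranking that the second-order (moral) ranking
# puts on top, not the agent's raw appetite.
first_order = {
    "appetite":   ["lamb", "vegetarian"],   # what I actually prefer to eat
    "commitment": ["vegetarian", "lamb"],   # the preference I wish I had
}
second_order = ["commitment", "appetite"]   # ranking the rankings themselves

def choose(options, first_order, second_order):
    """Pick the top available option under the top-ranked first-order ranking."""
    governing = first_order[second_order[0]]
    return next(o for o in governing if o in options)

print(choose({"lamb", "vegetarian"}, first_order, second_order))
# -> "vegetarian", even though "appetite" alone would pick lamb
```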
Sen's thinking on this subject was developed in part through a conference organized by Stephan Körner on practical reason in the 1970s (Practical Reason: Papers and Discussions). This is significant because it focuses attention on a very basic fact: we don't yet have good theories of how a variety of considerations -- ethical principles, personal identities, feelings of solidarity, reasoning about fairness, and self-interest -- get aggregated into decisions in particular choice circumstances.
Other economists might object to this formulation on the grounds that second-order preference rankings are more difficult to model; we don't get clean, simple mathematical representations of behavior if we introduce this complication. Sen acknowledges the point:
Admitting behavior based on commitment would, of course, have far-reaching consequences on the nature of many economic models. I have tried to show why this change is necessary and why the consequences may well be serious. Many issues remain unresolved, including the empirical importance of commitment as a part of behavior, which would vary, as I have argued, from field to field. I have also indicated why the empirical evidence for this cannot be sought in the mere observation of actual choices, and must involve other sources of information, including introspection and discussion. (341-342)
But his reply is convincing. There are substantial parts of ordinary human activity that don't make sense if we think of rationality as egoistic maximization of utility. Collective action, group mobilization, religious sacrifice, telling the truth, and working to the fullest extent of one's capabilities are all examples of activity where narrow egoistic rationality would dictate different choices than those ordinary individuals are observed to make. And yet ordinary individuals are not irrational when they behave this way. Rather, they are reflective and deliberative, and they have reasons for their actions. So the theory of rationality needs to have a way of representing this non-egoistic reasonableness. This isn't the only way that moral and normative commitments can be incorporated into a theory of rational deliberation; but it is one substantive attempt to do so, and is more satisfactory (for me, anyway) than the construction offered by Akerlof and Kranton.
(I also like the neo-Kantian approach taken by Tom Nagel in The Possibility of Altruism as an effort to demonstrate that non-egoistic reasoning is rational.)
“Stimulus was supposed to be quick. In fact, they never intended to spend it and will not completely have effectively spent it until after the president’s re-elect. Always looking at how do you get the maximum hit when the president was up for re-elect.” — Rep. Darrell Issa (R-Calif.), chairman of the House Oversight and Government Reform Committee, March 19, 2012
This is a pretty serious charge by a senior member of the House of Representatives, made on “Fox and Friends” earlier this week. ... We immediately thought he must have some damning evidence that his investigators had turned up. But when we asked for more information,... his staff could not provide much...
Did the law purposely hold back funds so there could be a “maximum hit” during the president’s re-election campaign? We could not find any evidence to support this claim. ...
I find it amusing that Issa is asserting that the stimulus worked, and worked so well that it can be used to strategically manipulate the election.
Given that he believes stimulus can be that effective, can we count on his vote if another round of stimulus comes up, or will he put politics ahead of people and vote no? We know how he voted on the ARRA:
I joined 187 of my colleagues in voting against this bill on January 28, 2009. The $825 billion economic stimulus package proposed by President Barack Obama, House Speaker Pelosi and Senate Majority Leader Reid is unlikely to be a catalyst for economic growth. ...
Since he thinks stimulus works to help people, and he voted against it, it seems he is the one putting politics first. But as you might have guessed, he doesn't really think the stimulus package worked:
Instead of injecting new life into the economy, the bill will support a massive growth of government ... and miss an opportunity to reinvigorate the economy
Is that what he means by getting a "maximum hit when the president was up for re-elect," that "massive growth of government" is the key to reelection? More likely he was blowing it out of his, uhm, ears.
Another reason to attack the unemployment problem aggressively -- unemployment causes foreclosures:
The Changing Face of Foreclosures, by Joshua Abel and Joseph Tracy, Liberty Street: The foreclosure crisis in America continues to grow, with more than 3 million homes foreclosed since 2008 and another 2 million in the process of foreclosure. President Obama, in his speech of February 2, 2012, argued for expanded refinancing opportunities for homeowners and programs to expedite the transition of foreclosed homes into rental housing. In this post, we document the changing face of foreclosures since 2006 and the transformation of the crisis from a subprime mortgage problem to a prime mortgage problem owing to the housing bust and persistent high unemployment. Recognizing this change is critical because the design of housing policies should reflect the types of homeowners who are at risk of foreclosure today rather than those who were at risk at the onset of the financial crisis.
It is well known that problems with nonprime lending helped to spark the housing crisis, which was a catalyst for the financial crisis and ensuing recession. Also well known is the progressive erosion of underwriting standards in nonprime lending toward the end of the housing boom. As a result, many nonprime loans were made to borrowers who did not have the ability to pay for them, especially if house prices did not continue to increase. Not surprisingly, then, as house prices began to flatten and decline in 2006, foreclosure starts were dominated by nonprime borrowers. As shown in our first chart, nonprime borrowers accounted for about 65 percent of foreclosure starts in 2006. However, as the financial crisis led to the Great Recession (indicated in grey), the composition of borrowers entering foreclosure shifted quite dramatically. By 2009, prime borrowers had eclipsed nonprime borrowers as the dominant source of new foreclosures. In fact, from 2009 until the present, prime borrowers have accounted for the majority of all new foreclosure starts. A fairly steady 10 percent of foreclosure starts were associated with mortgages guaranteed by the Federal Housing Administration or the Department of Veterans Affairs.
What accounts for the dramatic change in the composition of foreclosure starts since 2006? Our next chart shows two important economic factors that have affected homeowners over this period—house prices and unemployment. For each mortgage that enters foreclosure, we calculate the percentage change in metropolitan area house prices from the time that the mortgage was originated to the time it entered foreclosure. We report average changes across all new foreclosures by year and quarter. From 2006 through 2008, as the share of new foreclosures was shifting from nonprime to prime borrowers, we see that initially foreclosures involved properties that were on average still increasing in value (as measured by the positive cumulative change in the metro area house price index defined above), but then shifted to properties with declining house prices (in 2008), and eventually to properties where on average house prices had declined by 20 percent (in 2009). In fact, since 2009, properties entering foreclosure have continued to face a 20 percent decline in value on average.
We also calculate the change in the local (defined as the metropolitan area) unemployment rate. Just as foreclosure starts were initially associated with properties whose value was still rising, so foreclosures in 2006 and 2007 were linked to local labor markets where the unemployment rate was still declining. In 2008, however, foreclosures shifted to markets where unemployment was beginning to rise and, in 2009, to markets where unemployment had increased on average by more than 2 percentage points. In 2010, foreclosure starts occurred in markets where the increase in the local unemployment rate exceeded 5 percentage points on average since the mortgages were originated. The shift in the composition of new foreclosures from borrowers with nonprime mortgages to those with prime mortgages reflects the fact that falling house prices and rising unemployment tend to impact all borrowers in a local housing market, not just nonprime borrowers. As a result, traditionally safe borrowers began falling behind on their payments as they felt the severe effects of the housing bust and high unemployment.
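For readers who want to see the mechanics, here is a sketch of the two calculations just described, assuming a hypothetical loan-level file with the metro house price index (HPI) and unemployment rate recorded at origination and at foreclosure start. Column names are illustrative; the authors use Lender Processing Services data:

```python
# A sketch of the calculations described above, on a hypothetical
# loan-level file; all file and column names are illustrative.
import pandas as pd

loans = pd.read_csv("foreclosure_starts.csv",
                    parse_dates=["orig_date", "fc_date"])

# Cumulative percent change in metro house prices, and the change in the
# local unemployment rate, from origination to foreclosure start.
loans["hpi_change_pct"] = 100 * (loans["hpi_at_fc"] / loans["hpi_at_orig"] - 1)
loans["unemp_change_pp"] = loans["unemp_at_fc"] - loans["unemp_at_orig"]

# Average across all new foreclosures by year-quarter of foreclosure start.
by_quarter = (loans.groupby(loans["fc_date"].dt.to_period("Q"))
                   [["hpi_change_pct", "unemp_change_pp"]]
                   .mean())
print(by_quarter.tail())
```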
In the design of housing policy, an important consideration is the extent to which foreclosures result from situations where borrowers cannot afford their mortgage from the outset. In these circumstances, foreclosures can be viewed as the market process for removing borrowers who should not have been approved for a mortgage in the first place or who cannot sustain their mortgage going forward. When affordability is the key determinant of foreclosures, policies aimed at reducing the flow into foreclosure run the risk of slowing an adjustment process necessary for an eventual housing market recovery. A useful metric for the ability of a borrower to afford a mortgage is the “debt-to-income” (DTI) ratio. This measures the cost of the mortgage (monthly payments, property taxes, and homeowner’s insurance) relative to the borrower’s income. Unfortunately, because the data that we use from Lender Processing Services do not consistently report the DTI ratio, we cannot assess this affordability measure across time for foreclosure starts.
However, we provide an alternative indirect measure of affordability. The basic idea is that in cases where a borrower cannot afford a mortgage from the outset, payment problems are likely to materialize sooner rather than later. In the chart below, we look at the time between the origination of a mortgage and the beginning of the string of missed payments that ultimately led to foreclosure. We show the 25th percentile (25 percent of the times were shorter, P25), the median (50 percent of the times were shorter, P50), and the 75th percentile (75 percent of the times were shorter, P75). Initially, when most foreclosure starts were associated with nonprime mortgages, 25 percent of the borrowers had been in the house fewer than eight months before falling behind on their payments, and 50 percent fewer than eighteen months. However, more recently, as the composition of foreclosures shifted to prime borrowers, 75 percent had been in the house more than three years, and 50 percent more than four years. This suggests that as the recession hit, foreclosures shifted from borrowers who often could not afford their houses to borrowers who had demonstrated that they could (by virtue of making payments for several years) but began to fall behind on their payments when they were hit by the dual crises of house price declines and high unemployment.
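The percentile calculation described above could be sketched as follows, again with illustrative column names:

```python
# A sketch of the percentile calculation described above; delinq_date is
# the start of the string of missed payments that led to foreclosure.
import pandas as pd

loans = pd.read_csv("foreclosure_starts.csv",
                    parse_dates=["orig_date", "delinq_date", "fc_date"])

# Months from origination to the first missed payment.
loans["months_to_delinq"] = (
    (loans["delinq_date"] - loans["orig_date"]).dt.days / 30.44)

# P25 / P50 / P75 by year of foreclosure start.
percentiles = (loans.groupby(loans["fc_date"].dt.year)["months_to_delinq"]
                    .quantile([0.25, 0.50, 0.75])
                    .unstack())
print(percentiles)
```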
This change in the face of foreclosures is mirrored in many other dimensions. Our last chart shows the evolution in the distribution of origination credit (FICO) scores over time for new foreclosures. In 2006, 25 percent of foreclosure starts were associated with borrowers who had a credit score of 580 or below at the time they took out the mortgage, and 50 percent had credit scores of 620 or below. However, by 2009, as the recession set in and shifted the mix of foreclosures to prime borrowers, 50 percent of new foreclosures had origination credit scores of nearly 680, and 25 percent had credit scores of 720 or higher.
Nonprime lending during the housing boom was concentrated in what were called “exotic” mortgages with little down payment, initial “teaser” rates and, in some cases, negative amortization. However, since 2010, 65 percent of foreclosure starts have been associated with borrowers who took out thirty-year fixed-rate amortizing mortgages (viewed by consumer advocates as the “safest” mortgage product)—up from 40 percent early in the crisis. Similarly, the prime borrowers who have entered foreclosure in the past several years have on average made a meaningful down payment of 20 percent.
A large foreclosure pipeline hangs over U.S. housing markets, creating headwinds for housing market recovery. What began as a nonprime mortgage problem has evolved into a prime mortgage problem with the onset of the recession. The inability to afford a home has been replaced by declining house prices and high unemployment as the primary driver of new foreclosures. Clearly, these changes have implications for the design of housing policy: By recognizing the shifting face of foreclosures, policymakers can make more informed choices about the most effective forms of intervention and the groups of borrowers that could best be served by them.