Arjun Jayadev at INET:
Do U.S. Economists Ignore Inequality?: A thought-provoking report in The Atlantic seeks to explore an apparent paradox in the practice of economics in the United States: Despite the high levels of inequality that many view as a drag on the performance of the U.S. economy — and also the increasingly volatile political effects of that inequality — the report argues that American economists have not been at the forefront of studying the distribution of wealth and income. Most of the cutting-edge work has often been led by researchers from different national and cultural backgrounds.
While it’s encouraging to note that many of those cited in the piece as exemplary path-breakers in the field, such as Gabriel Zucman, Steven Fazzari, and James K. Galbraith, have been supported by the Institute for New Economic Thinking, it may be overstating the case to say that American economists have not been cognizant of inequality — the widening of the wealth and income gap over the past 30 years has hardly gone unnoticed. Still, it is probably fair to suggest that most explanations offered for the phenomenon have cited some combination of differential investments in human capital and/or technological change as primary drivers. There have always, of course, been dissenters from this approach, but analyses that identified macroeconomic factors such as weak labor markets or political factors such as the rise of a financial class remained minority views.
What is notable about some of the new research commended by The Atlantic for boldly tackling the more politically challenging aspects of inequality is that this work has given more central causal weight to macroeconomic policy and to factors such as the declining bargaining power of labor vs. capital.
For many years, to cite one example, labor shares and their decline were treated either as an artifact of the way data are collected or as marginal to economic analysis. In the recent past, however, a spate of influential papers has returned to the question of capital-labor relations and the shares of output going to each. Other influential papers have implicated the enormous growth of the financial sector as an important feature in the landscape of U.S. inequality. The centrality of the balance of political power in shaping income distribution was certainly a feature of U.S. academic economics until the 1980s, and work in this tradition has continued at the margins. But the new research being hailed by The Atlantic is restoring the centrality of such concerns. Economics, and the society it purports to serve, can only benefit from that development.
Posted by Mark Thoma on Thursday, September 15, 2016 at 12:24 AM in Economics, Income Distribution |
Posted by Mark Thoma on Thursday, September 15, 2016 at 12:06 AM in Economics, Links |
Danilo Trisi at the CBPP:
Safety Net Cut Poverty Nearly in Half Last Year: Safety net programs cut the poverty rate nearly in half in 2015, lifting 38 million people — including 8 million children — above the poverty line, our analysis of Census data released yesterday finds. The Census data show the impact of a broad range of government assistance, such as Social Security, SNAP (formerly food stamps), Supplemental Security Income, rent subsidies, and tax credits for working families like the Earned Income Tax Credit (EITC) and Child Tax Credit. The figures rebut claims that government programs do little to reduce poverty.
Government benefits and taxes cut the poverty rate from 26.3 percent to 14.3 percent in 2015. Among children, they cut the poverty rate from 26.8 percent to 16.1 percent... This analysis uses the Census Bureau’s Supplemental Poverty Measure (SPM), which counts various government non-cash benefits as income, as most analysts favor. ...
These figures understate the safety net’s effectiveness because they don’t correct for households’ underreporting of government... In 2012, the most recent year for which we have data corrected for such underreporting, the safety net lowered the SPM poverty rate from 29.1 percent to 13.8 percent — a poverty rate 2.2 percentage points lower than in SPM data without these corrections.
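The arithmetic behind "nearly in half" can be checked directly from the figures quoted above; a minimal sketch in Python (the rates are from the post, the helper function is mine):

```python
# Check the "nearly in half" claim from the poverty rates quoted above.

def proportional_reduction(before, after):
    """Fraction by which the poverty rate falls, given the pre- and
    post-transfer rates in percent."""
    return (before - after) / before

# 2015 SPM poverty rates, before and after government benefits and taxes
overall = proportional_reduction(26.3, 14.3)     # all people
children = proportional_reduction(26.8, 16.1)    # children
# 2012 rates corrected for underreporting of benefits
corrected = proportional_reduction(29.1, 13.8)

print(f"Overall reduction:  {overall:.1%}")    # ~45.6% -- "nearly in half"
print(f"Children:           {children:.1%}")   # ~39.9%
print(f"Corrected (2012):   {corrected:.1%}")  # ~52.6% -- more than half
```

Note that with the underreporting correction, the proportional reduction crosses 50 percent, which is the sense in which the corrections strengthen the post's claim.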
Policymakers negotiating budget and tax priorities should keep in mind that the safety net keeps millions of children, adults, and seniors of all races and ethnicities from falling below the poverty line. Deep cuts to these programs would make them much less effective at reducing poverty and would push the U.S. poverty rate substantially higher.
Posted by Mark Thoma on Wednesday, September 14, 2016 at 12:01 PM in Economics, Social Insurance |
Chris Sagers at ProMarket:
American Antitrust Is Having a Moment: Some Reactions to Commissioner Ohlhausen’s Recent Views: Over the summer, Federal Trade Commissioner Maureen Ohlhausen took me and several others to task in a speech, subsequently published as a journal article... The theme we’d all written about is whether we in the United States have a “monopoly problem,” and whether federal policy should try to do something about it. ...
Commissioner Ohlhausen had some pretty strong words. ... Specifically, she implies a very strong presumption against public interference in private markets, as indicated by her argument that there is not yet sufficient evidence that we have a monopoly problem. The argument seems to be that we must wait until we are very, very sure, beyond any reasonable econometric doubt, apparently, that there’s something wrong before we step in. ...
She is mistaken, and she ignores roughly a library-full of well-known..., sophisticated empirical work. ...
In the end, the irony of these remarks is captured in this point: Commissioner Ohlhausen is pretty witheringly dismissive of a certain kind of evidence of market power, and implies that it would not support increased enforcement unless it can overcome a high methodological bar. But for her own countervailing evidence that in fact American markets are “fierce[ly] competiti[ve],” she says this: “Consider the new economy, which is a hotbed of technological innovation. That environment does not strike me as one lacking competition.”
In other words, the presumption against antitrust is so strong that evidence of harm must meet the most exacting standards of social science. To prove that markets are in fact competitive, however, needs nothing more than seat-of-the-pants anecdotes. Again, I mean no disrespect, and I think we have an honest difference of opinion. But this stance is not social science, and it is not good, empirically founded public policy. It is just ideology. ...
It’s definitely true that the agencies have brought a bunch of challenges to a bunch of nasty mergers, and perhaps total enforcement numbers have gone up a bit. But that is because we are in the midst of a merger wave in which parties have been proposing breathtakingly massive, overwhelmingly consolidating horizontal deals. While there is a track record to be proud of in the administration’s enforcement, especially, as the commissioner observes, in the Commission’s campaign against hospital mergers, reverse-payment deals, SEP problems, and patent trolls, and who knows how many other matters, the fact remains that, by and large, the administration has not taken action that any administration would not have taken, including the Reagan and both Bush administrations. ...
Posted by Mark Thoma on Wednesday, September 14, 2016 at 10:44 AM in Economics, Market Failure, Regulation |
Posted by Mark Thoma on Wednesday, September 14, 2016 at 12:06 AM in Economics, Links |
The beginning of a relatively long discussion by Ben Bernanke:
Modifying the Fed’s policy framework: Does a higher inflation target beat negative interest rates?: Nominal interest rates are very low, and in a world of excess global saving, low inflation, and high demand for safe assets like government debt, there’s a good chance that they will be low for a long time. That fact poses a potential problem for the Federal Reserve and other central banks: When the next recession arrives, there may be limited room for the interest-rate cuts that have traditionally been central banks’ primary tool for sustaining employment and keeping inflation near target.
That concerning possibility has led to calls for a new monetary policy framework, including by Fed insiders like John Williams, president of the San Francisco Fed. In particular, Williams has joined Olivier Blanchard and other prominent economists in proposing that the Fed consider raising its target for inflation, currently 2 percent. If the Fed targeted a higher average level of inflation, the reasoning goes, nominal interest rates would also tend to be higher, leaving more room for rate cuts when needed.
Interestingly, some advocates of a higher inflation target have been dismissive of the use of negative short-term interest rates, an alternative means of increasing “space” for monetary easing. For example, in a recent interview in which he advocated reconsideration of the Fed’s inflation target, Williams said: “Negative rates are still at the bottom of the stack in terms of net effectiveness.” Williams’s colleague on the Federal Open Market Committee, Eric Rosengren, also has suggested that the Fed may need to set higher inflation targets in the future while asserting that negative rates should be viewed as a last resort. My sense is that Williams’s and Rosengren’s negative view of negative rates is broadly shared on the FOMC. Outside the United States, Mark Carney, governor of the Bank of England, has expressed openness to targeting nominal GDP (which essentially involves targeting a higher inflation rate when GDP growth is low), but has also made clear that he is “not a fan” of negative interest rates.
As I explain below, negative rates and higher inflation targets can be viewed as alternative methods for pushing the real interest rate further below zero. In that context, I am puzzled by the apparently strong preference for a higher inflation target over negative rates, at least based on what we know now. ...
Posted by Mark Thoma on Tuesday, September 13, 2016 at 03:03 PM in Economics, Monetary Policy |
Posted by Mark Thoma on Tuesday, September 13, 2016 at 12:06 AM in Economics, Links |
Why are so many Republicans members of "the Putin cult"?:
Thugs and Kisses, by Paul Krugman, NY Times: ...Donald Trump’s effusive praise for Vladimir Putin — which actually reflects a fairly common sentiment on the right — seems to have confused some people..., today’s Russia isn’t Communist, or even leftist; it’s just an authoritarian state, with a cult of personality around its strongman, that showers benefits on an immensely wealthy oligarchy while brutally suppressing opposition and criticism.
And that, of course, is what many on the right admire.
Am I being unfair? Could praise for Russia’s de facto dictator reflect appreciation of his substantive achievements? Well, let’s talk about what the Putin regime has, in fact, accomplished...
Mr. Putin came to power at the end of 1999... Fuels account for more than two-thirds of its exports, manufactures barely a fifth. And oil prices more than tripled between early 1999 and 2000; a few years later they more than tripled again. Then they plunged, and so did the Russian economy, which has done very badly in the past few years.
Mr. Putin would actually have something to boast about if he had managed to diversify Russia’s exports. And this should have been possible: ... But Russia wasn’t going to realize its technology potential under a regime where business success depends mainly on political connections.
So Mr. Putin’s economic management is nothing to write home about. ...
Which brings us back to the significance of the Putin cult, and the way this cult has been eagerly joined by the Republican nominee for president.
There are good reasons to worry about Mr. Trump’s personal connections to the Putin regime (or to oligarchs close to that regime, which is effectively the same thing). How crucial has Russian money been in sustaining Mr. Trump’s ramshackle business empire? There are hints that it may have been very important indeed, but given Mr. Trump’s secretiveness and his refusal to release his taxes, nobody really knows.
Beyond that, however, admiring Mr. Putin means admiring someone who has contempt for democracy and civil liberties. Or more accurately, it means admiring someone precisely because of that contempt.
When Mr. Trump and others praise Mr. Putin as a “strong leader,” they don’t mean that he has made Russia great again, because he hasn’t. He has accomplished little on the economic front, and his conquests, such as they are, are fairly pitiful. What he has done, however, is crush his domestic rivals: Oppose the Putin regime, and you’re likely to end up imprisoned or dead. Strong!
Posted by Mark Thoma on Monday, September 12, 2016 at 02:50 AM in Economics, Politics |
Posted by Mark Thoma on Monday, September 12, 2016 at 12:06 AM in Economics, Links |
Building the case for greater infrastructure investment: There is a consensus that the US should substantially raise its level of infrastructure investment. Economists and politicians of all persuasions recognize that this can create quality jobs and provide economic stimulus without posing the risks of easy-money policies in the short run. They also see that such investment can expand the economy’s capacity in the medium term and mitigate the huge maintenance burden we would otherwise pass on to the next generation.
The case for infrastructure investment has been strong for a long time, but it gets stronger with each passing year...
The issue now is not whether the US should invest more in infrastructure but what the policy framework should be. There are five key questions. ...
[Note: If you can't get to FT articles, just copy the title and search for it in Google. Clicking on the link should get you past the paywall. Same for the WSJ.]
Posted by Mark Thoma on Sunday, September 11, 2016 at 09:56 AM in Economics |
Posted by Mark Thoma on Sunday, September 11, 2016 at 12:06 AM in Economics, Links |
I have a new column:
Trump’s Taco Truck Fear Campaign Diverts Attention From the Real Issues: Donald Trump would like you to believe that immigration is largely responsible for the difficult economic conditions the working class has experienced in recent decades. But immigration is not the problem. The real culprits are globalization, technological change, and labor’s dwindling bargaining power in wage negotiations.
Let’s start with immigration. ...
Posted by Mark Thoma on Friday, September 9, 2016 at 08:16 PM in Economics, Fiscal Times, Politics |
"Why is it apparently so hard to hold Mr. Trump accountable for blatant, in-your-face lies?":
Donald Trump’s ‘Big Liar’ Technique, by Paul Krugman, NY Times: ...Donald Trump has come up with something new, which we can call the “big liar” technique. Taken one at a time, his lies are medium-size — not trivial, but mostly not rising to the level of blood libel. But the lies are constant, coming in a steady torrent, and are never acknowledged, simply repeated. He evidently believes that this strategy will keep the news media flummoxed, unable to believe, or at least say openly, that the candidate of a major party lies that much.
Mr. Trump ... is in a class of his own. He lies about statistics like the unemployment rate and the crime rate. He lies about foreign policy: President Obama is “the founder of ISIS.” But most of all, he lies about himself — and when the lies are exposed, he just keeps repeating them. ...
Why is it apparently so hard to hold Mr. Trump accountable for blatant, in-your-face lies? Part of the answer may be that journalists are overwhelmed by the sheer volume of outrageous material. After all, which Trump line should be the headliner for a news analysis of Wednesday’s event? His Iraq lie? His praise for Vladimir Putin, who “has an 82 percent approval rating”? His denigration of the American military, whose commanders, he says, have been “reduced to rubble”?
There’s also a deep diffidence about pointing out uncomfortable truths. Back in 2000, when I was first writing this column, I was discouraged from using the word “lie” about George W. Bush’s dishonest policy claims. As I recall, I was told that it was inappropriate to be that blunt about the candidate of one of our two major political parties. And something similar may be going on even now, with few people in the media willing to accept the reality that the G.O.P. has nominated someone whose lies are so blatant and frequent that they amount to sociopathy.
Even that observation, however, doesn’t explain the asymmetry, because some of the same media organizations that apparently find it impossible to point out Mr. Trump’s raw, consequential lies have no problem harassing Mrs. Clinton endlessly over minor misstatements and exaggerations, or sometimes over actions that were perfectly innocent. Is it sexism? I really don’t know, but it’s shocking to watch.
And meanwhile, if the question is whether Mr. Trump can really get away with his big liar routine, the evidence from Wednesday night suggests a disheartening answer: Unless something changes, yes he can.
Posted by Mark Thoma on Friday, September 9, 2016 at 07:47 AM in Politics |
Traveling -- internet connection issues -- hopefully they are finally fixed.
Posted by Mark Thoma on Friday, September 9, 2016 at 07:16 AM in Economics, Links |
Posted by Mark Thoma on Thursday, September 8, 2016 at 05:40 PM in Economics, Links |
Douglas Campbell and Lester Lusher at VoxEU:
Drivers of inequality: Trade shocks versus top marginal tax rates: Growing wealth inequality has been one of the most pressing political issues since the Great Recession. However, there is a relative lack of consensus on the significant drivers of this trend. This column investigates the contribution of globalization, via international trade, to US wealth inequality. Although trade is found to have had important effects on certain parts of the US labor market in the early 2000s, the growth in US inequality since 1980 can be traced back to Reagan-era tax cuts. ...
Posted by Mark Thoma on Thursday, September 8, 2016 at 02:50 AM in Economics, Income Distribution, International Trade, Taxes |
Is Pushing Unemployment Lower A Risky Strategy?, by Tim Duy: The unemployment rate is closing in on the Fed's estimate of the natural rate of unemployment:
Consequently, Fed hawks are pushing for a rate hike sooner rather than later in an effort to prevent the economy from "overheating." This overheating is argued to set the stage for the next recession. For instance, see San Francisco Federal Reserve President John Williams:
History teaches us that an economy that runs too hot for too long can generate imbalances, potentially leading to excessive inflation, asset market bubbles, and ultimately economic correction and recession. A gradual process of raising rates reduces the risks of such an outcome. It also allows a smoother, more calibrated process of normalization that gives us space to adjust our responses to any surprise changes in economic conditions. If we wait too long to remove monetary accommodation, we hazard allowing imbalances to grow, requiring us to play catch-up, and not leaving much room to maneuver. Not to mention, a sudden reversal of policy could be disruptive and slow the economy in unintended ways.
In his Bloomberg View column, former Minneapolis Federal Reserve President Narayana Kocherlakota questions whether there is much theory behind this contention:
Some Fed officials worry that “overheating” could trigger a recession. (I don’t understand the precise economic mechanism, but let’s leave that aside.)
Kocherlakota was specifically referring to the risks of undershooting the natural rate of unemployment. New York Federal Reserve President William Dudley summarized his perception of that risk in January of this year:
A particular risk of late and fast is that the unemployment rate could significantly undershoot the level consistent with price stability. If this occurred, then inflation would likely rise above our objective. At that point, history shows it is very difficult to push the unemployment rate back up just a little bit in order to contain inflation pressures. Looking at the post-war period, whenever the unemployment rate has increased by more than 0.3 to 0.4 percentage points, the economy has always ended up in a full-blown recession with the unemployment rate rising by at least 1.9 percentage points. This is an outcome to avoid, especially given that in an economic downturn the last to be hired are often the first to be fired. The goal is the maximum sustainable level of employment—in other words, the most job opportunities for the most people over the long run.
I don't know that there is an economic mechanism at work here. I don't know that there is a law of economics where the unemployment rate can never be nudged up a few fractions of a percentage point. But I do think there is a policy mechanism at play. During the mature and late phase of the business cycle, the Fed tends to overemphasize the importance of lagging data such as inflation and wages and discount the lags in their own policy process. Essentially, the Fed ignores the warning signs of recession, ultimately overtightening time and time again.
For instance, an inverted yield curve traditionally indicates substantially tight monetary conditions. Yet even after the yield curve inverted at the end of January 2000, the Fed continued tightening through May of that year, adding an additional 100bp to the fed funds rate. The yield curve began to invert in January of 2006; the Fed added another 100bp of tightening in the first half of that year.
This isn't an economic mechanism at work. This is a policy error at work.
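The inversion signal described above is simply the sign of a yield spread. A minimal sketch, using made-up illustrative yields rather than actual 2000 or 2006 market data:

```python
# The yield curve "inverts" when a long-maturity rate falls below a
# short-maturity rate, i.e. when the spread turns negative.

def curve_inverted(long_yield, short_yield):
    """True when the long rate sits below the short rate (spread < 0)."""
    return long_yield - short_yield < 0

# (10-year yield, fed funds rate), in percent -- hypothetical readings
episodes = {
    "normal upward slope": (6.5, 5.0),
    "flat":                (5.5, 5.5),
    "inverted":            (5.8, 6.5),  # the shape Duy dates to early 2000 and 2006
}

for name, (long_y, short_y) in episodes.items():
    spread = long_y - short_y
    print(f"{name}: spread {spread:+.2f}pp, inverted={curve_inverted(long_y, short_y)}")
```

The point of the episodes above is Duy's: the signal flips well before the cycle ends, and in both cited cases the Fed added roughly 100bp of tightening after it had already flipped.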
Kocherlakota offers another important point:
It's easy to imagine, though, that many people would be willing to trade the risk of recessionary pain in 2019 and 2020 for the near-term gain of 2017 and 2018. They might even believe there's some chance that policy 2 will generate an outstanding outcome -- if, for example, the long-run unemployment rate is actually lower than the Fed thinks it is.
The Fed seems to place almost zero weight on the probability that the natural rate of unemployment is significantly below their estimates. In their view, only bad things happen when the unemployment rate drifts much below 5%.
Bottom Line: The Fed thinks the costs of undershooting their estimate of the natural rate of unemployment outweigh the benefits. I am skeptical they are doing the calculus right on this one. I would be more convinced they had it right if I sensed that they placed greater weight on the possibility that they are too pessimistic about the natural rate. I would be more convinced if they were already at their inflation target. And I would be more convinced if their analysis of why tightening cycles end in recessions was a bit more introspective. Was it destiny or repeated policy error? But none of these things seem to be true.
Posted by Mark Thoma on Wednesday, September 7, 2016 at 12:47 PM in Economics, Fed Watch, Monetary Policy |
Posted by Mark Thoma on Wednesday, September 7, 2016 at 12:06 PM in Economics, Links |
The Fed’s complacency about its current toolbox is unwarranted: As I argued in the first blog in this series last week, I was disappointed in what came out of Jackson Hole for three reasons. The first reason, developed in that blog, was that the Fed should have signaled a desire to exceed its two percent inflation target during periods of protracted recovery and low unemployment and in this context to signal that a rate increase was off the table for September and quite likely the rest of the year. Friday’s employment report further strengthens the case for delay both by adding to the evidence on the absence of inflation pressures and by suggesting a less robust economy than most expected.
Even apart from the desirability of allowing inflation to rise above two percent in a happy economic scenario, GDP, labor market, and inflation expectations data all make a compelling case against a rate increase. Private sector GDP growth for the last year has averaged 1.3 percent, a level that has, since the 1960s, always presaged recession. Total work hours have over the last six months grown at nearly their slowest rate since early 2010. And both market and survey measures of inflation expectations continue to decline.
My second reason for disappointment in Jackson Hole was that Chair Yellen, while very thoughtful and analytic, was too complacent in concluding that “even if average interest rates remain lower than in the past, I believe that monetary policy will, under most conditions, be able to respond effectively”. This statement may rank with Ben Bernanke’s unfortunate observation that subprime problems would be easily contained.
Rather, I believe that countering the next recession is the major monetary policy challenge before the Fed. I have argued repeatedly that (i) it is more than 50 percent likely that we will have a recession in the next three years; (ii) countering recessions requires 400 or 500 basis points of monetary easing; and (iii) we are very unlikely to have anything like that much room for easing when the next recession comes. ... [explains in detail] ...
On balance, I think the Fed’s complacency about its current toolbox is unwarranted. If I am wrong in either exaggerating the risks of recession or understating the efficacy of policy, the costs of taking out insurance against a recession that cannot be met with monetary policy are relatively low. If my fears are justified, the costs of complacency could be very high. The right policy in the near term should be tilting as hard as possible against recession, as argued in the first blog in this series. For the longer term, the Fed will have to reconsider its broad policy approach. This will be the subject of my next entry.
Posted by Mark Thoma on Tuesday, September 6, 2016 at 11:07 AM in Economics, Monetary Policy |
Rate Hike Hopes Fading Fast, by Tim Duy: The next FOMC meeting is just two weeks away. Fed hawks had hoped that this was their moment in the sun. I suspect they will need to wait another three months before their next opportunity to act. Signs of a second half rebound are likely too tentative for the doves to tolerate a rate hike. I don't think they will roll over as easily as they did last December.
The August employment report was not terrible. Not by any measure. On the positive side, labor supply is reacting to both demographic changes and stronger demand:
The demographic shift - essentially, the aging of the Millennials toward their prime age working years - is I believe a powerful secular force supporting the economy. That said, the Fed needs to ensure cyclical forces do not undermine the economy. And that is where the story becomes tricky. Is the economy slowing sufficiently on its own that the Fed should refrain from rate hikes? Or is the slowing still insufficient to quell the inflationary pressures Fed hawks in particular believe to be building?
On first take, the slowing in payroll growth is modest:
And arguably sufficient to place additional downward pressure on the unemployment rate. Cleveland Federal Reserve President Loretta Mester recently repeated this view, which is widely held within the FOMC. Via Reuters:
Mester, a voting member on the Fed's policy-setting committee, had earlier in the day told a philanthropy conference that the U.S. economy probably needs to generate between 75,000 and 150,000 jobs per month to keep the jobless rate stable.
Hiring has been stronger than that this year and the U.S. jobless rate is currently at 4.9 percent.
"The economy is basically at full employment," Mester said.
This "full employment" view is also evident in the Fed's estimate of the natural rate of unemployment:
This, not inflation directly, seems to be driving Fed hawks toward a rate hike. See former Minneapolis Federal Reserve President Narayana Kocherlakota here. It is the perceived threat of inflation, not the actual, realized threat of inflation.
Fed hawks will also point toward wage growth as evidence of tighter labor markets that foreshadows inflationary pressures:
Fed doves, however, will not be without their own interpretation of the data. The flattening of the unemployment rate could indicate supply side pessimism on the part of the hawks. That is the positive story that still fits with a no hike scenario. A more negative story is that the flat unemployment rate is consistent with late cycle patterns:
Similarly, progress toward reducing underemployment has stalled noticeably, leaving underemployment at very high levels:
Perhaps the household data is picking up a degree of slowing not yet evident in the establishment data? And on the establishment side, temporary help services payrolls are holding in a late cycle pattern as well:
As far as wages are concerned, Fed doves will say that wage growth is still anemic in comparison with past cycles and - they should add - that wages are a lagging indicator. The Fed should be paying much more attention to forward indicators. And those forward indicators remain tentative at best. The hawks' basic case is not just that the economy is at full employment, but that a second half rebound will send it beyond full employment. And while consumer spending supports the second half rebound story:
the ISM reports draw that into question. Today's service sector report was particularly disconcerting with weakness across the board - the sharp drop in new orders should give FOMC members reason for caution. Doves will thus say the Fed can't count their chickens before they hatch. And this is especially important given that the Fed continues to miss its inflation target, and a misstep at this juncture with overly tight policy will basically guarantee they miss it for the next five years as well.
Indeed, while Fed hawks such as Vice Chair Stanley Fischer and Boston Federal Reserve President Eric Rosengren see progress toward the inflation goals, Peter Olson and David Wessel, writing in the WSJ, conclude:
The inflation rate is higher now than it was in 2015. But over the course of 2016 we’ve seen no apparent progress toward the 2% inflation target. If anything, the inflation rate in January was closer to the Fed’s goal than in July. So it’s increasingly difficult for Fed officials to rely on current inflation numbers as a justification for raising rates. Higher inflation might be just around the corner, but we haven’t seen it yet.
I agree. The "progress" that Fischer and Rosengren point to occurred early in the year, mostly in January. Recent trends have been less promising.
The hawks' "inflation is here" story is not particularly compelling. Indeed, I would say it borders on disingenuous. Moreover, I suspect the inflation numbers will prompt strong opposition to a rate hike this month. Recall from the recent minutes:
A couple of members preferred also to wait for more evidence that inflation would rise to 2 percent on a sustained basis.
I suspect these two members were Governors Lael Brainard and Daniel Tarullo. My guess is that neither will roll over on a rate hike as they did last December; I think they probably question the wisdom of the outcome of that meeting. Furthermore, I think they will pull Governor Powell and ultimately New York Fed President William Dudley to their side. St. Louis Federal Reserve President James Bullard is ambivalent about when the next 25bp hike occurs; in his framework, the Fed is already within spitting distance of the correct policy stance. He won't push for a hike. And I suspect that Chair Janet Yellen will thus ultimately see too little consensus to support a rate hike.
Bottom Line: Despite the economy being near the consensus view of full employment, incoming data on the second half remain too tentative to support a rate hike this month. This is especially the case given lost momentum in the labor market, particularly with regard to underemployment, and the weak inflation numbers. Hence I do not anticipate a rate hike in September. Why might I be wrong? Aside from just being wrong on the Fed's likely interpretation of the incoming data, perhaps because I have underestimated the Fed's perception that the risks are not really asymmetric - that they have all the tools they need to fight the next recession even if they are at the zero bound - or that the Fed views financial stability concerns as trumping the inflation outlook.
Posted by Mark Thoma on Tuesday, September 6, 2016 at 09:46 AM in Economics, Fed Watch, Monetary Policy |
Posted by Mark Thoma on Tuesday, September 6, 2016 at 12:06 AM in Economics, Links |
Sociologists on economics. This is by Daniel Little:
Capitalism as a heterogeneous set of practices: A key part of understanding society is making sense of the "economy" in which we live. But what is an economy? Existing economic theories attempt to answer this question with simple unified theories. The economy is a profit-driven market system of firms, workers, and consumers. The economy is a property system dependent upon the expropriation of surplus labor. The economy is a system of expropriation more or less designed to create great inequalities of income, wealth, and well-being. The economy is a system of exploitation and domination.
In Profit and Gift in the Digital Economy Dave Elder-Vass argues that these simple theories, largely the product of the nineteenth century, are flawed in several fundamental ways. First, they are all simple and unitary in a heterogeneous world. Economic transactions take a very wide variety of forms in the modern world. But more fundamentally, these existing theories fail completely to provide a theoretical vocabulary for describing what are now enormously important parts of our economic lives. One well-known blind spot is the domestic economy -- work and consumption within the household. But even more striking is the inadequacy of existing economic theories to make sense of the new digital world -- Google, Apple, Wikipedia, blogging, or YouTube. Elder-Vass's current book offers a new way of thinking about our economic lives and institutions. And he believes that this new way lays a basis for more productive thinking about a more humane future for all of us than is offered by either neoliberalism or Marxism.
What E-V offers is the idea of economic life as a jumble of "appropriative" practices -- practices that get organized and deployed in different combinations, and that have better and worse implications for human well-being.
From this perspective it becomes possible to see our economy as a complex ecosystem of competing and interacting economic forms, each with their own strengths and weaknesses, and to develop a progressive politics that seeks to reshape that ecosystem rather than pursuing the imaginary perfection of one single universal economic form. (5)
The argument here is that we can understand the economy better by seeing it as a diverse collection of economic forms, each of which can be characterised as a particular complex of appropriative practices -- social practices that influence the allocation of benefits from the process of production. (9)
Economies are not monoliths but diverse mixtures of varying economic forms. To understand and evaluate economic phenomena, then, we need to be able to describe and analyse these varying forms in conjunction with each other. (96)
Capitalism is not a single, unitary "mode of production," but rather a concatenation of multiple forms and practices. E-V believes that the positions offered here align well with the theories of critical realism that he has helped to elaborate in earlier books (19-20) (link, link). We can be realist in our investigations of the causal properties of the economic practices he identifies.
This way of thinking about economic life is very consistent with several streams of thought in Understanding Society -- the idea of social heterogeneity (link), the idea of assemblage (link), and a background mistrust of comprehensive social theories (link). (Here is an earlier post on "Capitalism 2.0" that is also relevant to the perspective and issues Elder-Vass brings forward; link.)
The central new element in contemporary economic life that needs treatment by an adequate political economy is the role that large digital enterprises play in the contemporary world. These enterprises deal in intangible products; they often involve a vast proportion of algorithmic transformation rather than human labor; and to a degree unprecedented in economic history, they depend on "gift" transactions at every level. Internet companies like Google give free search and maps, and bloggers and videographers give free content. And yet these gifts have none of the attributes of traditional gift communities -- there is no community, no explicit reciprocity, and little face-to-face interaction. E-V goes into substantial detail on several of these new types of enterprises, and does the work of identifying the "economic practices" upon which they depend.
In particular, E-V considers whether the gift relation familiar from anthropologists like Marcel Mauss and economic sociologists like Karl Polanyi can shed useful light on the digital economy. But the lack of reciprocity and face-to-face community leads him to conclude that the theory is unpersuasive as a way of understanding the digital economy (86).
It is noteworthy that E-V's description of appropriative practices is primarily allocative; it pays little attention to the organization of production. It is about "who receives the benefits" (10) but not so much about "how activity and labor are coordinated, managed, and deployed to produce the stuff". Marx gained the greatest insights in Capital, not from the simple mathematics of the labor theory of value, but from his investigations of the conditions of work and the schemes of management to which labor was subject in the nineteenth-century factory. The ideas of alienation, domination, and exploitation are very easy to understand in that context. But it would seem that there are similar questions to ask about the digital economy shops of today. The New York Times' reportage of working conditions within the Amazon organization seems to reveal a very similar logic (link). And how about the high-tech sweat shops described in a 2009 Bloomberg investigation (link)?
Elder-Vass believes that a better understanding of our existing economic practices can give rise to a more effective set of strategies for creating a better future. E-V's vision for creating a better future depends on a selective pruning of the more destructive practices and cultivation of the more positive practices. He is appreciative of the "real utopias" project (36) (link) and also of the World Social Forum.
This means growing some progressive alternatives but also cutting back some regressive ones. It entails being open to a wide range of alternatives, including the possibility that there might be some valuable continuing role for some forms of capitalism in a more adequate mixed economy of practices. (15)
Or in other words, E-V advocates for innovative social change -- recognizing the potential in new forms and cultivating existing forms of economic activity. Marxism has been the impetus of much thinking about progressive change in the past century; but E-V argues that this perspective too is limited:
Marxism itself has become an obstacle to thinking creatively about the economy, not least because it is complicit in the discourse of the monolithic capitalist market economy that we must now move beyond.... Marx's labour theory of value ... tends to support the obsessive identification of capitalism with wage labour. As a consequence Marxists have failed to recognise that capitalism has developed new forms of making profit that do not fit with the classic Marxist model, including many that have emerged and prospered in the new digital economy. (45)
This is not a wholesale rejection of Marx's thought; but it is a well-justified critique of the lingering dogmatism of this tradition. Though E-V does not make reference to current British politics in the book, these comments seem very appropriate in appraisal of the approach to change championed by Labour leader Jeremy Corbyn.
E-V shows a remarkable range of expertise in this work. His command of recent Marxian thinking about contemporary capitalism is deep. But he has also gone deeply into the actual practices of the digital economy -- the ways Google makes profits, the incentives and regulations that sustain Wikipedia, the handful of distinctive business practices that have made Apple one of the world's largest companies. The book is a work of theory and a work of empirical investigation as well.
Profit and Gift in the Digital Economy is a book with a big and important idea -- bigger really than the title implies. The book demands a substantial shift in the way that economists think about the institutions and practices through which the global economy works. More fundamentally, it asks that we reconsider the idea of "economy" altogether, and abandon the notion that there is a single unitary economic practice or institution that defines modern capitalism -- whether market, wage labor, or trading system. Instead, we should focus on the many distinct but interconnected practices that have been invented and stitched together in the many parts of society to solve particular problems of production, consumption, and appropriation, and that as an aggregate make up "the economy". The economy is an assemblage, not a designed system, and reforming this agglomeration requires shifting the "ecosystem" of practices in a direction more favorable to human flourishing.
Posted by Mark Thoma on Monday, September 5, 2016 at 12:30 PM in Economics, Methodology |
Press coverage of the campaigns has been "bizarre":
Hillary Clinton Gets Gored, by Paul Krugman, NY Times: ...George W. Bush, was dishonest in a way that was unprecedented in U.S. politics. ... Yet throughout the campaign most media coverage gave the impression that Mr. Bush was a bluff, straightforward guy, while portraying Al Gore — whose policy proposals added up, and whose critiques of the Bush plan were completely accurate — as slippery and dishonest. Mr. Gore’s mendacity was supposedly demonstrated by trivial anecdotes, none significant, some of them simply false. No, he never claimed to have invented the internet. But the image stuck.
And right now I and many others have the sick, sinking feeling that it’s happening again.
True, there aren’t many efforts to pretend that Donald Trump is a paragon of honesty. But it’s hard to escape the impression that he’s being graded on a curve. If he manages to read from a TelePrompter without going off script, he’s being presidential. If he seems to suggest that he wouldn’t round up all 11 million undocumented immigrants right away, he’s moving into the mainstream. And many of his multiple scandals, like what appear to be clear payoffs to state attorneys general to back off investigating Trump University, get remarkably little attention.
Meanwhile, we have the presumption that anything Hillary Clinton does must be corrupt, most spectacularly illustrated by the increasingly bizarre coverage of the Clinton Foundation. ...
So I would urge journalists to ask whether they are reporting facts or simply engaging in innuendo, and urge the public to read with a critical eye. If reports about a candidate talk about how something “raises questions,” creates “shadows,” or anything similar, be aware that these are all too often weasel words used to create the impression of wrongdoing out of thin air.
And here’s a pro tip: the best ways to judge a candidate’s character are to look at what he or she has actually done, and what policies he or she is proposing. Mr. Trump’s record of bilking students, stiffing contractors and more is a good indicator of how he’d act as president; Mrs. Clinton’s speaking style and body language aren’t. George W. Bush’s policy lies gave me a much better handle on who he was than all the up-close-and-personal reporting of 2000, and the contrast between Mr. Trump’s policy incoherence and Mrs. Clinton’s carefulness speaks volumes today.
In other words, focus on the facts. America and the world can’t afford another election tipped by innuendo.
Posted by Mark Thoma on Monday, September 5, 2016 at 10:48 AM in Economics, Politics |
Posted by Mark Thoma on Monday, September 5, 2016 at 12:06 AM in Economics, Links |
Telling macro stories with micro: Don't let the equations, data, or jargon fool you, economists are avid storytellers. Our "stories" may not fit neatly in the seven universal plots but after a while it's easy to spot some patterns. A good story paper in economics, according to David Romer, has three characteristics: a viewpoint, a lever, and a result.
Most blog or media coverage of an economics paper focuses on the result. Makes sense given the audience but buyer beware. Economists dissecting a paper spend more time on the lever, the how-did-they-get-the-result part. And coming up with new levers is a big chunk of research. The viewpoint--the underlying assumptions, the what's-central-to-the-story--tends to get short shrift. Of course, the viewpoint matters (often that's what defines a story as economics), but it usually holds across many papers. Best to focus on the new stuff.
Except when the viewpoint comes under scrutiny, then the stories can really change. ...
How much does micro matter for macro?
One long-standing viewpoint in economics is that changes in the macro-economy can largely be understood by studying changes in macro aggregates. Ironically, this viewpoint even survived macro's push to micro foundations with a "representative agent" stepping in as the missing link between aggregate data and micro theory. As a macro forecaster, I understand the value of the aggregates-only simplification. As an applied micro researcher, I am pretty sure it fails us from time to time. Thankfully, an ever-growing body of research and commentary is helping to identify times when differences at the micro level are relevant for macro outcomes. This is not new--issues of aggregation in macro go waaay back--but our levers, with rich, timely micro data and high-powered computation, are improving rapidly.
I focus in this post on differences in household behavior, particularly related to consumer spending, since that's the area I know best. And I want to discuss results from an ambitious new paper: "Macroeconomics and Household Heterogeneity" by Krueger, Mitman, and Perri. tldr: I am skeptical of their results, above all, the empirics, but I really like what they are trying to do, to shift the macro viewpoint. More on this paper below, but I also want to set it in the context of macro storytelling. ...
There's quite a bit more.
Posted by Mark Thoma on Sunday, September 4, 2016 at 05:38 PM in Economics, Macroeconomics, Methodology |
Posted by Mark Thoma on Sunday, September 4, 2016 at 12:06 AM in Economics, Links |
Hal Varian writing at the IMF:
Intelligent Technology: A computer now sits in the middle of virtually every economic transaction in the developed world. Computing technology is rapidly penetrating the developing world as well, driven by the rapid spread of mobile phones. Soon the entire planet will be connected, and most economic transactions worldwide will be computer mediated.
Data systems that were once put in place to help with accounting, inventory control, and billing now have other important uses that can improve our daily life while boosting the global economy.
Computer mediation can impact economic activity through five important channels.
Data collection and analysis: ...
Personalization and customization: ...
Experimentation and continuous improvement: ...
Contractual innovation: ...
Coordination and communication: ...
Putting it all together
Today’s mobile phones are many times more powerful and much less expensive than those that powered Apollo 11, the 1969 manned expedition to the moon. These mobile phone components have become “commoditized.” Screens, processors, sensors, GPS chips, networking chips, and memory chips cost almost nothing these days. You can buy a reasonable smartphone now for $50, and prices continue to fall. Smartphones are becoming commonplace even in very poor regions.
The availability of those cheap components has enabled innovators to combine and recombine these components to create new devices—fitness monitors, virtual reality headsets, inexpensive vehicular monitoring systems, and so on. The Raspberry Pi is a $35 computer designed at Cambridge University that uses mobile phone parts with a circuit board the size of a pack of playing cards. It is far more powerful than the Unix workstations of just 15 years ago.
The same forces of standardization, modularization, and low prices are driving progress in software. The hardware created using mobile phone parts often uses open-source software for its operating system. At the same time, the desktop motherboards from the personal computer era have now become components in vast data centers, also running open-source software. The mobile devices can hand off relatively complex tasks such as image recognition, voice recognition, and automated translation to the data centers on an as-needed basis. The availability of cheap hardware, free software, and inexpensive access to data services has dramatically cut entry barriers for software development, leading to millions of mobile phone applications becoming available at nominal cost.
The productivity puzzle
I have painted an optimistic picture of how technology will impact the global economy. But how will this technological progress show up in conventional economic statistics? Here the picture is somewhat mixed. Take GDP, for example. This is usually defined as the market value of all final goods and services produced in a given country in a particular time period. The catch is “market value”—if a good isn’t bought and sold, it generally doesn’t show up in GDP.
This has many implications. Household production, ad-supported content, transaction costs, quality changes, free services, and open-source software are dark matter as far as GDP is concerned, since technological progress in these areas does not show up directly in GDP. Take, for example, ad-supported content, which is widely used to support provision of online media. In the U.S. Bureau of Economic Analysis National Economic Accounts, advertising is treated as a marketing expense—an intermediate product—so it isn’t counted as part of GDP. A content provider that switches from a pay-per-view business model to an ad-supported model reduces GDP.
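A stylized calculation may help make the accounting point concrete. The numbers below are hypothetical, chosen only to illustrate why a provider's switch from pay-per-view to an ad-supported model lowers measured GDP even when the same content reaches viewers:

```python
# Hypothetical content provider under two business models (illustrative only).
# GDP counts final sales; advertising is treated as an intermediate expense.

# Pay-per-view: viewers buy the content directly -- a final sale.
ppv_revenue = 1_000_000           # $1M in subscriptions
gdp_contribution_ppv = ppv_revenue  # counted in GDP

# Ad-supported: advertisers pay the same $1M, but ad spending is an
# intermediate input of the advertisers, not a final sale.
ad_revenue = 1_000_000
gdp_contribution_ads = 0          # not counted in GDP

# Same content delivered, but measured GDP falls by the full $1M.
print(gdp_contribution_ppv - gdp_contribution_ads)
```

The content and the audience are unchanged; only the payment route differs, which is exactly Varian's "dark matter" point.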
One example of technology making a big difference to productivity is photography. Back in 2000, about 80 billion photos were taken worldwide—a good estimate since only three companies produced film then. In 2015, it appears that more than 1.5 trillion photos were taken worldwide, roughly 20 times as many. At the same time the volume exploded, the cost of photos fell from about 50 cents each for film and developing to essentially zero.
So over 15 years the price fell to zero and output went up 20 times. Surely that is a huge increase in productivity. Unfortunately, most of this productivity increase doesn’t show up in GDP, since the measured figures depend on the sales of film, cameras, and developing services, which are only a small part of photography these days.
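The "roughly 20 times" figure follows directly from the two estimates quoted above; a quick check of the arithmetic:

```python
# Photo-volume arithmetic using only the figures quoted in the passage.
photos_2000 = 80e9      # ~80 billion photos (estimated from film sales)
photos_2015 = 1.5e12    # ~1.5 trillion photos

ratio = photos_2015 / photos_2000
print(ratio)            # 18.75 -- "roughly 20 times as many"

cost_2000 = 0.50        # ~50 cents per photo for film and developing
cost_2015 = 0.0         # marginal cost of a smartphone photo is essentially zero
```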
In fact, when digital cameras were incorporated into smartphones, GDP decreased: camera sales fell, and smartphone prices continued to decline. Ideally, quality adjustments would be used to measure the additional capabilities of mobile phones. But figuring out the best way to do this and actually incorporating these changes into national income accounts is a challenge.
Even if we could accurately measure the number of photos now taken, most are produced at home and distributed to friends and family at zero cost; they are not bought and sold and don’t show up in GDP. Nevertheless, those family photos are hugely valuable to the people who take them.
The same thing happened with global positioning systems (GPS). In the late 1990s, the trucking industry adopted expensive GPS and vehicular monitoring systems and saw significant increases in productivity as a result. In the past 10 years, consumers have adopted GPS for home use. The price of the systems has fallen to zero, since they are now bundled with smartphones, and hundreds of millions of people use such systems on a daily basis. But as with cameras, the integration of GPS with smartphones has likely reduced GDP, since sales of stand-alone GPS systems have fallen.
As in the case of cameras, this measurement problem could be solved by implementing a quality adjustment for smartphones. But it is tricky to know exactly how to do this, and statistical agencies want a system that will stand the test of time. Even after the quality adjustment problem is worked out, the fact that most photos are not exchanged for cash will remain—that isn’t a part of GDP, and technological improvements in that area are just not measured by conventional statistics.
Will the promise of technology be realized?
When the entire planet is indeed connected, everyone in the world will, in principle, have access to virtually all human knowledge. The barriers to full access are not technological but legal and economic. Assuming that these issues can be resolved, we can expect to see a dramatic increase in human prosperity.
But will these admittedly utopian hopes be realized? I believe that technology is generally a force for good—but there is a dark side to the force (see “The Dark Side of Technology,” in this issue of F&D). Improvements in coordination technology may help productive enterprises but at the same time improve the efficiency of terrorist organizations. The cost of communication may drop to zero, but people will still disagree, sometimes violently. In the long run, though, if technology enables broad improvement in human welfare, people might devote more time to enlarging the pie and less to squabbling over the size of the pieces.
Posted by Mark Thoma on Saturday, September 3, 2016 at 10:29 AM in Economics, Productivity, Technology |
Posted by Mark Thoma on Saturday, September 3, 2016 at 12:06 AM in Economics, Links |
Should the Fed keep its balance sheet large?: I attended the Fed’s recent gathering in beautiful Jackson Hole, Wyoming...
As usual, the media were most focused on divining the next policy move of the Federal Open Market Committee (FOMC), but I found the more interesting (and ultimately more consequential) discussions were about the Fed’s longer-term policy framework, the theme of the conference. In this post I’ll report on one important debate: the question of the optimal long-run size of the Fed’s balance sheet. It seemed to me that the strongest arguments made at the conference supported a strategy of keeping the balance sheet large (though comparable to other major central banks), rather than shrinking it to its pre-crisis level as the FOMC currently plans to do. ...
Overall, I think the FOMC’s plan to return to a pre-2008 balance sheet and the associated operating framework needs more thought. The appropriate size and composition of the Fed’s balance sheet inevitably depends on a range of complex decisions about the management of monetary policy and the role of the central bank in preventing and responding to financial crises. We’ve learned a lot about both areas since the crisis, and some important arguments have emerged for keeping the balance sheet larger than in the past. Maybe this is one of those cases where you can’t go home again.
[The full post explains why he believes that the balance sheet should be larger than in the past.]
Posted by Mark Thoma on Friday, September 2, 2016 at 11:00 AM in Economics, Monetary Policy |
Job Growth Slows In August: The Labor Department reported that the economy created 151,000 new jobs in August — slightly less than generally expected. The unemployment rate was unchanged at 4.9 percent and the employment-to-population ratio (EPOP) was also unchanged. ...
While the overall pace of job growth is still reasonably healthy even with the slowdown, a disconcerting item is a decline in the duration of the average workweek. This stood at 34.3 hours in August, down from 34.4 hours in July and 34.6 hours in August of 2015. The drop was large enough to lead to a decline of 0.2 percent in the index of aggregate weekly hours, in spite of the growth in employment.
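The two movements roughly offset as described: the fall in the average workweek outweighed the job gain. A back-of-the-envelope check, assuming an employment base of about 122 million (an assumption for illustration; the text above gives only the hours figures and the payroll gain):

```python
# Why aggregate weekly hours can fall even while payrolls grow.
avg_week_aug, avg_week_jul = 34.3, 34.4   # hours, from the report
jobs_added = 151_000                       # August payroll gain
employment = 122_000_000                   # assumed employment base (hypothetical)

hours_growth = avg_week_aug / avg_week_jul - 1   # ~ -0.29% per worker
employment_growth = jobs_added / employment      # ~ +0.12% more workers

# Aggregate hours = workers x hours per worker, so growth rates compound.
aggregate_hours_growth = (1 + hours_growth) * (1 + employment_growth) - 1
print(f"{aggregate_hours_growth:.2%}")           # ~ -0.17%, i.e. roughly -0.2%
```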
This downward trend could indicate slower hiring in the future. It also seems to contradict the common assertion in the business press that employers are having difficulty finding qualified workers. If this were true, they would be pushing the workers they have to work longer hours.
Wage growth also shows no evidence of accelerating. The average hourly wage increased by 2.4 percent over the last year. Over the last three months, compared with the prior three months, the average hourly wage increased at a 2.5 percent annual rate.
On the household side, the news was mostly positive. There was an increase in the percentage of unemployment due to voluntary quits to 11.3 percent. Although this is the highest level for the recovery, it’s a full percentage point below the pre-recession peak and almost 4.0 percentage points below the peak reached in 2000.
All the duration measures of unemployment fell in the month, with the mean duration of unemployment spells dropping by 0.5 weeks to 27.6 weeks and the median duration falling by 0.4 weeks to 11.2 weeks. And there was a rise in the percentage of black teens with jobs to 23.3 percent, an increase of 2.7 percentage points from the July figure and a new high for the recovery.
There was an increase of 113,000 in the number of people involuntarily working part-time, although this figure is still down by 428,000 from the year ago level. The number choosing to work part-time dropped in August, but is still up by 751,000 from last year’s level. ...
While the overall EPOP was unchanged in August, the EPOP for prime age workers (ages 25–54) edged down by 0.2 percentage points to 77.8 percent. This puts it 2.5 percentage points below its pre-recession peak and more than four full percentage points below the 2000 peak.
On the whole the August report suggests a moderately healthy labor market, but one that is not reaching any constraints. With the EPOP still well below pre-recession levels, there are still many potential workers who would like jobs. Similarly, the recent drop in hours suggests that firms are not straining to find workers, as do the data showing that wage growth is maintaining a moderate pace.
See also: Calculated Risk, Jared Bernstein.
Posted by Mark Thoma on Friday, September 2, 2016 at 10:38 AM in Economics, Unemployment |
"Poisoning kids is a partisan issue":
Black Lead Matters, by Paul Krugman, NY Times: Donald Trump is still claiming that “inner-city crime is reaching record levels,” promising to save African-Americans from the “slaughter.” In fact, this urban apocalypse is a figment of his imagination; urban crime is actually at historically low levels. But he’s not the kind of guy to care about another “Pants on Fire” verdict from PolitiFact.
Yet some things are, of course, far from fine in our cities, and there is a lot we should be doing to help black communities. We could, for example, stop pumping lead into their children’s blood. ... Like it or not, poisoning kids is a partisan issue. ...
I’ve just been reading a new study ... confirming the growing consensus that even low levels of lead in children’s bloodstreams have significant adverse effects on cognitive performance. And lead exposure is still strongly correlated with growing up in a disadvantaged household. ...
What with everything else filling the airwaves, it may be hard to focus on lead poisoning, or environmental issues in general. But there’s a huge difference between the candidates, and the parties, on such issues. And it’s a difference that will matter whatever happens to Congress: A lot of environmental policy consists in deciding how to apply existing laws, so that if Hillary Clinton becomes president, she can have substantial influence even if she faces obstruction from a Republican Congress.
And the partisan divide is exactly what you would expect.
Mrs. Clinton has pledged to “remove lead from everywhere” within five years. She probably wouldn’t be able to get Congress to pay for that ambitious an agenda, but everything in her history, especially her decades-long focus on family policy, suggests that she would make a serious effort.
On the other side, Mr. Trump — oh, never mind. He rants against government regulations of all kinds, and you can imagine what his real estate friends would think about being forced to get the remaining lead out of their buildings. Now, maybe he could be persuaded by scientific evidence to do the right thing. Also, maybe he could be convinced to become a Buddhist monk, which seems about equally likely.
The point is that the divide over lead should be seen not just as important in itself but as an indicator of the broader stakes. If you believe that science should inform policy and that children should be protected from poison, well, that’s a partisan position.
Posted by Mark Thoma on Friday, September 2, 2016 at 10:28 AM in Economics, Environment, Politics |
Posted by Mark Thoma on Friday, September 2, 2016 at 12:06 AM in Economics, Links |
Thoughts Ahead Of The Employment Report, by Tim Duy: The August employment report has come to be seen as the deciding factor in the Fed's upcoming decision on rates. See Sam Fleming at the Financial Times here. Maybe this is the case, maybe not. I hope not. Hinging policy on the first print of nonfarm payrolls - a volatile, heavily revised number - would be pretty low quality policy making.
I keep coming back to this by Federal Reserve Chair Janet Yellen from back in December:
...total real private domestic final purchases (PDFP)--which includes household spending, business fixed investment, and residential investment, and currently represents about 85 percent of aggregate spending--has increased at an annual rate of 3 percent this year, significantly faster than real GDP. Household spending growth has been particularly solid in 2015, with purchases of new motor vehicles especially strong.
This was Yellen's way of justifying a rate hike last December in spite of faltering GDP numbers. Trouble is that PDFP has continued its downward slide since then:
Final sales here are off roughly 1.5 percentage points from their cycle highs. That is a nontrivial swing. It is no wonder that job growth accelerated in 2013-14 and then decelerated in 2015:
I tend to think there is room for some further deceleration. Note too that progress on reducing underemployment slowed markedly:
and the unemployment rate is flattening out:
Now, you might say that the Fed needs to hike because wages are rising. But I would say that wages are a lagging indicator
and are likely to continue rising even after a recession begins. Overly shifting the policy focus to wage growth would be a red flag in my opinion. I think the Fed tends to focus too much on lagging indicators in the later stages of a business cycle while ignoring their own policy lags. The end result is overly tight policy.
So when I look at the data, I don't see that the August employment report should be a critical factor in a rate hike decision. I think the critical factors should be the Fed's confidence that growth is set to rebound in the second half of the year and the balance of policy risks.
On the first point, while early signals on growth are positive - see the Atlanta Fed GDPNow measure, for example - they are still just early signals. And today's ISM release doesn't indicate that a manufacturing rebound is right around the corner, so maybe that rebound in investment spending just might take more time as well. And auto sales look to have peaked and are flattening out, so that is not likely to be a source of growth and might be a slight drag. So, overall, I don't think we have enough data to be confident that growth will rebound just yet.
Regarding the balance of policy risks, that asymmetry has not magically gone away. The Fed has less room to ease than tighten. And inflation remains mired below target:
So I don't see that the basic calculus here has changed. If the Fed errs by being too loose now, they have plenty of wiggle room on inflation and policy to respond. If they err by being too tight, they don't have much policy room and they risk holding inflation below 2 percent for another decade. What's that going to do for inflation expectations?
All that said, there appears to be a movement among FOMC members to minimize the asymmetry of the policy risks. First you have Federal Reserve Vice Chair Stanley Fischer arguing that inflation is close enough to target that it shouldn't be a concern. Via Greg Robb at MarketWatch:
And the core measure of the personal consumption expenditure index — the Fed’s favorite measure of inflation — at 1.6% “is within hailing distance” of the central bank’s 2% target, Fischer added.
I'm starting to think Fischer is still living in the 1970s. But perhaps more disconcerting is Yellen's final line from her Jackson Hole speech:
But even if average interest rates remain lower than in the past, I believe that monetary policy will, under most conditions, be able to respond effectively.
She is playing down the asymmetric policy risk issue here. Given the experience of the past decade, she is way too complacent in my opinion. And her complacency hinges on the assumption that they now know the (nominal) natural rate of interest is 3 percent. But that has been a moving target. And I don't think that is the signal being sent by the long end of the yield curve.
Then there is the financial stability argument. All I will say on that is the Fed had better be damn certain that they are facing a real risk to the economy before they pull the trigger on that argument. And I don't see how they can be that certain.
Bottom Line: Regardless of the outcome of the employment report, good or bad, I don't see a good case for moving this month. Too many questions about the forecast, and they still face persistently low inflation and asymmetric policy risks. But all that said, there seems to be a large swath of voting members ready to get behind a rate hike. I think the low odds on a rate hike in September are the market's way of telling the Fed that if they do hike, it would be a mistake.
Posted by Mark Thoma on Thursday, September 1, 2016 at 01:21 PM in Economics, Fed Watch, Monetary Policy |
We Need Forceful Policies to Avoid the Low-Growth Trap: ... Forceful policy actions are needed to avoid what I fear could become a low-growth trap. Here are the key elements of a global growth agenda as I see them:
- The first element is demand support in economies that operate below capacity. In recent years, this task has been delegated mostly to central banks. But monetary policy is increasingly stretched, as several central banks are operating at or close to the effective lower bound for policy rates. This means fiscal policy has a larger role to play. Where there is fiscal space, record-low interest rates make for an excellent time to boost public investment and upgrade infrastructure.
- The second element is structural reforms. Countries are not doing nearly enough in this area. Two years ago, the members of the G-20 pledged reforms that would lift their collective GDP by an additional two percent over 5 years. But in the most recent assessment, the measures implemented to date are worth at most half this amount—so more reforms are urgent. IMF research shows that reforms are most effective when they are prioritized along countries’ reform gaps and take into account the level of development and position in the business cycle.
- The third element is reinvigorating trade by reducing trade costs and rolling back temporary trade barriers. It is easy to blame trade for all the ills afflicting a country—but curbing free trade would be stalling an engine that has brought unprecedented welfare gains around the world over many decades. However, to make trade work for all, policymakers should help those who are adversely affected through re-training, skill building, and assisting occupational and geographic mobility.
- Finally, policies need to ensure that growth is shared more broadly. Taxes and benefits should bolster incomes at the low end and reward work. In many emerging economies, stronger social safety nets are needed. Investments in education can raise both productivity and the prospects of low-wage earners.
It takes political courage to implement this agenda. But inaction risks reversing global economic integration, and therefore stalling an engine that, for decades, has created and spread wealth around the globe. This risk is, in my view, too large to take.
Posted by Mark Thoma on Thursday, September 1, 2016 at 11:32 AM in Economics |
Thomas Belsham at Bank Underground:
When asset managers go MAD: What do the Cold War powers of the United States and the USSR have in common with modern day asset managers? The capacity for mutually assured destruction. During the 1950s game theorists described a model of strategic interaction to demonstrate how it might be that two nations would choose to annihilate each other in nuclear conflict. Simply put, each nation had an incentive to strike first, as there was no incentive to retaliate. Both would race to push the button. Asset managers face a similar set of incentives.
The strategic interaction of these two superpowers was subsequently formalised as the “prisoner’s dilemma” by Albert W. Tucker. In the prisoner’s dilemma, two rational decision makers choose to pursue an uncooperative course of action that is detrimental to both, rather than cooperate to arrive at a preferable one. The reason the two parties can arrive at such an outcome is that for each individual, it is always better not to cooperate than to cooperate, regardless of the course taken by the other.
In the original thought experiment, there are two prisoners facing conviction for a crime, but they are suspected of a greater one. The sentence for the less serious offence is two years. Each is offered a deal: snitch on your friend for the greater of the two crimes and you can go free. But your friend gets seven years. If both snitch, both get five years. So, to snitch or not to snitch?
What is clear is that it is better for both to keep quiet, and get two years, than for both to snitch, and get five each. Yet snitching is the ‘rational’ outcome; it’s the best strategy for me regardless of what my partner does (notwithstanding the risk of reprisals after the seven years I’ve spent in the Bahamas while my partner has been behind bars).
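That dominant-strategy logic is easy to check mechanically. Here is a minimal sketch of the payoff table just described, with a best-response function confirming that snitching minimises my sentence whatever my partner does (the dictionary layout and function names are illustrative, not from the original post):

```python
# Payoffs (years in prison, lower is better), as described above:
# both quiet -> 2 each; one snitches -> snitcher 0, the other 7; both snitch -> 5 each.
years = {
    ("quiet", "quiet"):   (2, 2),
    ("quiet", "snitch"):  (7, 0),
    ("snitch", "quiet"):  (0, 7),
    ("snitch", "snitch"): (5, 5),
}

def best_response(partner_move):
    """Return the move that minimises my sentence given my partner's move."""
    return min(["quiet", "snitch"], key=lambda mine: years[(mine, partner_move)][0])

# Snitching dominates: it is my best response whatever my partner does.
print(best_response("quiet"))   # snitch (0 years beats 2)
print(best_response("snitch"))  # snitch (5 years beats 7)
```

Both best responses come out as "snitch", so (snitch, snitch) is the unique equilibrium even though (quiet, quiet) is better for both.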
Table 1: The prisoner’s dilemma
But what does that have to do with the asset management industry? Well, arguably, because it may, under some circumstances, be possible to characterise the coordination problem faced by asset managers as a prisoner’s dilemma. That stems from two factors. The first relates to the high degree of concentration of the asset management industry – which results in potential spillovers from the actions of one asset manager to the payoffs of others. The second relates to incentives arising from the practice of using peer comparison across individual asset managers to monitor performance – which makes relative payoffs important.
The combination of these two features of the market could give rise to a situation in which, in a period of financial stress, when there are concerns about falls in asset prices, rather than hold one’s nerve and stand pat, individual asset managers might reason that it is preferable to sell instead. If all asset managers reason thus, the resulting rush for the exit – and downward pressure on asset prices – could result in considerably bigger losses for everyone than if asset managers had coalesced on the cooperative outcome.
Putting this interaction in a formal context, in the asset manager game below, in a period of financial market stress, anticipating that there may be a fall in asset prices, each player can either hold his (or her) position, or sell. In the event that both hold, each receives a loss of 1. But each also has the option of selling, in the hope of being first out of the door. If successful, that player reduces his own loss to 0, but increases the loss suffered by the other player to 2. If both players sell, this is the worst outcome of all (each loses 3).
Table 2: The asset manager’s dilemma
(players’ relative payoffs are shown in red)
Now, this isn’t quite the same as the prisoner’s dilemma above. In the classic example, the bad outcome occurs because it makes sense not to cooperate no matter what the other player chooses. Here, it is possible that both players will be greedy and sell – gambling that the other will hold. But if one player knows that the other will sell, it would not be rational to sell as well. This would increase that player’s own loss from 2 to 3. The greedy player will simply be allowed to get away with it. So what would make it in the interest of player 1 to make losses even worse by choosing to sell as well?
This is where the practice of benchmarking comes into play. Asset managers tend to monitor performance against each other or against common benchmarks, to help investors compare investment propositions. As a result, benchmarking creates an externality in which the performance of one’s peers affects one’s own payoffs. And it becomes in an individual asset manager’s best interest to minimise deviation from the rest of the pack – because his (or her) reputation and ability to raise a new fund and operate henceforth are a function of relative performance.
Now, in ‘good’ states of the world (the cooperative outcome, or the outcome in which the greedy gamble pays off) the externality does not affect behaviour: if an asset manager cooperates, or successfully cheats, his performance is either as good, or better than that of his peers. “Look at me! I’m doing at least as well as the other guy!” Happily, here, the asset manager’s incentives are aligned with those of the investor.
But, importantly, in ‘bad’ states (non-cooperation, or being cheated on), if one’s opponent is cheating, it is preferable to cheat as well, and both incur a big loss, than to be cheated on and incur a smaller loss. Better that, than stand out as an underperformer and risk losing one’s livelihood. “Well, we all did terribly. See you at the fundraiser!” As a result, selling will be that much more widespread, and asset price falls that much bigger, than otherwise. The asset manager’s incentives are not aligned with those of the investor.
Formally, each individual player’s preferred outcome is to cheat successfully (A). And each player prefers small losses (B) to big losses (C). But each would also prefer that both incur a big loss, than see the other profit at his own expense (D). A is preferred to B, B to C, and C to D. This is the general form of the classic prisoner’s dilemma (Table 3). Regardless of the decision of Player 1, it is in the interest of Player 2 to sell. For the investor in the asset manager game, however, it is clear that this represents a pretty miserable outcome.
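One simple way to formalise the benchmarking externality is to score each player by his loss minus his peer's loss, so that underperforming the pack is what hurts. This is my own reading of "relative payoffs", not a formula from the post, but under it the same loss table now makes selling dominant, exactly as the general-form argument claims:

```python
# Same absolute losses as in the asset manager game above.
loss = {
    ("hold", "hold"): (1, 1),
    ("hold", "sell"): (2, 0),
    ("sell", "hold"): (0, 2),
    ("sell", "sell"): (3, 3),
}

def relative_loss(mine, theirs):
    """Benchmarked score: my loss minus my peer's (negative = I beat the pack)."""
    my_loss, peer_loss = loss[(mine, theirs)]
    return my_loss - peer_loss

def best_response(opponent_move):
    return min(["hold", "sell"], key=lambda mine: relative_loss(mine, opponent_move))

# Under relative payoffs, selling dominates whatever the peer does:
print(best_response("hold"))  # sell (relative -2 beats 0)
print(best_response("sell"))  # sell (relative 0 beats +2)
```

Both best responses are now "sell", reproducing the prisoner's-dilemma structure: everyone races for the exit, and everyone takes the biggest loss.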
Table 3: The general form of the prisoner’s dilemma
The question, then, is how to align the incentives of the asset manager with those of the investor. The cooperative outcome becomes more likely if each player’s rewards reflect the absolute return generated by the fund, rather than performance relative to benchmark. This at least takes away some of the incentive to engage in mutually disadvantageous strategies, although greed on its own is still sufficient to yield the ‘sell, sell’ result.
To reduce the likelihood of that happening, the academic literature on the prisoner’s dilemma suggests that in games where strategies are pursued on a probabilistic basis, a cooperative equilibrium becomes possible. Mechanisms which lower the probability of ‘sell’ might help to nudge players towards the jointly preferred outcome.
Moreover, cooperative outcomes also sometimes result from repeated games of the prisoner’s dilemma, provided that the end point is not known (otherwise players reason that it makes sense to cheat on the last game, and knowing that the other player will reason thus, infer that it will also make sense to cheat in the penultimate game, and so on, right back to the very first interaction). So perhaps we should embrace opportunities for players to arrive at the cooperative outcome; a little volatility may not be a bad thing.
Posted by Mark Thoma on Thursday, September 1, 2016 at 11:32 AM in Economics, Financial System, Market Failure |
Posted by Mark Thoma on Thursday, September 1, 2016 at 12:06 AM in Economics, Links |