Category Archive for: Technology [Return to Main]

Thursday, December 31, 2015

'Beating a Dead Robotic Horse'

Dietz Vollrath:

Beating a Dead Robotic Horse: One of the recurring themes on this blog has been the consequences of robots, AI, or rapid technological change on labor demand. Will humans be put out of work by robots, and will this mean paradise or destitution? I’ve generally argued that we should be optimistic about robots and AI and the like, but others have made coherent arguments for pessimism. I spent a chunk of this week reading over posts, both new and old, and thinking more about these positions.

If there is one distinct difference between the robo-pessimist and robo-optimist view, it is almost exclusively down to timing. The pessimists are worried that the rapid decline of human labor is occurring now, and in many cases has been occurring for a while already. The optimists believe that we have time in front of us to sort things out before human labor is replaced en masse.

Brynjolfsson and McAfee’s latest is a good example of this robo-optimist view. They concede that human labor is in danger of being replaced... But at the same time they do not think this is imminent...

On the robo-pessimism side, Richard Serlin has a mega-post about the declining prospects for human labor and the possible consequences. What is interesting about Richard’s post is that he essentially makes the case that the replacement of human labor by automation has been occurring for decades; we are already living with it...

I think it is helpful to get beyond the binary viewpoints. ...

I tend to be a weak robo-optimist. I, like Brynjolfsson and McAfee, completely agree that robots/AI will create a drag on the demand for human labor, and in particular unskilled labor. My robo-optimism isn’t a belief about technology. It is a belief that we can figure out how to manage the glide path towards shorter work hours while maintaining living standards for everyone. It’s a good thing that we’ll have to work less.

And there remains a little piece of strong robo-optimism lurking inside of me. I don’t think “work less” is really well defined. We will likely have to spend less time working for wages to afford the basic material goods in our lives. But that doesn’t mean we won’t spend lots of our time “working” for each other doing other things. Whether that work is paid in wages or not is immaterial.

[There's quite a bit more in the post that I left out.]

Thursday, December 17, 2015

'How Socioeconomic Status Impacts Online Learning'

Are MOOCs the answer to educational inequality?:

How Socioeconomic Status Impacts Online Learning: The driving force behind the increasing popularity of massive open online courses (MOOCs) is that they provide — as the term defines it — open access to a massive online audience. Anyone with an Internet connection who wants to learn, can. Whether you’re rich or poor, living in a New York City high-rise or a remote Nepalese village, MOOCs promise to level the higher education playing field. The question is: Does reality reflect this ideal?
A new research study by MIT education researcher Justin Reich and Harvard University’s John Hansen seeks the answer. “Democratizing Education? Examining Access and Usage Patterns in Massive Open Online Courses” takes a close look at how socioeconomic resources influence MOOC enrollment and course completion — and whether online learning is truly opening as many doors as anticipated.
“One way we might democratize education would be to provide more widespread access to academic experiences previously reserved for the elite,” explains Reich, who is the executive director of MIT's PK-12 Initiative. “But historically, emerging learning technologies — even free ones — have often benefited people with the social, technical, and financial capital to take advantage of new innovations. As we try to bridge the digital divide, we need to carefully examine how new tools are used by learners from different walks of life.” ...
Reich’s study uses three indicators: parental educational attainment, neighborhood average educational attainment, and neighborhood median income.
The research finds that these indicators are correlated with student enrollment and success in MOOCs, especially among younger students. Young students enrolling in HarvardX and MITx on edX live in neighborhoods where the median income is 38 percent higher than typical American neighborhoods. Among teenagers who register for a HarvardX course, those with a college-educated parent have nearly twice the odds of finishing the course compared to students whose parents did not complete college. At exactly the ages where online learning could offer a new pathway into higher education, already affluent students are more likely to enroll in a course and succeed.
The takeaway is that MOOCs have not yet solved SES-related disparities in educational outcomes, and Reich believes it’s critical to turn these learnings into actions in order to narrow the gaps between MOOC perception and reality.
“MOOCs and other forms of online learning don’t yet live up to their promise to democratize education,” he says. “Closing this digital divide is exactly the kind of grand challenge that the world’s greatest universities should be tackling head on.”
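The “nearly twice the odds” figure above is a statement about odds ratios, not probabilities, though at the low completion rates typical of MOOCs the two are close. A quick sketch with hypothetical completion rates (invented for illustration, not taken from the Reich–Hansen study) shows the distinction:

```python
# Hypothetical completion rates -- illustrative only, not from the study.
p_college_parent = 0.04   # assumed completion rate, college-educated parent
p_no_college = 0.02       # assumed completion rate otherwise

def odds(p):
    """Convert a probability into odds, p / (1 - p)."""
    return p / (1 - p)

odds_ratio = odds(p_college_parent) / odds(p_no_college)
risk_ratio = p_college_parent / p_no_college

# With rare outcomes the odds ratio is close to the ratio of
# probabilities, but they are not the same quantity.
print(round(odds_ratio, 2))  # about 2.04
print(round(risk_ratio, 2))  # 2.0
```

With common outcomes the two measures diverge sharply, which is one reason "twice the odds" claims reward careful reading.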

Friday, November 20, 2015

'Some Big Changes in Macroeconomic Thinking from Lawrence Summers'

Adam Posen:

Some Big Changes in Macroeconomic Thinking from Lawrence Summers: ...At a truly fascinating and intense conference on the global productivity slowdown we hosted earlier this week, Lawrence Summers put forward some newly and forcefully formulated challenges to the macroeconomic status quo in his keynote speech. [pdf] ...
The first point Summers raised ... pointed out that a major global trend over the last few decades has been the substantial disemployment—or withdrawal from the workforce—of relatively unskilled workers. ... In other words, it is a real puzzle to observe simultaneously multi-year trends of rising non-employment of low-skilled workers and declining measured productivity growth. ...
Another related major challenge to standard macroeconomics Summers put forward ... came in response to a question about whether he exaggerated the displacement of workers by technology. ... Summers bravely noted that if we suppose the “simple” non-economists who thought technology could destroy jobs without creating replacements in fact were right after all, then the world in some aspects would look a lot like it actually does today...
The third challenge ... Summers raised is perhaps the most profound... In a working paper the Institute just released, Olivier Blanchard, Eugenio Cerutti, and Summers examine essentially all of the recessions in the OECD economies since the 1960s, and find strong evidence that in most cases the level of GDP is lower five to ten years afterward than any prerecession forecast or trend would have predicted. In other words, to quote Summers’ speech..., “the classic model of cyclical fluctuations, that assume that they take place around the given trend is not the right model to begin the study of the business cycle. And [therefore]…the preoccupation of macroeconomics should be on lower frequency fluctuations that have consequences over long periods of time [that is, recessions and their aftermath].”
I have a lot of sympathy for this view. ... The very language we use to speak of business cycles, of trend growth rates, of recoveries to those perhaps non-stationary trends, and so on—which reflects the underlying mental framework of most macroeconomists—would have to be rethought.
Productivity-based growth requires disruption in economic thinking just as it does in the real world.

The full text explains these points in more detail (I left out one point on the measurement of productivity).

Tuesday, November 10, 2015

Calibrating the Hype about Online Higher Education

This is from Tim Taylor:

Calibrating the Hype about Online Higher Education: "Massive open online courses" (MOOCs) and other aspects of online higher education were white-hot a few years ago, but I'd say that they have cooled off to only red-hot. Two economists who have also been college presidents, Michael S. McPherson and Lawrence S. Bacow, discuss the current state of play and offer some insights in "Online Higher Education: Beyond the Hype Cycle," appearing in the Fall 2015 issue of the Journal of Economic Perspectives. Here are some points that caught my eye.

About one-quarter of higher education students took an online course in 2013, and about one-ninth of higher education students took all of their courses online that year. 

"The US Department of Education recently began to conduct its own survey of online education as part of its Integrated Post-Secondary Education Data System (IPEDS), with full coverage of the roughly 4,900 US institutions of higher education. As shown in Table 1, IPEDS data indicates that as of 2013, about 26 percent of all students took at least one course that was entirely online, and about 11 percent received all of their education online." 

When it comes to the possibility of education technologies that can operate at large scale with near-zero marginal costs, there's a history of overoptimism. Here's a quick sketch of promises about educational radio, and then educational television. 

"Berland (1992), citing a popular commentator named Waldeman Kaempffert writing in 1924, reported that “there were visions of radio producing ‘a super radio orchestra’ and ‘a super radio university’ wherein ‘every home has the potentiality of becoming an extension of Carnegie Hall or Harvard University.’” Craig (2000) reports that “the enthusiasm for radio education during the early days of broadcasting was palpable. Many universities set up broadcast stations as part of their extension programs and in order to provide their engineering and journalism students with experience in radio. By 1925 there were 128 educational stations across the country, mostly run by tertiary institutions” (p. 2831). The enthusiasm didn’t last—by 1931 the number of educational stations was down to 49, most low-powered (p. 2839). This was in part the result of cumbersome regulation, perhaps induced by commercial interests; but the student self-control problem ... likely played a role as well. As NBC representative Janice Waller observed, “Even those listeners who clamored for educational programs, Waller found, secretly preferred to listen to comedians such as Jack Benny. These “intellectually dishonest” people “want to appear very highbrow before their friends . . . but down inside, and within the confines of their own homes, they are, frankly, bored if forced to listen to the majority of educational programs” (as quoted in Craig 2000, pp. 2865–66).

"The excitement in the late 1950s about educational television outshone even the earlier enthusiasm for radio. An article by Schwarzwalder (1959, pp. 181–182) has an eerily familiar ring: “Educational Television can extend teaching to thousands, hundreds of thousands and, potentially, even millions. . . . As Professor Siepman wrote some weeks ago in The New York Times, ‘with impressive regularity the results come in. Those taught by television seem to do at least as well as those taught in the conventional way.’ . . . The implications of these facts to a beleaguered democracy desperately in need of more education for more of its people are immense. We shall ignore these implications at our national peril.” Schwarzwalder goes on to claim that any subject, including physics, manual skills, and the arts, can be taught by television, and even cites experiments that show “that the discussion technique can be adapted to television.”"

The Internet offers the possibility not just of widespread distribution of education material, but also of interactive content. But if the content is to be richly interactive--that is, more than just a short multiple-choice quiz inserted into the recorded material--the costs of design and production could be very substantial. 

"Richly interactive online instruction is obviously much more expensive than Internet-delivered television. The development costs for Carnegie Mellon’s sophisticated but far from fully computer-adaptive courses in statistics and other fields have been estimated at about $1 million each (Parry 2009). Although future technical developments will reduce the costs of providing a course of a fixed level of quality over time, those future technical developments will also encourage the provision of additional features. Universities can invest in improving the production values of such television programs at the margin in ways that range from multiple camera angles to the incorporation of sophisticated graphics and live location video. Many interactive courses could also conceivably benefit from regular updating based on recent events or scholarship ... Our point is that while online courses offer the potential for constant modification and updates, realizing this potential may in fact be expensive, leading to less-frequent updates than for traditionally taught subjects. ... Those who foresee the widespread adoption of adaptive learning technology often underestimate the cost of producing it. Stanford President John Hennessey, in a recent lecture to the American Council of Education, estimated the cost of producing a first-rate highly interactive digital course to be in the millions of dollars (Jaschik 2015). Few individual institutions have the resources to make such investments. Furthermore, while demand may be substantial enough to support such investments for basic introductory courses in fields that easily lend themselves to such instruction, it is unlikely that anyone will invest in the creation of such courses for upper-level courses unless they can be adopted at scale."

There's no guarantee that online tools will reduce the costs of higher education. One possibility is that well-endowed universities use online higher education as a way to drive up costs--since these schools often compete to provide a high-end experience. For example, expensive schools might "flip the classroom" by paying for both a rich and interactive online course, and then also hiring enough faculty members (not graduate students!) to staff a large number of discussion and problem-solving sections.

Indeed, there is a real chance that at least in selective higher education, technology will actually be used to raise rather than lower cost. There are obvious ways to use online materials to complement rather than to substitute for in-person instruction. Flipping the classroom, as we will explain further, is one. Instructors can also import highly produced video material—either purchased or homemade— to complement their classes, and there could easily emerge a market in modular lessons aimed at allowing students to extend material farther or to get a second take on a difficult set of concepts. If individual faculty members are authorized to make these choices, and universities agree to subsidize expensive choices, costs seem likely to rise. 

Conversely, schools that are lower-ranked and with fewer financial resources may be pushed to focus on implementing a low-cost and mostly online curriculum. 

Broad-access unselective institutions are already among the largest users of online instruction. These institutions are responsible for the education of many students—at least half of all those enrolled in postsecondary education—and they disproportionately educate lower-income students and students of color. Enabling technological advances to support improvement in the educational success of these institutions at manageable cost is an important goal, arguably the most important goal for using technology to improve American higher education. (Of course, the implications of these technologies for global learning would be potentially gigantic.) There is especially high potential for online education to cater to the large number of nontraditional students, which includes adult learners and those who have a very high opportunity cost of attending college whether at the undergraduate or graduate level. For this group of students, asynchronous online learning can be a godsend. Opportunities surely exist for technology to penetrate this market further, and quality is likely to improve as faculty and others figure out how to take better advantage of new educational technology. As the technology improves and as more institutions adopt it, more of these students are likely to receive all or at least some of their education online. 

Yet this great opportunity is accompanied by considerable risk. It is all too easy to envision legislators who see a chance to cut state-level or national-level spending that supports higher education by imposing cheap and ineffective online instruction on institutions whose students lack the voice and political influence to demand quality. It’s equally easy to imagine for-profit institutions proffering online courses in a way that takes advantage of populations with little experience with college in a marketplace where reliable information is scarce.

(Full disclosure: I've been Managing Editor of the Journal of Economic Perspectives since 1987. All JEP articles from the current issue going back to the first issue are freely available online courtesy of the publisher, the American Economic Association.)

Saturday, November 07, 2015

'Economic Policy Splits Democrats'

Anyone think this is correct?:

Economic Policy Splits Democrats, WSJ: The old guard of a party that laid the groundwork for the election of a two-term president watches with unease at what’s happening to their electoral prospects and economic policy proposals. ...
That alarm shines through in a new 52-page report from centrist Democratic think tank the Third Way...
“The right cares only about growth, hoping it will trickle down,” says Jonathan Cowan, president of Third Way. The left, meanwhile, is too focused on “redistribution to address income inequality.”
Third Way says a better agenda focuses on growth by promoting skills, job growth and wealth creation without adding to deficits or raising taxes on the middle class. Its report outlines a series of policies it says can do this...
The gist of the report concludes that the economic problems facing the American middle class have less to do with unfairness—or the idea that the system is fundamentally “rigged” against workers—and more to do with technological and globalization forces that can’t be reversed.

[That statement will drive Larry Mishel nuts.]

The report spotlights a divide on the left in both substance and style. ...
Progressives want to see a more fundamental rewrite of the rules to break up political power, on par with President Theodore Roosevelt’s “trust-busting” of a century ago. “This country is in real trouble,” Ms. Warren said at the May event. “The game is rigged and we are running out of time.”
That kind of rhetoric gives Mr. Cowan fits because he says it isn’t a winning political message. ...
He says that leading economic ideas on the left, including advocacy for a $15 minimum wage, expanded Social Security benefits and a single-payer health-care system, won’t play well with independent voters. The report cites focus group research in advancing its argument that Americans, particularly independents and moderate voters, are more anxious than they are angry about these changes.
Third Way cites the failures of main street icons such as Kodak, Borders Books and Tower Records as proof that new technologies and delivery systems, as opposed to a “stacked deck” in Washington, are primarily responsible for economic upheaval.

Tower Records explains inequality? Seriously? From Larry Mishel (linked above):

Many economists contend that technology is the primary driver of the increase in wage inequality since the late 1970s, as technology-induced job skill requirements have outpaced the growing education levels of the workforce. The influential “skill-biased technological change” (SBTC) explanation claims that technology raises demand for educated workers, thus allowing them to command higher wages—which in turn increases wage inequality. A more recent SBTC explanation focuses on computerization’s role in increasing employment in both higher-wage and lower-wage occupations, resulting in “job polarization.” This paper contends that current SBTC models—such as the education-focused “canonical model” and the more recent “tasks framework” or “job polarization” approach mentioned above—do not adequately account for key wage patterns (namely, rising wage inequality) over the last three decades.

So, should I adopt a message I don't think is true because it sells with independents who have been swayed by Very Serious People, or should I say what I believe and try to convince people they are barking up the wrong tree? (For the most part anyway, I believe both the technological/globalization and institutional/unfairness explanations have validity -- but how do workers capture the gains Third Way wants to create through growth and wealth creation without the bargaining power they have lost over time with the decline in unionization, threats of offshoring, etc.? That's the bigger problem.) It is unfair when, say, economic or political power redirects income away from those who created it to those who did not (I am using the normative equity principle that each person has a right to keep what he or she produces, to reap what he or she has sown, and I have little doubt that workers have been paid less than their productivity, and those at the top more. That's unfair, and redirecting income -- redistributing if you will -- to those who actually earned it is not harmful. It is just, and it creates the correct economic incentives). Wealth creation/growth has not been the biggest problem over the last four decades (i.e., since inequality started to increase); the problem is how the gains have been distributed. I'd rather convince people of the truth that more growth and more wealth creation won't solve the problem if we don't address workers' bargaining power at the same time than gain their support by patronizing their views. In the meantime, redistributing income from those who didn't earn it to those who did can serve as a temporary solution until we get the more fundamental underlying problems fixed (e.g., level the playing field on bargaining power between workers and firms).

Maybe politicians have to tell people what they want to hear, I'll let them figure that out, but I will continue to call it as I see it even if "independents and moderate voters are more anxious than they are angry about these changes." That won't change if we play into those anxieties instead of explaining why new approaches are needed, and explaining how they will benefit from a system that does a better job of rewarding hard work instead of ownership, connections, and power.

Saturday, October 03, 2015

'The Romer Model Turns 25'

Joshua Gans:

The Romer Model turns 25: 25 years ago this month Paul Romer’s paper, “Endogenous Technological Change” was published in the Journal of Political Economy. After over 20,000 citations, it is one of the most influential economics papers of that period. The short version of what that paper did was to provide a fully specified model whereby technological change (i.e., the growth of productivity) was driven not by outside (or exogenous) forces but, instead, by the allocation of resources to knowledge creation and with a complete description of the incentives involved that provided for that allocation. Other papers had attempted this in the past — as outlined in David Warsh’s great book of 2006 — and others provided alternatives at the same time (including Aghion, Howitt, Grossman, Helpman, Acemoglu and Weitzman) but Romer’s model became the primary engine that fueled a decade-long re-examination of long-term growth in economics; a re-examination that I was involved in back in my student days.
Recently, Romer himself has taken on others who, more recently, have continued to provide models of endogenous economic growth (most notably Robert Lucas) for not building on the work of himself and others that grounded the new growth theory in imperfect competition but instead trying to formulate models based on perfect competition instead. I don’t want to revisit that issue here but do want to note that “The Romer Model” is decidedly non-mathy. As a work of theoretical scholarship, every equation and assumption is carefully justified. The paper is laid out with as much text as there is mathematics. And in the end, you know how the model works, why it works and what drives its conclusions. ...
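For readers who have not seen it, the engine of the model can be sketched in a single equation. This is the standard textbook rendering of the knowledge production function from Romer (1990), with notation from that paper rather than from Gans's post:

```latex
% Romer (1990) knowledge production function:
%   \dot{A} : flow of new ideas (designs)
%   H_A     : human capital allocated to research
%   A       : existing stock of ideas
%   \delta  : research productivity parameter
\dot{A} = \delta H_A A
```

Because the existing stock of ideas $A$ raises the productivity of researchers, ideas build on ideas, and growth is sustained by whatever incentives allocate $H_A$ to research rather than to producing goods.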

After explaining the contributions in detail, he also covers:

So why has work in this area somewhat petered out? ...

And ends with:

In summary, the Romer model was a milestone and led to much progress. It is a stunningly beautiful work of economic theory. But there is more to be done and my hope is we will see that happen in the future as the cumulative process that drives new knowledge can drive new economic knowledge as well.

Friday, September 25, 2015

''Technological Progress Anxiety: Thinking About 'Peak Horse' and the Possibility of 'Peak Human'''

Brad DeLong:

Technological Progress Anxiety: Thinking About “Peak Horse” and the Possibility of “Peak Human”, by Brad DeLong: Another well-written piece by an authorial team led by the very sharp Joel Mokyr–The History of Technological Anxiety and the Future of Economic Growth: Is This Time Different?–that in my mind fails to wrestle with the major question, and so leaves me unsatisfied.

Hitherto,... every form of non-human power that substitutes and thus tends to reduce the value of human backs and thighs...–from the horse to the watermill to the steam engine to the diesel to the jet engine–and every single source of manipulation–from the potter’s wheel to the loom to the spinning jenny to the assembly line to the mechanized factory–has required a cybernetic control mechanism. Without such a mechanism, machines are useless. They cannot keep themselves on course and on track. ... And as cybernetic control mechanisms, human brains had an overwhelming productivity edge.

The fear is that this time things really are different. The fear is that, this time, technological anxiety is not misguided... For the first time, we find our machines substituting not for human backs, thighs, eyes, and hands, but for human brains. ... And this factor is offset only by the hope that our machines will reduce the market value of commodities faster than they reduce the value of the median worker’s labor...

I must say that I really do wish that Mokyr et al. had included, in their paper, a discussion of “peak horse”.

A standard economists’ argument goes roughly like this: Technology is introduced only when it is profitable, and lowers the costs of production. Thus the prices of the goods and services produced must go down, leaving consumers with more money to spend on other products, and this creates demand for any workers who are displaced. Thus there will always be new industries growing up to employ any workers displaced by technological change in existing industries.

But that argument applies just as well to the oats, apples, and grooming needed for horses to subsist as for the wages of humans, no? One could ... just as easily have said, a century ago, that: “Fundamental economic principles will continue to operate. Scarcities will still be with us…. Most horses will still have useful tasks to perform, even in an economy where the capacities of power sources and automation have increased considerably…”

Yet ... “Peak horse” in the U.S. came in the 1910s, I believe. After that there was no economic incentive to keep the horse population of America from declining sharply, as at the margin the horse was not worth its feed and care. And in a marginal-cost pricing world, in which humans are no longer the only plausible source of Turing-level cybernetic control mechanisms, what will happen to those who do not own property should the same come to be true, at the margin, of the human? What would “peak human” look like? Or–a related but somewhat different possibility–even “peak male”?

Sunday, September 13, 2015

'The Jobs that AI Can't Replace'

The pickings are a bit scant so far today. You'd think it was the weekend or something. Here's something from Erik Brynjolfsson & Andrew McAfee:

The jobs that AI can't replace, BBC News: Current advances in robots and other digital technologies are stirring up anxiety among workers and in the media. There is a great deal of fear, for example, that robots will not only destroy existing jobs, but also be better at most or all of the tasks required in the future.
Our research at the Massachusetts Institute of Technology (MIT) has shown that that's at best a half-truth. While it is true that robots are getting very good at a whole bunch of jobs and tasks, there are still many categories in which humans perform better. ...

Saturday, September 12, 2015

Is the Pace at Which Labor-Saving Technology is Entering the Workforce Accelerating?

Jared Bernstein:

Back to the Future: While I’m wide open to evidence that I’m wrong, I’ve been skeptical of the claim that the robots are coming for our jobs. To be technical, the economics question is this: is the pace at which labor-saving technology is entering the workforce accelerating? ...
There are various pieces of evidence suggesting that the answer is “no.” Most importantly, if the rate at which machines are replacing workers is increasing, then productivity growth—output/hours worked—should also be increasing. But it has been slowing.
One reason for slower productivity growth is diminished investment in capital goods—like machines—a trend that also doesn’t square with the acceleration hypothesis. ...
So, what we have is largely anecdote and our own observation..., but ... when it comes to observations, humans are good at seeing first derivatives (rates of change) and less good at seeing second derivatives (changes in rates of change). We see that iPads and self-scanners are replacing waitpersons and cashiers, but it’s hard for us to tell whether “labor-saving technology” is coming more quickly than it has in the past.
Of course, this time might really be different (some smart people say it is).
Or, as this article ... reminded me (h/t: KN), this time might not be very different at all. It’s about a new quinoa restaurant in San Francisco, called Eatsa, where you order and get your food without ever interacting with a person. ...
Now, where have I seen that before? Fifty years ago (!), I used to love to go to Manhattan automats, where ... a few coins would get you a sandwich, a veggie (not quinoa!), a slice of delicious pie, and so on. For the record, productivity growth was faster and unemployment was lower back then (though at 10, I don’t recall knowing these facts at the time).
All’s I’m saying is that tech change is always with us, and it’s really hard to tell by observation whether the pace with which it’s replacing workers is accelerating. And there are so many more moving parts to this. I’d bet a big difference between the economies in these two pictures is where the machines were manufactured. In other words, technology doesn’t historically kill labor demand. But it does move it around to different industries, occupations, and today, countries.
So before we conclude we’re all robot fodder, let’s see it in the productivity and investment data. ...
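Bernstein's first-versus-second-derivative point can be made concrete with a toy calculation (all numbers invented for illustration):

```python
# Toy series -- invented numbers. Productivity = output / hours worked.
output = [100.0, 104.0, 108.2, 112.5, 117.0]  # hypothetical annual output
hours = [50.0, 50.0, 50.0, 50.0, 50.0]        # hypothetical hours worked

productivity = [o / h for o, h in zip(output, hours)]

# First derivative: year-over-year productivity growth rates.
growth = [p1 / p0 - 1 for p0, p1 in zip(productivity, productivity[1:])]

# Second derivative: changes in those growth rates. The acceleration
# hypothesis requires these to be persistently positive; steady growth
# leaves them near zero.
accel = [g1 - g0 for g0, g1 in zip(growth, growth[1:])]

print([round(g, 4) for g in growth])  # growth hovers near 4% a year...
print([round(a, 4) for a in accel])   # ...while acceleration stays near 0
```

The growth rates are easy to see; whether they are speeding up is a far subtler question, which is why Bernstein wants the answer read off productivity and investment statistics rather than anecdotes.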

Thursday, August 20, 2015

'Shifting Visions of the ''Good Job'''

From Tim Taylor:

Shifting Visions of the "Good Job": As the unemployment rate has dropped to 5.5% and less in recent months, the arguments over jobs have shifted from the lack of available jobs to the qualities of the jobs that are available. It's interesting to me how our social ideas of what constitutes a "good job" have a tendency to shift over time. Joel Mokyr, Chris Vickers, and Nicolas L. Ziebarth illuminate some of these issues in "The History of Technological Anxiety and the Future of Economic Growth: Is This Time Different?" which appears in the Summer 2015 issue of the Journal of Economic Perspectives. All articles from JEP going back to the first issue in 1987 are freely available on-line compliments of the American Economic Association. (Full disclosure: I've worked as Managing Editor of the JEP since 1986.)

One theme that I found especially intriguing in the Mokyr, Vickers, and Ziebarth argument is how some of our social attitudes about what constitutes a "good job" have nearly gone full circle in the last couple of centuries. Back at the time of the Industrial Revolution in the late 18th and into the 19th century, it was common to hear arguments that the shift from farms, artisans, and home production into factories involved a reduction in the quality of work. But in recent decades, a shift away from factories and back toward decentralized production is sometimes viewed as a decline in the quality of work, too. Here are some examples:

For example, one concern from the time of the original Industrial Revolution was that factory work required workers to schedule their time in ways that removed flexibility. Mokyr, Vickers, and Ziebarth (citations omitted) note: "Workers who were “considerably dissatisfied, because they could not go in and out as they pleased” had to be habituated into the factory system, by means of fines, locked gates, and other penalties. The preindustrial domestic system, by contrast, allowed a much greater degree of flexibility."

Another kind of flexibility before the Industrial Revolution was that people could often combine their work life with their home life, and the separation of the two was thought to be worrisome: "Part of the loss of control in moving to factory work involved the physical separation of home from place of work. While today people worry about the exact opposite phenomenon with the lines between spheres of home and work blurring, this disjunction was originally a cause of great anxiety, along with the separation of place-of-work from place-of-leisure. Preindustrial societies had “no clearly defined periods of leisure as such, but economic activities, like hunting or market-going, obviously have their recreational aspects, as do singing or telling stories at work.”

Of course, one common modern concern about the quality of jobs is that many jobs lack regular hours: workers may face irregular schedules, or no assurance of a minimum number of hours they can work. Moreover, many workers now worry that work life is intruding back into home life, because we are hooked to our jobs by our computers and phones. ...
Another fairly common theme of economists writing back in the 18th and 19th centuries, ranging from Adam Smith to Karl Marx, was that the new factory jobs treated people as if they were cogs in a machine. ...
Now, of course, there is widespread concern about a lack of factory jobs for low- and middle-skilled workers. Rather than worrying about these jobs being debasing or unfit for humans, we worry that there aren't enough of them.

I guess one reaction to this evolution of attitudes about "good jobs" is just to point out that workers and employers are both heterogeneous groups. Some workers put a greater emphasis on flexibility of hours, while others might prefer regularity. Some workers prefer a straightforward job that they can leave behind at the end of the day; others prefer a job that is full of improvisation, learning on the fly, crises, and deadlines. To some extent, the labor market lets employers and workers match up as they desire. There's certainly no reason to assume that a "good job" should be a one-size-fits-all definition.

A second reaction is that there is clearly a kind of rosy-eyed nostalgia at work about the qualities of jobs of the past. Many of us tend to focus on a relatively small number of past jobs, not the jobs that most people did most of the time. In addition, we focus on a few characteristics of those jobs, not the way the jobs were actually experienced by workers of that time.

But yet another reaction is that the qualities of available jobs aren't just a matter of negotiation between workers and employers, and they aren't an historical inevitability. The qualities of the range of jobs in an economy are affected by a range of institutions and factors like the human capital that workers bring to jobs, the extent of on-the-job training, how easy it is for someone with a series of employers or irregular hours to set up health insurance or a retirement account, rules about workplace safety, rules that impose costs on laying off or firing workers (which inevitably makes firms reluctant to hire more regular employees), the extent and type of union representation, rules about wages and overtime, and much more. I do worry that career-type jobs offering the possibility of longer-term connectedness between a worker and an employer seem harder to come by. In a career-type job, both the worker and employer place some value on the expected continuance of their relationship over time, and act and invest resources accordingly.

Friday, July 17, 2015

Paul Krugman: Liberals and Wages

We can do more to encourage firms to raise wages:

Liberals and Wages, by Paul Krugman, Commentary, NY Times: Hillary Clinton gave her first big economic speech on Monday, and progressives were by and large gratified. For Mrs. Clinton’s core message was that the federal government can and should use its influence to push for higher wages. ...
Mrs. Clinton’s speech reflected major changes, deeply grounded in evidence, in our understanding of what determines wages. And a key implication of that new understanding is that public policy can do a lot to help workers without bringing down the wrath of the invisible hand.
Many economists used to think of the labor market as being pretty much like the market for anything else, with the prices of different kinds of labor — that is, wage rates — fully determined by supply and demand. So if wages for many workers have stagnated or declined, it must be because demand for their services is falling.
In particular, the conventional wisdom attributed rising inequality to technological change, which was raising the demand for highly educated workers while devaluing blue-collar work. And there was nothing much policy could do to change the trend... But the case for “skill-biased technological change” as the main driver of wage stagnation has largely fallen apart. ...
Meanwhile, our understanding of wage determination has been transformed by an intellectual revolution...
The ... market for labor isn’t like the market for, say, wheat, because workers are people. And because they’re people, there are important benefits, even to the employer, from paying them more: better morale, lower turnover, increased productivity. These benefits largely offset the direct effect of higher labor costs, so that raising the minimum wage needn’t cost jobs after all.
The direct takeaway from this intellectual revolution is, of course, that we should raise minimum wages. But there are broader implications, too: Once you take what we’ve learned from minimum-wage studies seriously, you realize that they’re not relevant just to the lowest-paid workers.
For employers always face a trade-off between low-wage and higher-wage strategies — between, say, the traditional Walmart model of paying as little as possible and accepting high turnover and low morale, and the Costco model of higher pay and benefits leading to a more stable work force. And there’s every reason to believe that public policy can, in a variety of ways — including making it easier for workers to organize — encourage more firms to choose the good-wage strategy.
So there was a lot more behind Hillary’s speech than I suspect most commentators realized. ...

Tuesday, June 16, 2015

The Good and Bad Parts of Online Education

I have a new column:

The Good and Bad Parts of Online Education: Is online education the solution to widening inequality, rapidly rising costs, and lack of access to high quality courses? Will it lead to the demise of traditional “brick and mortar” institutions? I was initially very skeptical about the claims being made about online education, but after teaching several of these courses during the past academic year my own assessment has become much more positive.
My main worry, as expressed in a previous column, was that the availability of online courses and degrees would create a two-tiered education system and exaggerate inequality instead of reducing it. I still worry about that, but I didn’t give online education enough credit for the things that it can do. Here are some of the positives and negatives of online versus traditional education gleaned from my experience teaching both types of courses. ...

Monday, May 25, 2015

Paul Krugman: The Big Meh

Why hasn't the digital technological revolution had a bigger impact on productivity?:

The Big Meh, by Paul Krugman, Commentary, NY Times: ...Everyone knows that we live in an era of incredibly rapid technological change, which is changing everything. But what if what everyone knows is wrong? .... A growing number of economists ... are wondering if the technological revolution has been greatly overhyped... New technologies have yielded great headlines, but modest economic results. Why?
One possibility is that the numbers are missing the reality, especially the benefits of new products and services. I get a lot of pleasure from technology that lets me watch streamed performances by my favorite musicians, but that doesn’t get counted in G.D.P. Still, new technology is supposed to serve businesses as well as consumers, and should be boosting the production of traditional as well as new goods. The big productivity gains of ... 1995 to 2005 came largely in things like inventory control, and showed up as much or more in nontechnology businesses like retail as in high-technology industries themselves. Nothing like that is happening now.
Another possibility is that new technologies are more fun than fundamental. ...
So what do I think is going on...? The answer is that I don’t know — but neither does anyone else. Maybe my friends at Google are right, and Big Data will soon transform everything. Maybe 3-D printing will bring the information revolution into the material world. Or maybe we’re on track for another big meh.
What I’m pretty sure about, however, is that we ought to scale back the hype.
You see, writing and talking breathlessly about how technology changes everything might seem harmless, but, in practice, it acts as a distraction from more mundane issues — and an excuse for handling those issues badly. If you go back to the 1930s, you find many influential people saying the same kinds of things such people say nowadays: This isn’t really about the business cycle, never mind debates about macroeconomic policy; it’s about radical technological change and a work force that lacks the skills to deal with the new era.
And then, thanks to World War II, we finally got the demand boost we needed, and all those supposedly unqualified workers — not to mention Rosie the Riveter — turned out to be quite useful in the modern economy, if given a chance.
Of course, there I go, invoking history. Don’t I understand that everything is different now? Well, I understand why people like to say that. But that doesn’t make it true.

Tuesday, April 14, 2015

'Moore's Law at 50'

Tim Taylor:

Moore's Law at 50: So many important aspects of the US and world economy turn on developments in information and communications technology and their effects. These technologies have been driving productivity growth, but will they keep doing so? These technologies have been one factor creating the rising inequality of incomes, as many middle managers and clerical workers found themselves displaced by information technology, while a number of high-end workers found that these technologies magnified their output. Many other technological changes--like the smartphone, medical imaging technologies, decoding the human genome, or various developments in nanotechnology--are only possible based on a high volume of cheap computing power. Information technology is part of what has made the financial sector larger, as the technologies have been used for managing (and mismanaging) risks and returns in ways barely dreamed of before. The trends toward globalization and outsourcing have gotten a large boost because information technology made it easier ...

In turn, the driving force behind information and communications technology has been Moore's law, which can be understood as the proposition that the number of components packed onto a computer chip doubles every two years, implying a sharp and continuing fall in the price of computing power. But the capability of making transistors ever smaller, at least with current technology, is beginning to run into physical limits. IEEE Spectrum has published a "Special Report: 50 Years of Moore's Law," with a selection of a dozen short articles looking back at Moore's original formulation of the law, how it has developed over time, and prospects for the law continuing. Here are some highlights.

It's very hard to get an intuitive sense of the exponential power of Moore's law, but Dan Hutcheson takes a shot at it with a few well-chosen sentences and a figure. He writes:

In 2014, semiconductor production facilities made some 250 billion billion (250 x 10^18) transistors. This was, literally, production on an astronomical scale. Every second of that year, on average, 8 trillion transistors were produced. That figure is about 25 times the number of stars in the Milky Way and some 75 times the number of galaxies in the known universe. The rate of growth has also been extraordinary. More transistors were made in 2014 than in all the years prior to 2011.
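
Hutcheson's per-second figure follows from simple division; as a quick back-of-the-envelope check, here is a minimal Python sketch using only the 2014 production total quoted above:

```python
# Back-of-the-envelope check of Hutcheson's transistor arithmetic.
annual_output = 250e18              # transistors produced in 2014, per the article
seconds_per_year = 365 * 24 * 3600  # ignoring leap days

per_second = annual_output / seconds_per_year
print(f"{per_second:.2e} transistors per second")  # ~7.93e12, i.e. roughly 8 trillion
```

The quoted "8 trillion per second" is this number rounded to one significant figure.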

 Here's a figure from Hutcheson showing the trends of semiconductor output and price over time. Notice that both axes are measured as logarithmic scales: that is, they rise by powers of 10. The price of a transistor was more than a dollar back in the 1950s, and now it's a billionth of a penny.

[Figure from Hutcheson: transistors by the numbers]

The engineering project of making the components on a computer chip smaller and smaller is beginning to near some physical limits. What might happen next?

Chris Mack makes the case that Moore's law is not a fact of nature; instead, it's the result of competition among chip-makers, who viewed it as the baseline for their technological progress, and thus set their budgets for R&D and investment according to keeping up this pace. He argues that as technological constraints begin to bind, the next step will be to combine more capabilities on a chip. ...

Andrew Huang makes the intriguing claim that a slowdown in Moore's law might be useful for other sources of productivity growth. He argues that when the power of information technology is increasing so quickly, there is an understandably heavy focus on adapting to these rapid gains. But if gains in raw information processing slow down, there would be room for more focus on making the devices that use information technology cheaper to produce, easier to use, and cost-effective in many ways.

Jonathan Koomey and Samuel Naffziger point out that computing power has become so cheap that we often aren't using what we've got--which suggests the possibility of efficiency gains in energy use and computer utilization...

Final note: I've written about Moore's law a couple of times previously on this blog, including "Checkerboard Puzzle, Moore's Law, and Growth Prospects" (February 4, 2013) and "Moore's Law: At Least a Little While Longer" (February 18, 2014). These posts tend to emphasize that Moore's law may still be good for a few more doublings. But at that point, the course of technological progress in information technology, for better or worse, will take some new turns.

Wednesday, March 18, 2015

'Estimating the Impact of Robots on Productivity and Employment'

Another article about robots:

Estimating the impact of robots on productivity and employment, by Guy Michaels and Georg Graetz, Vox EU: Robots' capacity for autonomous movement and their ability to perform an expanding set of tasks have captured writers' imaginations for almost a century. Recently robots have emerged from the pages of science fiction novels into the real world, and discussions of their possible economic effects have become ubiquitous (see e.g. The Economist 2014, Brynjolfsson and McAfee 2014). But a serious problem inhibits these discussions – there has so far been no systematic empirical analysis of the effects that robots are already having.
In recent work we begin to remedy this problem (Graetz and Michaels 2015). We compile a new dataset spanning 14 industries (mainly manufacturing industries, but also agriculture and utilities) in 17 developed countries (including European countries, Australia, South Korea, and the US). Uniquely, our dataset includes a measure of the use of industrial robots employed in each industry, in each of these countries, and how it has changed from 1993-2007. We obtain information on other economic performance indicators from the EUKLEMS database (Timmer et al. 2007).
We find that industrial robots increase labor productivity, total factor productivity, and wages. At the same time, while industrial robots had no significant effect on total hours worked, there is some evidence that they reduced the employment of low skilled workers, and to a lesser extent also middle skilled workers. ...

They conclude:

Our findings on the aggregate impact of robots are interesting given recent concerns in the macroeconomic literature that productivity gains from technology in general may have slowed down. Gordon (2012, 2014) expresses a particularly pessimistic view, and there are broader worries about secular macroeconomic stagnation (Summers 2014, Krugman 2014), although others remain more optimistic (Brynjolfsson and McAfee 2014). We expect that the beneficial effects of robots will extend into the future as new robot capabilities are developed, and service robots come of age. Our findings do come with a note of caution: there is some evidence of diminishing marginal returns to robot use, or congestion effects, so robots are not a panacea for growth.
Although we do not find evidence of a negative impact of robots on aggregate employment, we see a more nuanced picture when we break down employment (and the wage bill) by skill groups. Robots appear to reduce the hours and the wage bill shares of low-skilled workers, and to a lesser extent also of middle skilled workers. They have no significant effect on the employment of high-skilled workers. This pattern differs from the effect that recent work has found for ICT, which seems to benefit high-skilled workers at the expense of middle-skilled workers (Autor 2014, Michaels et al. 2014).
In further results, we find that industrial robots increased total factor productivity and wages. At the same time, we find no significant effect of these robots on the labor share.
In summary, we find that industrial robots made significant contributions to labor productivity and aggregate growth, and also increased wages and total factor productivity.  While fears that robots destroy jobs at a large scale have not materialized, we find some evidence that robots reduced low- and middle-skilled workers’ employment.

Monday, March 09, 2015

'Publicly Funded Inequality'

Kemal Derviş:

Publicly funded inequality, Brookings: One of the factors driving the massive rise in global inequality and the concentration of wealth at the very top of the income distribution is the interplay between innovation and global markets. In the hands of a capable entrepreneur, a technological breakthrough can be worth billions of dollars, owing to regulatory protections and the winner-take-all nature of global markets. What is often overlooked, however, is the role that public money plays in creating this modern concentration of private wealth.
As the development economist Dani Rodrik recently pointed out, much of the basic investment in new technologies in the United States has been financed with public funds. The funding can be direct, through institutions like the Defense Department or the National Institutes of Health (NIH), or indirect, via tax breaks, procurement practices, and subsidies to academic labs or research centers.
When a research avenue hits a dead end – as many inevitably do – the public sector bears the cost. For those that yield fruit, however, the situation is often very different. Once a new technology is established, private entrepreneurs, with the help of venture capital, adapt it to global market demand, build temporary or long-term monopoly positions, and thereby capture large profits. The government, which bore the burden of a large part of its development, sees little or no return. ...
A combination of measures and international agreements must be found that would allow taxpayers to obtain decent returns on their investments, without removing the incentives for savvy entrepreneurs to commercialize innovative products.
The seriousness of this problem should not be understated. The amounts involved contribute to the creation of a new aristocracy that can pass on its wealth through inheritance. If huge sums can be spent to protect privilege by financing election campaigns (as is now the case in the US), the implications of this problem, for both democracy and long-term economic efficiency, could become systemic. The possible solutions are far from simple, but they are well worth seeking.

["Several ways to change such a system" are also discussed.]

Wednesday, February 25, 2015

'Robots Aren’t About to Take Your Job'

Timothy Aeppel at the WSJ:

Be Calm, Robots Aren’t About to Take Your Job, MIT Economist Says: David Autor knows a lot about robots. He doesn’t think they’re set to devour our jobs. ... His is “the non-alarmist view”...
Mr. Autor’s latest paper, presented to a packed audience at this year’s meeting of central bankers at Jackson Hole, Wyo., emphasized how difficult it is to program machines to do many tasks that humans often find easy and intuitive. In it, he played off a paradox identified in the 1960s by philosopher Michael Polanyi, who noted that humans can do many things without being able to explain how, like identifying the face of a person in a series of photographs as they age. Machines can’t do that, at least not with accuracy.
This is why big breakthroughs in automation will take longer than many predict, Mr. Autor told the bankers. If a person can’t explain how they do something, a computer can’t be programmed to mimic that ability. ...
To Mr. Autor, polarization of the job market is the real downside of automation. He calculates middle-skill occupations made up 60% of all jobs in 1979. By 2012, this fell to 46%. The same pattern is visible in 16 European Union economies he studied.
The upshot is more workers clustered at the extremes. At the same time, average wages have stagnated for more than a decade. He attributes this to the loss of all those relatively good-paying middle-range jobs, as well as downward pressure on lower-skilled wages as displaced workers compete for the lesser work. ...

I've been arguing for a long time that in coming decades the major question will be about distribution, not production. I'm not very worried about stagnation, etc. -- we'll have plenty of stuff to go around. I'm worried about, to quote the title of a political science textbook I used many, many, many years ago as an undergraduate, "who gets the cookies?", not how many cookies we're able to produce. So I agree with Autor on this point:

Mr. Autor ... added, “If we automate all the jobs, we’ll be rich—which means we’ll have a distribution problem, not an income problem.”

Monday, January 26, 2015

'Techno-neutrality'

Dietz Vollrath:

Techno-neutrality: I’ve had a few posts in the past few months (here and here) about the consequences of mechanization for the future of work. In short, what will we do when the robots take our jobs?
I wouldn’t call myself a techno-optimist. I don’t think the arrival of robots necessarily makes everything better. But I do not buy the strong techno-pessimism that comes up in many places. Richard Serlin has been a frequent commenter on this blog, and he generally has a gloomy take on where we are going to end up once the robots arrive. I’m not bringing up Richard to pick on him. He writes thoughtful comments on this subject (and lots of others), and it is those comments that pushed me to try and be more clear on why I’m “techno-neutral”. ...

Wednesday, January 21, 2015

'Rising Fears About Losing and Replacing Jobs'

Tim Taylor:

Rising Fears About Losing and Replacing Jobs: The General Social Survey is a nationally representative survey carried out by the National Opinion Research Center at the University of Chicago and financially supported by grants from the National Science Foundation. Starting in 1977 and 1978, and intermittently over the years since then, it has included these two questions:

Thinking about the next 12 months, how likely do you think it is that you will lose your job or be laid off—very likely, fairly likely, not too likely, or not at all likely?

About how easy would it be for you to find a job with another employer with approximately the same income and fringe benefits you have now? Would you say it would be very easy, somewhat easy, or not easy at all?

Back in 1980, Charles Weaver wrote an article about the patterns of the answers in the first wave of this data. He updates the results and looks for patterns over time in "Worker’s expectations about losing and replacing their jobs: 35 years of change," in the January 2015 issue of the Monthly Labor Review, published by the US Bureau of Labor Statistics. ...

Both simple comparisons and more sophisticated analyses suggest that fear about losing and replacing jobs has been rising over time. Here's the simple comparison from Weaver: "Compared with workers in 1977 and 1978, workers in 2010 and 2012 expressed significantly less job security. They were more afraid of losing their jobs (11.2 percent versus the earlier 7.7 percent) and were less likely to think that they could find comparable work without much difficulty (48.3 percent versus the earlier 59.2 percent)."

The more detailed breakdown of the data shows which groups have seen their labor market fears increase the most. On the question of how likely you are to lose your current job, the answer for the population as a whole rose 3.5 percentage points from 1977-78 to 2010-12. But for blue-collar craft workers the increase was 11.1 percentage points, and for blue-collar operatives the rise was 9.7 percentage points. Also, from the early to the most recent survey, those in the 50-59 age bracket were 8.2 percentage points more likely to think that they would lose their job.

On the issue of whether workers expected to be able to find a comparable job, the answer for the population as a whole dropped 10.9 percentage points from 1977-78 to 2010-12. For those with "some college," but not a college degree, the expectation fell by 23.1 percentage points, and for white-collar workers in clerical jobs it fell by 23.9 percentage points. Interestingly, for workers 60 and over, confidence in being able to find a comparable job was actually 1.7 percentage points greater in the 2010-12 results than in the 1977-78 results.

An obvious question is whether the greater fears about losing jobs and replacing jobs are a relatively recent development--in particular, whether they happened only in the aftermath of the Great Recession--or whether this has been a steady trend over time. Weaver runs through a number of different statistical exercises to consider this point...

Weaver writes: "In 2010 and 2012, more workers feared losing their jobs, and far fewer workers said that it would be easy to find a comparable job, than in 1977 and 1978. ... Some may infer that the lower job security felt by Americans in 2010 and 2012 was an aberration, based upon the unusual conditions presented by the recent recession. But the reality is that the downward trend in feelings of job security has been going on for the last 35 years, apart from the “extra push” it has received from the “`Great Recession,' ..."

As I mentioned in yesterday's blog post, I think the most powerful fear in the current labor market is not about mass unemployment, but instead is a concern that the available alternative jobs may be of lower quality in terms of wages, benefits, work conditions, job security, and the prospect for a future career path.

Tuesday, November 25, 2014

'Is Uber Really in a Fight to the Death?'

For those of you interested in Uber, this is from Joshua Gans:

Is Uber really in a fight to the death?: In recent days, since their PR troubles, there has been much discussion as to why Uber seems to be so aggressive. Reasons ranged from being inept, to the challenges of fighting politics against taxi regulations to a claim that Uber’s market has a ‘winner take all’ nature. It is this last one that is of particular interest because it suggests that Uber has to fight hard against competitors like Lyft or it will lose. It also suggests that Uber’s $20 billion odd valuation is based on beliefs that it will win, and win big.
I am not sure that this is really the case. Despite the name ‘Uber’ connoting, ‘one Uber to rule them all,’ the theory underlying the notion of winner take all is rather special and is far from being proven in cases like this. ...

Wednesday, October 29, 2014

'Digital Divide Exacerbates US Inequality'

The digital divide:

Digital divide exacerbates US inequality, by David Crow, FT: The majority of families in some of the US’s poorest cities do not have a broadband connection, according to a Financial Times analysis of official data that shows how the “digital divide” is exacerbating inequality in the world’s biggest economy. ...
The OECD ranks the US 30th out of 33 countries for affordability...
There is a very strong correlation with race and income. Just 45 per cent of households with an income of less than $20,000 a year have broadband whereas the rate for those earning $75,000 or more is 91 per cent. About a third of African American and Hispanic households are unconnected compared to 20 per cent for white households and 10 per cent for Asian households.

Sunday, September 28, 2014

'The rise of China and the Future of US Manufacturing'

Acemoglu, Autor, Dorn, Hanson, and Price (I've noted this paper once or twice already in recent months, but thought it worthwhile to post their summary of the work):

The rise of China and the future of US manufacturing, by Daron Acemoglu, David Autor, David Dorn, Gordon H. Hanson, and Brendan Price, Vox EU: The end of the Great Recession has rekindled optimism about the future of US manufacturing. In the second quarter of 2010 the number of US workers employed in manufacturing registered positive growth – its first increase since 2006 – and subsequently recorded ten consecutive quarters of job gains, the longest expansion since the 1970s. Advocating for the potential of an industrial turnaround, some economists give a positive spin to US manufacturing’s earlier troubles: while employment may have fallen in the 2000s, value added in the sector has been growing as fast as the overall US economy. Its share of US GDP has remained stable, an achievement matched by few other high-income economies over the same period (Lawrence and Edwards 2013, Moran and Oldenski 2014). The business press has giddily coined the term ‘reshoring’ to describe the phenomenon – as yet not well documented empirically – of companies returning jobs to the United States that they had previously offshored to low-wage destinations.
Before we declare a renaissance for US manufacturing, it is worth re-examining the magnitude of the sector’s previous decline and considering the causal factors responsible for job loss. The scale of the employment decline is indeed stunning. Figure 1 shows that in 2000, 17.3 million US workers were employed in manufacturing, a level that with periodic ups and downs had changed only modestly since the early 1980s. By 2010, employment had dropped to 11.5 million workers, a 33% decrease from 2000. Strikingly, most of this decline came before the onset of the Great Recession. In the middle of 2007, on the eve of the Lehman Brothers collapse that paralysed global financial markets, US manufacturing employment had already dipped to 13.9 million workers, such that three-fifths of the job losses over the 2000 to 2010 period occurred prior to the US aggregate contraction. Figure 1 also reveals the paltriness of the recent manufacturing recovery. As of mid-2014, the number of manufacturing jobs had reached only 12.1 million, a level far below the already diminished pre-recession level.
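
The employment figures in this paragraph are internally consistent, which is easy to confirm; a minimal Python check (levels in millions of workers, taken from the text):

```python
# Check the manufacturing-employment arithmetic cited above.
emp_2000, emp_mid2007, emp_2010 = 17.3, 13.9, 11.5  # millions of workers

total_decline = emp_2000 - emp_2010                 # 5.8 million jobs lost, 2000-2010
pct_decline = total_decline / emp_2000              # ~0.34, roughly the "33% decrease" cited
pre_recession = (emp_2000 - emp_mid2007) / total_decline  # ~0.59, close to three-fifths

print(f"{pct_decline:.1%} decline; {pre_recession:.0%} of losses before the recession")
```

The "three-fifths of job losses before the aggregate contraction" claim corresponds to the 3.4 million jobs lost by mid-2007 out of the 5.8 million lost over the full decade.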

Figure 1. US manufacturing employment, 1980q1-2014q3

Source: US Bureau of Labor Statistics.

We examine the reasons behind the recent decline in US manufacturing employment (Acemoglu et al. 2014). Our point of departure is the coincidence of the 2000s swoon in US manufacturing and a significant increase in import competition from China (Bernard et al. 2006). Between 1990 and 2011 the share of global manufacturing exports originating in China surged from 2% to 16% (Hanson 2012). This widely heralded export boom was the outcome of deep economic reforms that China enacted in the 1980s and 1990s, which were further extended by the country’s joining the World Trade Organization in 2001 (Brandt et al. 2012, Pierce and Schott 2013). China’s share in US manufacturing imports has expanded in concert with its global presence, rising from 5% in 1991 to 11% in 2001 before leaping to 23% in 2011. Could China’s rise be behind US manufacturing’s fall?
The first step in our analysis is to estimate the direct impact of import competition from China on US manufacturing industries. Suppose that the economic opening in China allows the country to realise a comparative advantage in manufacturing that had lain dormant during the era of Maoist central planning, which entailed near prohibitive barriers to trade. As reform induces China to reallocate labour and capital from farms to factories and from inefficient state-owned enterprises to more efficient private businesses, output will expand in the sectors in which the country’s comparative advantage is strongest. China’s abundant labour supply and relatively scarce supply of arable land and natural resources make manufacturing the primary beneficiary of reform-induced industrial restructuring. The global implications of China’s reorientation toward manufacturing – strongly abetted by inflows of foreign direct investment – are immense. China accounts for three-quarters of all growth in manufacturing value added that has occurred in low and middle income economies since 1990.
For many US manufacturing firms, intensifying import competition from China means a reduction in demand for the goods they produce and a corresponding contraction in the number of workers they employ. Looking across US manufacturing industries whose outputs compete with Chinese import goods, we estimate that had import penetration from China not grown after 1999, there would have been 560,000 fewer manufacturing jobs lost through 2011. Actual US manufacturing employment declined by 5.8 million workers from 1999 to 2011, making the counterfactual job loss from the direct effect of greater Chinese import penetration amount to 10% of the realised job decline in manufacturing.
These direct effects of trade exposure do not capture the full impact of growing Chinese imports on US employment. Negative shocks to one industry are transmitted to other industries via economic linkages between sectors. One source of linkages is buyer-supplier relationships (Acemoglu et al. 2012). Rising import competition in apparel and furniture – two sectors in which China is strong – will cause these ‘downstream’ industries to reduce purchases from the ‘upstream’ sectors that supply them with fabric, lumber, and textile and woodworking machinery. Because buyers and suppliers often locate near one another, much of the impact of increased trade exposure in downstream industries is likely to transmit to suppliers in the same regional or national market. We use US input-output data to construct downstream trade shocks for both manufacturing and non-manufacturing industries. Estimates from this exercise indicate sizeable negative downstream effects. Applying the direct plus input-output measure of exposure increases our estimates of trade-induced job losses for 1999 to 2011 to 985,000 workers in manufacturing and to two million workers in the entire economy. Inter-industry linkages thus magnify the employment effects of trade shocks, almost doubling the size of the impact within manufacturing and producing an equally large employment effect outside of manufacturing.
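The counterfactual figures in the last two paragraphs fit together arithmetically; a quick check of the quoted magnitudes (in thousands of workers, 1999-2011):

```python
# Figures in thousands of workers, 1999-2011, as quoted above.
actual_mfg_decline   = 5_800   # actual fall in US manufacturing employment
direct_effect        = 560     # direct effect of Chinese import competition
direct_plus_io_mfg   = 985     # adding input-output linkages, manufacturing only
direct_plus_io_total = 2_000   # adding input-output linkages, whole economy

print(f"direct share of the decline: {direct_effect / actual_mfg_decline:.0%}")
print(f"linkage multiplier within manufacturing: {direct_plus_io_mfg / direct_effect:.2f}x")
print(f"jobs lost outside manufacturing: {direct_plus_io_total - direct_plus_io_mfg} thousand")
```

The direct effect is about 10% of the realised decline; input-output linkages multiply it by roughly 1.8 within manufacturing and add about as many job losses again outside it.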
Two additional sources of linkages between sectors operate through changes in aggregate demand and the reallocation of labour. When manufacturing contracts, workers who have lost their jobs or suffered declines in their earnings subsequently reduce their spending on goods and services. The contraction in demand is multiplied throughout the economy via standard Keynesian mechanisms, depressing aggregate consumption and investment. Helping offset these negative aggregate demand effects, workers who exit manufacturing may take up jobs in the service sector or elsewhere in the economy, replacing some of the earnings lost in trade-exposed industries. Because aggregate demand and reallocation effects work in opposing directions, we can only detect their net impact on total employment. A further complication is that these impacts operate at the level of the aggregate economy – as opposed to direct and input-output effects of trade shocks which operate at the industry level – meaning we have only as many data points to detect their presence as we have years since the China trade shock commenced. Since China’s export surge did not hit with full force until the early 1990s, the available time series for the national US economy is disconcertingly short.
To address this data challenge, we supplement our analysis of US industries with an analysis of US regional economies. We define regions to be ‘commuting zones’ which are aggregates of commercially linked counties that comprise well-defined local labour markets. Because commuting zones differ sharply in their patterns of industrial specialisation, they are differentially exposed to increased import competition from China (Autor et al. 2013). Asheville, North Carolina, is a furniture-making hub, putting it in the direct path of the China maelstrom. In contrast, Orlando, Florida (of Disney World and Harry Potter fame), focuses on tourism, leaving it lightly affected by rising imports of manufactured goods. If the reallocation mechanism is operative, then when a local industry contracts as a result of Chinese competition, some other industry in the same commuting zone should expand. Aggregate demand effects should also operate within local labour markets, as shown by Mian and Sufi (2014) in the context of the recent US housing bust. If increased trade exposure lowers aggregate employment in a location, reduced earnings will decrease spending on non-traded local goods and services, magnifying the impact throughout the local economy.
Our estimates of the net impact of aggregate demand and reallocation effects imply that import growth from China between 1999 and 2011 led to an employment reduction of 2.4 million workers. This figure is larger than the 2.0 million job loss estimate we obtain for national industries, which only captures direct and input-output effects. But it still likely understates the full consequences of the China shock on US employment. Neither our analysis for commuting zones nor for national industries fully incorporates all of the adjustment channels encompassed by the other. The national-industry estimates exclude reallocation and aggregate demand effects, whereas the commuting-zone estimates exclude the national component of these two effects, as well as the non-local component of input-output linkage effects. Because the commuting zone estimates suggest that aggregate forces magnify rather than offset the effects of import competition, we view our industry-level estimates of employment reduction as providing a conservative lower bound.
What do our findings imply about the potential for a US manufacturing resurgence? The recent growth in manufacturing imports to the US is largely a consequence of China’s emergence on the global stage coupled with its deep comparative advantage in labour-intensive goods. The jobs in apparel, furniture, shoes, and other wage-sensitive products that the United States has lost to China are unlikely to return. Even as China’s labour costs rise, the factories that produce these goods are more likely to relocate to Bangladesh, Vietnam, or other countries rising in China’s wake than to reappear on US shores. Further, China’s impact on US manufacturing is far from complete. During the 2000s, the country rapidly expanded into the assembly of laptops and cell phones, with production occurring increasingly under Chinese brands, such as Lenovo and Huawei. Despite this rather bleak panorama, there are sources of hope for manufacturing in the United States. Perhaps the most encouraging sign is that the response of many companies to increased trade pressure has been to increase investment in innovation (Bloom et al. 2012). The ensuing advance in technology may ultimately help create new markets for US producers. However, if the trend toward the automation of routine jobs in manufacturing continues (Autor and Dorn 2013), the application of these new technologies is likely to do much more to boost growth in value added than to expand employment on the factory floor.
Acemoglu, D, V Carvalho, A Ozdaglar, and A Tahbaz-Salehi (2012), “The Network Origins of Aggregate Fluctuations.” Econometrica, 80(5): 1977-2016.
Acemoglu, D, D H Autor, D Dorn, G H Hanson, and B Price (2014), “Import Competition and the Great US Employment Sag of the 2000s.” NBER Working Paper No. 20395.
Autor, D H and D Dorn (2013), “The Growth of Low Skill Service Jobs and the Polarization of the US Labor Market.” American Economic Review, 103(5): 1553-1597.
Autor, D H, D Dorn, and G H Hanson (2013), “The China Syndrome: Local Labor Market Effects of Import Competition in the United States.” American Economic Review, 103(6): 2121-2168.
Bernard, A B, J B Jensen, and P K Schott (2006), “Survival of the Best Fit: Exposure to Low-Wage Countries and the (Uneven) Growth of US Manufacturing Plants.” Journal of International Economics, 68(1): 219-237.
Bloom, N, M Draca, and J Van Reenen (2012), “Trade Induced Technical Change? The Impact of Chinese Imports on Innovation, IT, and Productivity.” Mimeo, Stanford University.
Brandt, L, J Van Biesebroeck, and Y Zhang (2012), “Creative Accounting or Creative Destruction? Firm-Level Productivity Growth in Chinese Manufacturing.” Journal of Development Economics, 97(2): 339-351.
Hanson, G (2012), “The Rise of Middle Kingdoms: Emerging Economies in Global Trade.” Journal of Economic Perspectives, 26(2): 41-64.
Mian, A and A Sufi (2014), “What Explains the 2007-2009 Drop in Employment?” Econometrica, forthcoming.
Pierce, J R and P K Schott (2013), “The Surprisingly Swift Decline of US Manufacturing Employment.” Yale Department of Economics Working Paper, November.

Tuesday, July 22, 2014

'Will Automation Take Our Jobs?'

Running late today -- two very quick ones. First, from Scientific American:

Will Automation Take Our Jobs?: Last fall economist Carl Benedikt Frey and information engineer Michael A. Osborne, both at the University of Oxford, published a study estimating the probability that 702 occupations would soon be computerized out of existence. Their findings were startling. Advances in ... technologies could, they argued, put 47 percent of American jobs at high risk of being automated in the years ahead. Loan officers, tax preparers, cashiers, locomotive engineers, paralegals, roofers, taxi drivers and even animal breeders are all in danger of going the way of the switchboard operator.
Whether or not you buy Frey and Osborne's analysis, it is undeniable that something strange is happening in the U.S. labor market. Since the end of the Great Recession, job creation has not kept up with population growth. Corporate profits have doubled since 2000, yet median household income (adjusted for inflation) dropped from $55,986 to $51,017. ... Erik Brynjolfsson and Andrew McAfee ... call this divergence the “great decoupling.” In their view, presented in their recent book The Second Machine Age, it is a historic shift. ...

Tim Taylor:

The Next Wave of Technology?, by Tim Taylor: Many discussions of "technology" and how it will affect jobs and the economy have a tendency to discuss technology as if it is one-dimensional, which is of course an extreme oversimplification. Erik Brynjolfsson, Andrew McAfee, and Michael Spence offer some informed speculation on how they see the course of technology evolving in "New World Order: Labor, Capital, and Ideas in the Power Law Economy," which appears in the July/August 2014 issue of Foreign Affairs (available free, although you may need to register).

Up until now, they argue, the main force of information and communications technology has been to tie the global economy together, so that production could be moved to where it was most cost-effective. ...
But looking ahead, they argue that the next wave of technology will not be about relocating production around the globe, but changing the nature of production--and in particular, automating more and more of it. If the previous wave of technology made workers in high-income countries like the U.S. feel that their jobs were being outsourced to China, the next wave is going to make those low-skill workers in repetitive jobs--whether in China or anywhere else--feel that their jobs are being outsourced to robots. ...
If this prediction holds true, what does this mean for the future of jobs and the economy?

1) Outsourcing would become much less common. ...

2) For low-income and middle-income countries like China..., their jobs and workforce would experience a dislocating wave of change.

3) Some kinds of physical capital are going to plummet in price, like robots, 3D printing, and artificial intelligence...

4) So..., who does well in this future economy? For high-income countries like the United States, Brynjolfsson, McAfee, and Spence emphasize that the greatest rewards will go to "people who create new ideas and innovations," in what they refer to as a wave of "superstar-based technical change." ...

This final forecast seems overly grim to me. While I can easily believe that the new waves of technology will continue to create superstar earners, it seems plausible to me that the spread and prevalence of many different new kinds of technology offers opportunities to the typical worker, too. After all, new ideas and innovations, and the process of bringing them to the market, are often the result of a team process--and even being a mid-level but contributing player on such teams, or a key supplier to such teams, can be well-rewarded in the market. More broadly, the question for the workplace of the future is to think about jobs where labor can be a powerful complement to new technologies, and then for the education and training system, employers, and employees to get the skills they need for such jobs. If you would like a little more speculation, one of my early posts on this blog, back on July 25, 2011, was a discussion of "Where Will America's Future Jobs Come From?"

Saturday, April 05, 2014

'Automation Alone Isn’t Killing Jobs'

Tyler Cowen:

Automation Alone Isn’t Killing Jobs, by Tyler Cowen, Commentary, NY Times: Although the labor market report on Friday showed modest job growth, employment opportunities remain stubbornly low in the United States, giving new prominence to the old notion that automation throws people out of work.
Back in the 19th century, steam power and machinery took away many traditional jobs, though they also created new ones. This time around, computers, smart software and robots are seen as the culprits. They seem to be replacing many of the remaining manufacturing jobs and encroaching on service-sector jobs, too.
Driverless vehicles and drone aircraft are no longer science fiction, and over time, they may eliminate millions of transportation jobs. Many other examples of automatable jobs are discussed in “The Second Machine Age,” a book by Erik Brynjolfsson and Andrew McAfee, and in my own book, “Average Is Over.” The upshot is that machines are often filling in for our smarts, not just for our brawn — and this trend is likely to grow.
How afraid should workers be of these new technologies? There is reason to be skeptical of the assumption that machines will leave humanity without jobs. ...

See also, Dean Baker "If Technology Has Increased Unemployment Among the Less Educated, Someone Forgot to Tell the Data."

Wednesday, April 02, 2014

'Inequality is Caused by Ideology, not Technology'

John Quiggin:

Inequality is caused by ideology, not technology, by John Quiggin: I’ve just had an article published at New Left Project, under the title Don’t Blame the Internet for Rising Inequality. Much of it will be familiar, but I want to stress a particular, and I think novel, critique of the idea that skill-intensive technology is responsible for rising inequality.

...The real gains over this period have gone to a subset of the top 1 per cent, dominated by CEOs, other senior managers and finance industry operators. This group has nearly quadrupled its real income over the past 30 years...

This is a major problem for the Race Against the Machine hypothesis. Much of the growth in income share of the top 1 per cent occurred before 2000, when the stereotypical CEO was a technological illiterate who had his (sic) secretary print out his emails. Even today, the technology available to the typical senior manager—a PC with access to the Internet, and a corporate intranet with very limited capabilities—is no different to that of the average knowledge worker, and inferior to that of workers in tech-intensive specialties.

Nor does the ownership of capital explain much here. Even for tech-intensive jobs, the capital and telecom requirements for an individual worker cost no more than $10,000 for a top-of-the-line computer setup (amortized over 3-5 years), and perhaps $1000 a year for a broadband internet connection. This is well within the capacity of self-employed professional workers to pay for themselves, and in fact many professionals have better equipment at home than at work. Advances in information and communications technology thus can’t explain the vast majority of the growth in inequality over the past three decades.
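Quiggin's cost claim is simple arithmetic; a quick sketch using only the figures he states above:

```python
# Quiggin's figures: a $10,000 top-of-the-line setup amortized over 3-5 years,
# plus roughly $1,000 a year for a broadband connection.
setup_cost = 10_000
broadband_per_year = 1_000

for years in (3, 4, 5):
    annual = setup_cost / years + broadband_per_year
    print(f"{years}-year amortization: ${annual:,.0f} per year")
```

Even on the fastest three-year schedule this comes to roughly $4,300 a year, a small fraction of a professional salary, which is the point of the argument.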


Tuesday, March 04, 2014

'Will MOOCs Lead to the Democratisation of Education?'

Some theoretical results on MOOCs:

Will MOOCs lead to the democratisation of education?, by Joshua Gans: With all the recent discussion of how hard it is for journalists to read academic articles, I thought I’d provide a little service here and ‘translate’ the recent NBER working paper by Daron Acemoglu, David Laibson and John List, “Equalizing Superstars” for a general audience. The paper contains a ‘light’ general equilibrium model that may be difficult for some to parse.
The paper is interested in what the effect of MOOCs or, in general, web-based teaching options would be on educational outcomes around the world, the distribution of those outcomes and the wages of teachers. ...

Thursday, February 20, 2014

'Moore's Law: At Least a Little Longer'

Tim Taylor:

Moore's Law: At Least a Little Longer: One can argue that the primary driver of U.S. and even world economic growth in the last quarter-century is Moore's law--that is, the claim first advanced back in 1965 by Gordon Moore, one of the founders of Intel Corporation, that the number of transistors on a computer chip would double every two years. But can it go on? Harald Bauer, Jan Veira, and Florian Weig of the McKinsey Global Institute consider the issues in "Moore’s law: Repeal or renewal?", a December 2013 paper. ...
The authors argue that technological advances already in the works are likely to sustain Moore's law for another 5-10 years. As I've written before, the power of doubling is difficult to appreciate at an intuitive level, but it means that each increase is as big as everything that came before. Intel is now etching transistors at 22 nanometers, and as the company points out, you could fit 6,000 of these transistors across the width of a human hair; or if you prefer, it would take 6 million of these 22 nanometer transistors to cover the period at the end of a sentence. Also, a 22 nanometer transistor can switch on and off 100 billion times in a second.
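The "as big as everything that came before" property of doubling can be seen in a few lines; a toy sketch (the generation counts are illustrative, not actual Intel process nodes):

```python
# Toy doubling sequence: with a fixed doubling time, each new generation's
# gain equals the running total, i.e. all previous gains plus the start.
transistors = 1
gains = []
for generation in range(1, 6):
    gain = transistors            # doubling: this step's gain equals the current total
    gains.append(gain)
    transistors += gain
    assert gain == sum(gains[:-1]) + 1  # ...which equals all previous gains + the start
    print(f"generation {generation}: total {transistors}, gain this step {gain}")
```

After five doublings the total is 32x the start, and the final step alone added more than the first four combined.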
The McKinsey analysts point out that while it is technologically possible for Moore's law to continue, the economic costs of further advances are becoming very high. They write: "A McKinsey analysis shows that moving from 32nm to 22nm nodes on 300-millimeter (mm) wafers causes typical fabrication costs to grow by roughly 40 percent. It also boosts the costs associated with process development by about 45 percent and with chip design by up to 50 percent. These dramatic increases will lead to process-development costs that exceed $1 billion for nodes below 20nm. In addition, the state-of-the art fabs needed to produce them will likely cost $10 billion or more. As a result, the number of companies capable of financing next-generation nodes and fabs will likely dwindle."
Of course, it's also possible to have performance improvements and cost decreases on chips already in production: for example, the cutting edge of computer chips today will probably look like a steady old cheap workhorse of a chip in about five years. I suspect that we are still near the beginning, and certainly not yet at the middle, of finding ways for information and communications technology to alter our work and personal lives. But the physical problems and higher costs of making silicon-based transistors at an ever-smaller scale won't be denied forever, either.

Monday, February 17, 2014

Paul Krugman: Barons of Broadband

We should be more worried than we are about monopoly power:

Barons of Broadband, by Paul Krugman, Commentary, NY Times: Last week’s big business news was the announcement that Comcast ... has reached a deal to acquire Time Warner... If regulators approve the deal, Comcast will be an overwhelmingly dominant player in the business...
So let me ask two questions about the proposed deal. First, why would we even think about letting it go through? Second, when and why did we stop worrying about monopoly power?
On the first question, broadband Internet and cable TV are already highly concentrated industries... Comcast perfectly fits the old notion of monopolists as robber barons...
And there are good reasons to believe ... that monopoly power has become a significant drag on the U.S. economy as a whole.
There used to be a bipartisan consensus in favor of tough antitrust enforcement. During the Reagan years, however, antitrust policy went into eclipse, and ever since measures of monopoly power... have been rising fast.
At first, arguments against policing monopoly power pointed to the alleged benefits of mergers in terms of economic efficiency. Later, it became common to assert that the world had changed in ways that made all those old-fashioned concerns about monopoly irrelevant. Aren’t we living in an era of global competition? Doesn’t ... creative destruction ... constantly tear down old industry giants and create new ones?
The truth, however, is that many goods and especially services aren’t subject to international competition: New Jersey families can’t subscribe to Korean broadband. Meanwhile, creative destruction has been oversold: Microsoft may be ... in decline, but it’s still enormously profitable thanks to the monopoly position it established decades ago.
Moreover, there’s good reason to believe that monopoly is itself a barrier to innovation...: why upgrade your network or provide better services when your customers have nowhere to go?
And the same phenomenon may be ... holding back the economy as a whole. One puzzle ... has been the disconnect between profits and investment. Profits are at a record high..., yet corporations aren’t reinvesting their returns in their businesses. Instead, they’re buying back shares, or accumulating huge piles of cash. This is exactly what you’d expect to see if a lot of those record profits represent monopoly rents.
It’s time, in other words, to go back to worrying about monopoly power, which we should have been doing all along. And the first step on the road back from our grand detour on this issue is obvious: Say no to Comcast.

Tuesday, February 11, 2014

'Enslave the Robots and Free the Poor'

Martin Wolf on the "rise of intelligent machines":

Enslave the robots and free the poor, by Martin Wolf, Commentary, FT: ...we must reconsider leisure. For a long time the wealthiest lived a life of leisure at the expense of the toiling masses. The rise of intelligent machines makes it possible for many more people to live such lives without exploiting others. Today’s triumphant puritanism finds such idleness abhorrent. Well, then, let people enjoy themselves busily. What else is the true goal of the vast increases in prosperity we have created?
...we will need to redistribute income and wealth. ... The revenue could come from taxes on bads (pollution, for example) or on rents (including land and, above all, intellectual property). Property rights are a social creation. The idea that a small minority should overwhelmingly benefit from new technologies should be reconsidered. ...

Saturday, February 08, 2014

Job Polarization and Middle-Class Workers’ Wages

The decline of the middle class:

Job polarization and the decline of middle-class workers’ wages, by Michael Boehm, Vox EU: The decline of the middle class has come to the forefront of debate in the US and Europe in recent years. This decline has two important components in the labour market. First, the number of well-paid middle-skill jobs in manufacturing and clerical occupations has decreased substantially since the mid-1980s. Second, the relative earnings for workers around the median of the wage distribution dropped over the same period, leaving them with hardly any real wage gains in nearly 30 years.
Job polarization and its cause
Pioneering research by Autor, Katz, and Kearney (2006), Goos and Manning (2007), and Goos, Manning, and Salomons (2009) found that the share of employment in occupations in the middle of the skill distribution has declined rapidly in the US and Europe. At the same time the share of employment at the upper and lower ends of the occupational skill distribution has increased substantially. Goos and Manning termed this phenomenon “job polarization”, which is depicted for US workers in Figure 1.

Figure 1. Changes in US employment shares by occupations since the end of the 1980s


Notes: The chart depicts the percentage point change in employment in the low-, middle- and high-skilled occupations in the National Longitudinal Survey of Youth (NLSY) and the comparable years and age group in the more standard Current Population Survey (CPS). The high-skill occupations comprise managerial, professional services and technical occupations. The middle-skill occupations comprise sales, office/administrative, production, and operator and laborer occupations. The low-skill occupations include protective, food, cleaning and personal service occupations.
In an influential paper, Autor, Levy, and Murnane (2003) provide a compelling explanation: they found that middle-skilled manufacturing and clerical occupations are characterized by a high intensity of procedural, rule-based activities which they call “routine tasks”. As it happens, these routine tasks can relatively easily be coded into computer programs.
Therefore, the rapid improvements in computer technology over the last few decades have provided employers with ever cheaper machines that can replace humans in many middle-skilled activities such as bookkeeping, clerical work and repetitive production tasks. These improvements in technology also enable employers to offshore some of the routine tasks that cannot be directly replaced by machines (Autor 2010).
Moreover, cheaper routine tasks provided by machines complement the non-routine abstract tasks that are intensively carried out in high-skill occupations. For example, data processing computer programs strongly increased the productivity of highly-skilled professionals. Machines also do not seem to substitute for the non-routine manual tasks that are intensively carried out in low-skill occupations. For example, computers and robots are still much less capable of driving taxis and cleaning offices than humans. Thus, the relative economy-wide demand for middle-skill routine occupations has declined substantially.
This routinization hypothesis, due to Autor, Levy, and Murnane, has been tested in many different settings and it is widely accepted as the main driving force of job polarization.
The effect of job polarization on wages
Around the same time as job polarization gathered steam in the US, the distribution of wages started polarizing as well. That is, real wages for middle-class workers stagnated while earnings of the lowest and the highest percentiles of the wage distribution increased. This is depicted in Figure 2.

Figure 2. Percentage growth of the quantiles of the US wage distribution since the end of the 1980s


Notes: The chart depicts the change in log real wages along the quantiles of the wage distribution between the two cohorts for the NLSY and the comparable years and age group in the CPS.

It thus seems natural to think that the polarization of wages is just another consequence of the declining demand for routine tasks. However, there exists some evidence that is not entirely consistent with this thought: virtually all European countries experienced job polarization as well, yet most of them haven’t seen wage polarization but rather a continued increase in inequality across the board. Moreover, other factors that may have generated wage polarization in the US have been proposed (e.g. an increase in the minimum wage, de-unionization, and ‘classical’ skill-biased technical change).

In my recent paper I try to establish a closer link between job polarization and workers’ wages (Boehm 2013). In particular, I ask three interrelated questions:

  • First, have the relative wages of workers in middle-skill occupations declined as should be expected by the routinization hypothesis?
  • Second, have the relative wage rates paid per ‘constant unit of skill’ in the middle-skill occupations dropped with polarization?
  • Third, can job polarization explain the changes in the overall wage distribution?

I answer these questions by analyzing two waves of a representative survey of teenagers in the US carried out in 1979 and 1997. The survey responses provide detailed and multidimensional characteristics of these young people that influence their occupational choices and wages when they are 27 years old, at the end of the 1980s and the end of the 2000s respectively.

Using these characteristics, I compute the probabilities of workers in the 1980s and today choosing middle-skill occupations and then compare the wages associated with these probabilities over time. My empirical strategy relies on predicting the occupations that today’s workers would have chosen had they lived in the 1980s and then comparing their wages to those of workers who actually chose these occupations at that time.
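As a stylised sketch of this strategy, consider the following toy example. All of the traits, occupations, and wages below are invented for illustration and are not Boehm's NLSY data; the point is only the mechanics: fit an occupational-choice rule on the early cohort, then compare wages over time for workers whose traits predicted the same choices back then.

```python
# Invented data: each worker is (trait, occupation, real wage).
from statistics import mean

cohort_1980s = [
    ("routine-skilled",  "middle", 18.0), ("routine-skilled",  "middle", 17.0),
    ("routine-skilled",  "high",   22.0), ("abstract-skilled", "high",   25.0),
    ("abstract-skilled", "high",   24.0), ("abstract-skilled", "middle", 19.0),
]
cohort_2000s = [
    ("routine-skilled",  "low",    11.0), ("routine-skilled",  "middle", 15.0),
    ("routine-skilled",  "middle", 14.0), ("abstract-skilled", "high",   32.0),
    ("abstract-skilled", "high",   30.0), ("abstract-skilled", "high",   31.0),
]

def p_middle(cohort, trait):
    """Choice rule: probability of a middle-skill occupation given the trait."""
    occs = [occ for t, occ, _ in cohort if t == trait]
    return occs.count("middle") / len(occs)

def mean_wage(cohort, trait):
    return mean(w for t, _, w in cohort if t == trait)

# Fit the choice rule on the 1980s cohort, then compare wages across cohorts.
for trait in ("routine-skilled", "abstract-skilled"):
    print(f"{trait}: P(middle | trait, 1980s) = {p_middle(cohort_1980s, trait):.2f}, "
          f"wage change = {mean_wage(cohort_2000s, trait) - mean_wage(cohort_1980s, trait):+.1f}")
```

In this made-up example, workers whose traits made them likely to choose middle-skill occupations in the 1980s see their wages fall across cohorts, while those predicted into high-skill occupations gain, which is the qualitative pattern Boehm's estimates deliver.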

The results from this approach show a substantial negative effect of job polarization on middle-skill workers. The positive wage effect associated with a 1% higher probability of working in high-skill jobs (compared to middle-skill jobs) almost doubled between the 1980s and today. The negative wage effect associated with a 1% higher probability of working in low-skill services jobs compared with middle-skill jobs attenuated by over a third over the same period.

I find similar results when controlling for college education, which is arguably a measure of absolute skill. This suggests that it is indeed the relative advantage in the middle-skill occupations for which the returns in the labor market have declined.

In the next step of my analysis, I estimate the changes in relative market wage rates that are offered for a constant unit of skill in each of the three occupational groups. Again, the position of the middle-skill occupations deteriorates substantially: the wage rates paid in the high-skill occupations increased by 20% compared to the middle while the wage rate in the low-skill occupations rose by 30%. This decline in the relative attractiveness of working in middle-skill occupations is consistent with the massive outflow of workers from these jobs.

Finally, I check what effect the changing prices of labour may have had on the overall wage distribution and whether they can explain the wage polarization that we observe in the US. Figure 3 shows that the change in the wage distribution due to these price effects reproduces the overall distribution reasonably well in the upper half while it fails to match the increase of wages for the lowest earners compared to middle earners.

Figure 3. Actual and counterfactual changes in the US wage distribution


Notes: The chart plots the actual and counterfactual changes in the wage distribution in the NLSY when workers in the 1980s are assigned the estimated price changes in their occupations.

At first glance, this is surprising given the strong increase in relative wage rates for low-skill work and the increase in the wages of workers in low-skill occupations. The reason is that these workers now move up in the wage distribution, which lifts not only the (low) quantiles where they started out but also the (middle) quantiles where they end up. The reverse happens for workers in middle-skill occupations, but with the same effect on the wage distribution.
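This re-ranking mechanism can be illustrated with a toy simulation (a sketch with made-up wage numbers, not the paper's actual NLSY estimates): give low-skill occupations a 30% wage-rate increase and high-skill occupations a 20% increase, leave the middle unchanged, and compare the pooled wage distribution position by position.

```python
# Toy illustration (hypothetical wages, not NLSY data): occupational wage-rate
# changes can lift middle quantiles because low-skill workers move up past them.
low  = [10, 12, 14, 16]   # low-skill occupation wages
mid  = [15, 17, 19, 21]   # middle-skill occupation wages (rate unchanged)
high = [30, 35, 40, 45]   # high-skill occupation wages

before = sorted(low + mid + high)
after  = sorted([w * 1.3 for w in low] + mid + [w * 1.2 for w in high])

# Position-by-position (quantile-like) changes in the pooled distribution:
changes = [round(a - b, 1) for b, a in zip(before, after)]
print(changes)
# The middle positions rise even though middle-skill wages themselves are flat,
# because re-ranked low-skill workers now occupy those quantiles.
```

The interior positions of the distribution rise without any change in middle-skill wage rates, which is exactly why the counterfactual in Figure 3 need not reproduce the observed gain of low earners relative to middle earners.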
Despite the above findings, my paper does not provide the last word on the effect of job polarisation on the bottom of the wage distribution. This is because, for example, my estimates do not take into account potential additional wage effects from workers moving out of the middle-skill occupations into low-skill occupations. Therefore, we cannot yet fully assess the roles that job polarisation and policy factors (such as the rise of the minimum wage) played in the lower part of the wage distribution in the US.

However, what emerges unambiguously from my work is that routinization has not only replaced middle-skill workers’ jobs but also strongly decreased their relative wages. Policymakers who intend to counteract these developments may want to consider the supply side: if there are investments in education and training that help low and middle earners catch up with high earners in terms of skills, this will also slow down or even reverse the increasing divergence of wages between those groups. In my view, the rising number of programs that try to tackle early inequalities in skill formation is therefore well-motivated from a routinization perspective.
Acemoglu, D and D H Autor (2011), “Skills, Tasks and Technologies: Implications for Employment and Earnings”, in Handbook of Labor Economics edited by Orley Ashenfelter and David Card, Vol. 4B, Ch. 12, 1043-1171.
Autor, D H (2010), "The polarization of job opportunities in the US labour market: Implications for employment and earnings", Center for American Progress and The Hamilton Project.
Autor, D H and D. Dorn (2013), “The Growth of Low-Skill Service Jobs and the Polarization of the US Labor Market”, The American Economic Review, 103(5), 1553–97.
Autor D H, L F Katz, and M S Kearney (2006), "The Polarization of the US Labor Market", The American Economic Review 96.2, 189-194.
Autor D H, F Levy and R Murnane (2003), ‘The Skill Content of Recent Technological Change: An Empirical Exploration’, Quarterly Journal of Economics 118(4): 1279-1333.
Boehm, M J (2013), “The Wage Effects of Job Polarization: Evidence from the Allocation of Talents”, Working Paper.
Goos, M and A Manning (2007), "Lousy and lovely jobs: The rising polarization of work in Britain", The Review of Economics and Statistics 89.1, 118-133.
Goos M, A Manning and A Salomons (2009), “Explaining Job Polarization in Europe: The Roles of Technology, Globalization and Institutions”, American Economic Review Papers and Proceedings 99(2): 58-63.
Michaels G, A Natraj, and J Van Reenen (2013), “Has ICT Polarized Skill Demand? Evidence from Eleven Countries over 25 Years”, forthcoming in Review of Economics and Statistics; earlier version available as CEP Discussion Paper No. 987.
Spitz-Oener, A (2006), "Technical change, job tasks, and rising educational demands: Looking outside the wage structure", Journal of Labor Economics 24.2, 235-270.
1 This figure and the ones below are based on two representative samples for 27 year old males in the United States (the National Longitudinal Survey of Youth (NLSY) and the Current Population Survey (CPS)). For qualitatively similar statistics on all prime age workers, refer to Acemoglu & Autor (2011).
2 Examples of tests of the routinization hypothesis include Michaels et al (2013) who find that industries with faster growth of information and communication technology had greater decreases in the relative demand for middle educated workers; Spitz-Oener (2006) who shows that job tasks have become more complex in occupations that rapidly computerized; and Autor and Dorn (2013) who show that local labour markets that specialised in routine tasks adopted information technology faster and experienced stronger job polarisation.
3 For the details of this estimation, please refer to the paper.

Thursday, January 23, 2014

Don't Blame the Robots for our Wage or Job Problems

I'm a bit more sympathetic than Larry Mishel to the argument that skill-biased technical change causes wage inequality; technological change is at least part of the story in my view (but, importantly, not the whole story: unionization, relative bargaining power between workers and firms, political forces, etc. are also at work). Still, his arguments are certainly worth noting (and this extract may not fully reflect his views):

The Robots Are Here and More Are Coming: Do Not Blame Them for our Wage or Job Problems, by Lawrence Mishel: The “robots are coming” narrative dominating discussions of the economy was popularized by Erik Brynjolfsson and Andrew McAfee in their 2011 book, Race Against the Machine. They have built on that theme in the richer, deeper The Second Machine Age (W.W. Norton, 2014). The first half of the book provides a valuable window, at least for a non-technologist like me, into past developments and the future trajectory of digitization. Their claim is that digitization will do for mental power what the steam engine did for muscle power—that is, quite a bit, transforming our lives at work and play.
The remainder of the book dwells on the role of digitization in generating both bounty (more consumer choice and greater output, wealth, and income) and spread (greater inequalities of wages, income, and wealth). In treating these topics, they heavily rely on the work of others. As in their last book, they do not provide much direct evidence of the connection between technological change and wage inequality. I study these issues and believe they are wrong to tightly link digitization and robots to wage inequality and the slow job growth of the 2000s. Although the authors claim “technology is certainly not the only force causing this rise in spreads, but it is one of the main ones” my fear is that this book, like their last one, will fuel the mistaken narrative that technology is responsible for our job and wage problems and that we are powerless to obtain more equitable growth. ...
On wage inequality, the authors offer “skill-biased technical change” or SBTC as the explanation. In fact, they offer two distinct SBTC narratives; the two cannot both be true at the same time, and neither aptly explains wage trends.
In general, SBTC narratives are weak because they cannot explain one of the key inequality trends, the remarkable wage and income growth of the top 1.0 and 0.1 percent. ...
Specifically, the authors’ first SBTC narrative, the “race between technology and skills,” falls short because it doesn’t square with recent trends. Under this narrative, technological change makes employers value education more, and the more education or skills one has, the better one fares. Despite the absence of prima facie evidence for this popular narrative for two decades, it barrels along anyway. For instance, the wage gap between middle and low-wage workers has been stable or falling since 1987 or so, meaning that those with the least skills have done at least as well or better than those in the middle. ...
The second narrative is that technology is eroding jobs and wages in middle-wage occupations but expanding opportunities and wages among low- and high-wage occupations. This “job polarization” narrative, which emerged around 2006, was designed to overcome the flaw in the education narrative’s explanation of wage trends in the 1990s, when low-wage workers fared as well or better than middle-wage workers. The accumulating evidence now shows that job polarization has not occurred in the entire 2000s...
So, again, these two SBTC narratives can’t both be true—either middle-wage workers are doing better than low-wage workers or they’re not. And neither one can explain the trends of the 2000s, the period where one would expect digitization’s impact to be most evident. The robots are here and more will be coming but they are not responsible for our employment or our wage problems. Read the first half of the book to learn about technology but take the second half with a grain of salt. For understanding wage inequality you should look elsewhere.

Tuesday, December 24, 2013

'Robots and Economic Luddites'

Dean Baker:

Robots and Economic Luddites: They Aren't Taking Our Jobs Quickly Enough: Lydia DePillis warns us in the Post of 8 ways that robots will take our jobs. It is amazing how the media have managed to hype the fear of robots taking our jobs at the same time that they have built up fears over huge budget deficits bankrupting the country. You don't see the connection? Maybe you should be an economics reporter for a leading national news outlet.
Okay, let's get to basics. The robots taking our jobs story is a story of labor surplus, too many workers, too few jobs. Everything that needs to be done is being done by the robots. There is nothing for the rest of us to do but watch.
There can of course be issues of distribution. If the one percent are able to write laws that allow them to claim everything the robots produce then they can make most of us very poor. But this is still a story of society of plenty. We can have all the food, shelter, health care, clean energy, etc. that we need; the robots can do it for us.
Okay, now let's flip over to the budget crisis that has the folks at the Washington Post losing sleep. This is a story of scarcity. We are spending so much money on our parents' and grandparents' Social Security and Medicare that there is no money left to educate our kids.
Some confused souls may say that the problem may not be an economic one, but rather a fiscal problem. The government can't raise the tax revenue to pay for both the Social Security and Medicare for the elderly and the education of our kids. This is confused because if we are living in the world where the robots are doing all the work, then the government really doesn't need to raise tax revenue; it can just print the money it needs to back its payments.
Okay, now everyone is completely appalled. The government is just going to print trillions of dollars? That will send inflation through the roof, right? Not in the world where robots are doing all the work it won't. If we print money it will create more demands for goods and services, which the robots will be happy to supply. As every intro econ graduate knows, inflation is a story of too much money chasing too few goods and services. But in the robots do everything story, the goods and services are quickly generated to meet the demand. Where's the inflation, robots demanding higher wages?
In short, you can craft a story where we have huge advances in robot technology so that the need for human labor is drastically reduced. You can also craft a story where an aging population leads to too few workers being left to support too many retirees. However, you can't believe both at the same time unless you write on economic issues for the Washington Post.
Just in case anyone cares about what the data says on these issues, the robots don't seem to be winning out too quickly. Productivity growth has slowed sharply over the last three years and is well below the pace of the 1947-73 golden age. (Robots are just another form of good old-fashioned productivity growth.)

[Figure: labor productivity growth]

On the other hand, the scarcity mongers don't have much of a case either. Even if productivity growth stays at just a 1.5 percent annual rate its impact on raising wages and living standards will swamp any conceivable tax increases associated with caring for a larger population of retirees.

Thursday, November 21, 2013

'Don’t Blame the Robots: Assessing the Job Polarization Explanation of Growing Wage Inequality'

From Lawrence Mishel, Heidi Shierholz, and John Schmitt:

Don’t Blame the Robots: Assessing the Job Polarization Explanation of Growing Wage Inequality, by Lawrence Mishel, Heidi Shierholz, and John Schmitt, EPI–CEPR Working Paper: Executive summary Many economists contend that technology is the primary driver of the increase in wage inequality since the late 1970s, as technology-induced job skill requirements have outpaced the growing education levels of the workforce. The influential “skill-biased technological change” (SBTC) explanation claims that technology raises demand for educated workers, thus allowing them to command higher wages—which in turn increases wage inequality. A more recent SBTC explanation focuses on computerization’s role in increasing employment in both higher-wage and lower-wage occupations, resulting in “job polarization.” This paper contends that current SBTC models—such as the education-focused “canonical model” and the more recent “tasks framework” or “job polarization” approach mentioned above—do not adequately account for key wage patterns (namely, rising wage inequality) over the last three decades. Principal findings include:
1. Technological and skill deficiency explanations of wage inequality have failed to explain key wage patterns over the last three decades, including the 2000s.
The early version of the “skill-biased technological change” (SBTC) explanation of wage inequality posited a race between technology and education where education levels failed to keep up with technology-driven increases in skill requirements, resulting in relatively higher wages for more educated groups, which in turn fueled wage inequality (Katz and Murphy 1992; Autor, Katz, and Krueger 1998; and Goldin and Katz 2010). However, the scholars associated with this early, and still widely discussed, explanation highlight that it has failed to explain wage trends in the 1990s and 2000s, particularly the stability of the 50/10 wage gap (the wage gap between low- and middle-wage earners) and the deceleration of the growth of the college wage premium since the early 1990s (Autor, Katz, and Kearney 2006; Acemoglu and Autor 2012). This motivated a new technology-based explanation (formally called the “tasks framework”) focused on computerization’s impact on occupational employment trends and the resulting “job polarization”: the claim that occupational employment grew relatively strongly at the top and bottom of the wage scale but eroded in the middle (Autor, Levy, and Murnane 2003; Autor, Katz, and Kearney 2006; Acemoglu and Autor 2012; Autor 2010). We demonstrate that this newer version—the task framework, or job polarization analysis—fails to explain the key wage patterns in the 1990s it intended to explain, and provides no insights into wage patterns in the 2000s. We conclude that there is no currently available technology-based story that can adequately explain the wage trends of the last three decades.
2. History shows that middle-wage occupations have shrunk and higher-wage occupations have expanded since the 1950s. This has not driven any changed pattern of wage trends.
We demonstrate that key aspects of “job polarization” have been taking place since at least 1950. We label this “occupational upgrading” since it primarily consists of shrinkage in relative employment in middle-wage occupations and a corresponding expansion of employment in higher-wage occupations. Lower-wage occupations have remained a small (less than 15 percent) and relatively stable share of total employment since the 1950s, though they have grown in importance in the 2000s. Occupational upgrading has occurred in decades with both rising and falling wage inequality and in decades with both rising and falling median wages, indicating that occupational employment patterns, by themselves, cannot explain the salient wage trends.
3. Evidence for job polarization is weak.
We use the Current Population Survey to replicate existing findings on job polarization, which are all based on decennial census data. Job polarization is said to exist when there is a U-shaped plot in changes in occupational employment against the initial occupational wage level, indicating employment expansion among high- and low-wage occupations relative to middle-wage occupations. As shown in Figure E (explained later in the paper but introduced here), in important cases, these plots do not take the posited U-shape. More importantly, in all cases the lines traced out fit the data very poorly, obscuring large variations in employment growth across occupational wage levels.
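The U-shape test described above can be sketched as a quadratic least-squares fit of occupational employment-share changes against initial occupational wage percentiles; a positive coefficient on the squared term indicates the posited U-shape. This is a minimal illustration with made-up numbers (a perfect U, for clarity), whereas the authors' point is that on real data such fitted curves explain the variation poorly.

```python
# Sketch of the job-polarization test: regress changes in occupational
# employment shares on the occupation's initial wage percentile with a
# quadratic; a positive squared-term coefficient indicates a U-shape.
# (Illustrative numbers only, not the paper's CPS estimates.)

def quad_fit(xs, ys):
    """Least-squares fit of y = a + b*x + c*x**2 via the normal equations."""
    s = [sum(x ** k for x in xs) for k in range(5)]
    t = [sum(y * x ** k for x, y in zip(xs, ys)) for k in range(3)]
    A = [[s[0], s[1], s[2]], [s[1], s[2], s[3]], [s[2], s[3], s[4]]]
    b = t[:]
    for i in range(3):                      # Gaussian elimination w/ pivoting
        p = max(range(i, 3), key=lambda r: abs(A[r][i]))
        A[i], A[p], b[i], b[p] = A[p], A[i], b[p], b[i]
        for r in range(i + 1, 3):
            f = A[r][i] / A[i][i]
            for c in range(i, 3):
                A[r][c] -= f * A[i][c]
            b[r] -= f * b[i]
    coef = [0.0, 0.0, 0.0]
    for i in (2, 1, 0):                     # back substitution
        coef[i] = (b[i] - sum(A[i][c] * coef[c] for c in range(i + 1, 3))) / A[i][i]
    return coef                             # [a, b, c]

# Hypothetical occupations: wage percentile vs. employment-share change (pp)
pctile = [10, 20, 30, 40, 50, 60, 70, 80, 90]
change = [0.01 * (x - 50) ** 2 - 1 for x in pctile]
a, b, c = quad_fit(pctile, change)
print(round(c, 4))   # c > 0: U-shaped
```

On real occupational data, one would also compute the R² of this fit; a positive c with a very low R² is precisely the "fit the data very poorly" situation the paper describes.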
4. There was no occupational job polarization in the 2000s.
In the 2000s, relative employment expanded in lower-wage occupations, but was flat at both the middle and the top of the occupational wage distribution. The lack of overall job polarization in the 2000s is a phenomenon visible in both the analyses of decennial census/American Community Survey data provided by proponents of the tasks framework/job polarization perspective (Autor 2010; Acemoglu and Autor 2012) and in our analysis of the Current Population Survey. Thus, the standard techniques applied to the data for the 2000s do not establish even a prima facie case for the existence of overall job polarization in the most recent decade. This leaves the job polarization story, at best, as an account of wage inequality in the 1990s. It certainly calls into question whether it should be a description of current labor market trends and the basis of current policy decisions.
5. Occupational employment trends do not drive wage patterns or wage inequality.
We demonstrate that the evidence does not support the key causal links between technology-driven changes in tasks and occupational employment patterns and wage inequality that are at the core of the tasks framework and job polarization story. Proponents of job polarization as a determinant of wage polarization have, for the most part, only provided circumstantial evidence: both trends occurred at the same time. The causal story of the tasks framework is that technology (i.e., computerization) drives changes in the demand for tasks (increasing demand at the top and bottom relative to the middle), producing corresponding changes in occupational employment (increasing relative employment in high- and low-wage occupations relative to middle-wage occupations). These changes in occupational employment patterns are said to drive changes in overall wage patterns, raising wages at the top and bottom relative to the middle. However, the intermediate step in this story must be that occupational employment trends change the occupational wage structure, raising relative wages for occupations with expanding employment shares and vice-versa. We demonstrate that there is little or no connection between decadal changes in occupational employment shares and occupational wage growth, and little or no connection between decadal changes in occupational wages and overall wages. Changes within occupations greatly dominate changes across occupations so that the much-focused-on occupational trends, by themselves, provide few insights.
6. Occupations have become less, not more, important determinants of wage patterns.
The tasks framework suggests that differences in returns to occupations are an increasingly important determinant of wage dispersion. Using the CPS, we do not find this to be the case. We find that a large and increasing share of the rise in wage inequality in recent decades (as measured by the increase in the variance of wages) occurred within detailed occupations. Furthermore, using DiNardo, Fortin, and Lemieux’s reweighting procedure, we do not find that occupations consistently explain a rising share of the change in upper-tail and lower-tail inequality for either men or women.
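The within- versus between-occupation split used here follows the law of total variance: overall wage variance equals the (employment-weighted) average variance within occupations plus the variance of occupational mean wages. A minimal sketch with hypothetical occupations and wages:

```python
# Sketch of a within/between-occupation variance decomposition (law of total
# variance). Occupation names and wages are hypothetical.
from statistics import fmean, pvariance

def decompose(wages_by_occ):
    all_wages = [w for ws in wages_by_occ.values() for w in ws]
    n, grand = len(all_wages), fmean(all_wages)
    total = pvariance(all_wages, mu=grand)
    within = sum(len(ws) * pvariance(ws) for ws in wages_by_occ.values()) / n
    between = sum(len(ws) * (fmean(ws) - grand) ** 2
                  for ws in wages_by_occ.values()) / n
    return total, within, between

occs = {"clerical": [10, 12, 14], "engineering": [30, 34, 38]}
total, within, between = decompose(occs)
print(total, within, between)   # total == within + between
```

If inequality rises mainly through the `within` term, occupational employment shifts explain little of the increase, which is the paper's finding.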
7. An expanded demand for low-wage service occupations is not a key driver of wage trends.
We are skeptical of the recent efforts of Autor and Dorn (2013) that ask the low-wage “service occupations” to carry much or all of the weight of the tasks framework. First, the small size and the slow, relatively steady growth of the service occupations suggest significant limitations of a technology-driven expansion of service occupations to be able to explain the large and contradictory changes in wage growth at the bottom of the distribution (i.e., between middle and low wages, the 50/10 wage differential), let alone movements at the middle or higher up the wage distribution. The service occupations remain a relatively small share of total employment; in 2007, they accounted for less than 13 percent of total employment, and just over half of employment in the bottom quintile of occupations ranked by wages. Moreover, these occupations have expanded only modestly in recent decades, increasing their employment share by 2.1 percentage points between 1979 and 2007, with most of the gain in the 2000s. Relative employment in all low-wage occupations, taken together, has been stable for the last three decades, representing a 21.1 percent share of total employment in 1979, 19.7 percent in 1999, and 20.0 percent in 2007.
Second, the expansion of service occupation employment has not driven their wage levels and therefore has not driven overall wage patterns. The timing of the most important changes in employment shares and wage levels in the service occupations is not compatible with conventional interpretations of the tasks framework. Essentially all of the wage growth in the service occupations over the last few decades occurred in the second half of the 1990s, when the employment share in these occupations was flat. The observed wage increases preceded almost all of the total growth in service occupations over the 1979–2007 period, which took place in the 2000s, when service occupation wages were falling (another trend that contradicts the overall claim of the explanatory power of service occupation employment trends).
8. Occupational employment trends provide only limited insights into the main dynamics of the labor market, particularly wage trends.
A more general point can and should be drawn from our findings: Occupational employment trends do not, by themselves, provide much of a read into key labor market trends because changes within occupations are dominant. Recent research and journalistic treatment of the labor market has highlighted the pattern of occupational employment growth to assess the extent of structural unemployment, the disproportionate increase in low-wage jobs, and the “coming of robots”—changes in workplace technology and the consequent impact on wage inequality. The recent academic literature on wage inequality has highlighted the role of changes in the occupational distribution of employment as the key factor. In particular, occupational employment trends have become increasingly used as indicators of job skill requirement changes, reflecting the outcome of changes in the nature of jobs and the way we produce goods and services. Our findings indicate, however, that occupational employment trends give only limited insight and leave little imprint on the evolution of the occupational wage structure, and certainly do not drive changes in the overall wage structure. We therefore urge extreme caution in drawing strong conclusions about overall labor market trends based on occupational employment trends by themselves.

I suppose I should note that I haven't read this closely enough yet to endorse every word (or not). Full paper here (scroll down).

Thursday, October 24, 2013

'Physiocracy and Robots'

Yet another travel day; can't remember the last weekend I was home (no complaints though), so one more from Brad DeLong and that's it for a while:

Physiocracy and Robots, by Brad DeLong: The physiocrats saw France as having four kinds of jobs:

  • Farmers
  • Skilled artisans
  • Flunkies
  • Landowning aristocrats

Farmers, they thought, produced the net value in the economy--the net product. Their labor combined with water, soil, and sun grew the food they and others ate. Artisans, the physiocrats thought, were best seen not as creators but as transformers of wealth--transformers of wealth in the form of food into wealth in the form of manufactures. Aristocrats collected this net product--agricultural production in excess of farmers' subsistence needs--and spent it buying manufactured goods and, when they got sated with manufactured goods, employing flunkies.

In this framework, the key economic variables are:

  • the fraction f who are farmers.
  • the net product per farmer n.
  • the fraction m who can be set to work making manufactured goods for the aristocrats to consume before they become sated.

The key equilibrium quantity in this system is:

W = (nf - m)/(1 - f - m)

This gives the standard of living of the typical flunky--say, a runner for His Grace the Cardinal. The numerator is the amount of resources on which flunkies can subsist. The denominator is the number of flunkies. If this quantity W is low, the country is poor: flunkies are ill-paid, begging and thievery are rampant, and the reserve army of potential unemployed puts downward pressure on artisan and farmer living standards as well. If this quantity is high, the country is prosperous.
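A quick numerical check of this framework (with illustrative parameter values, not historical estimates): holding the net product per farmer n and the artisan share m fixed, a falling farmer share f drives W down sharply.

```python
# W = (n*f - m) / (1 - f - m): flunky living standard in the physiocrat model.
# Parameter values below are illustrative, not historical estimates.

def flunky_wage(n, f, m):
    """n: net product per farmer, f: farmer share, m: artisan share."""
    return (n * f - m) / (1 - f - m)

n, m = 2.0, 0.2
for f in (0.6, 0.5, 0.4):
    print(f, round(flunky_wage(n, f, m), 2))
# W falls faster than f: the numerator loses n per departing farmer while the
# denominator (the flunky share) grows, squeezing W from both sides.
```

This more-than-proportional response of W to f is what made a shrinking farmer share look so alarming to the physiocrats.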

The physiocrats saw a France undergoing a secular decline in the farmer share f, and they worried. A fall in f produces a sharper decline in W. Therefore they called for:

  • Scientific farming to boost n and so boost the net product nf.
  • A reallocation of the tax burden to make it less onerous to be a farmer--and so boost the farmer share f and so boost the net product nf.

With the unquestioned assumption that there were limits on how high the net product per farmer n could be pushed, the physiocrats would have forecast that France of today, with only 5% of the population farmers, would be a hellhole: huge numbers of ill-paid flunkies sucking up to the aristocratic landlords.

Well, the physiocrats were wrong about the decline of the agricultural share of the labor force. And let us hope that the techno-pessimists are similarly wrong about the rise of the robots.

Saturday, October 12, 2013

'The ICT Revolution Isn’t Over'

Paul Krugman:

The ICT Revolution Isn’t Over: ...I thought I would make one casual observation about technology. Here it is:... the relatively limited impact so far of the much-heralded rise of ICT — information and communication technologies. For a long time these technologies seemed to be doing nothing for the economy; then, finally, they seemed to kick in circa 1995. But the new era of productivity growth, as Bob says, wasn’t a match for the long boom post World War II, and seemed to have petered out by the late 2000s.
What I’d note, however, is that there is almost surely a second wind coming. The 1995-2007 productivity rise was basically a “wired” phenomenon, a lot of it having to do with local area networks rather than the Internet. Wireless data is a whole different thing, and it’s a surprisingly recent thing — the iPhone was introduced in 2007, the iPad in 2010. And we know from repeated experience that it takes quite a while for new technologies to show up in economic growth, a point famously made by Paul David and confirmed by the 25-year lag between the introduction of the microprocessor and the 90s productivity takeoff.
So there’s more coming. How big is another question.

Thursday, September 26, 2013

The New Normal? Slower R&D Spending

From the Atlanta Fed's macroblog:

The New Normal? Slower R&D Spending: In case you need more to worry about, try this: the pace of research and development (R&D) spending has slowed. The National Science Foundation defines R&D as “creative work undertaken on a systematic basis in order to increase the stock of knowledge” and application of this knowledge toward new applications. ...
R&D spending is often cited as an important source of productivity growth within a firm, especially in terms of product innovation. But R&D is also an inherently risky endeavor, since the outcome is quite uncertain. So to the extent that economic and policy uncertainty has helped make businesses more cautious in recent years, a slow pace of R&D spending is not surprising. On top of that, the federal funding of R&D activity remains under significant budget pressure. See, for example, here.
So you can add R&D spending to the list of things that seem to be moving more slowly than normal. Or should we think of it as normal?

Sunday, September 08, 2013

'Is Technological Progress a Thing of the Past?'

Has technological progress slowed down?:

Is technological progress a thing of the past?, by Joel Mokyr, Vox EU: Technological progress has been at the heart of economic growth for two centuries. Some authors, however, have suggested that product and process innovation are running out of steam:

  • Robert J Gordon and Tyler Cowen, inter alia, have expressed the view that technological progress is slowing down (Gordon 2012, Cowen 2011).
  • Jan Vijg has suggested that the industrialised West of the 21st century will resemble the declining Empires of late Rome and Qing China (Vijg 2011).

Their basic point is that technological dynamism is fizzling out. The low-hanging fruits that have improved our lives so much in the 20th century have all been picked. We should be ready for a more stagnant world in which living standards rise little if at all.

History and the future

History is always a bad guide to the future, and economic historians should avoid making predictions. All the same, the historical record provides some insights into what makes societies technologically creative. Such insights, in turn, can be used as the basis for looking ahead to assess how likely such a decline is to take place.

The answer is short and simple: we ain’t seen nothin’ yet, the best is still to come.

Supply and the demand sides of innovation

My argument concerns both the supply and the demand sides of innovation. Starting with supply, what is it that accounts for sustained technological progress? The relation between scientific progress and technology is a complex two-way street. For example, 19th-century energy-physics learned more from the steam engine than the other way around.

The historical record makes clear that science depends on technology, in that it relies on the instruments and tools needed for science to advance. New instruments opened new horizons in what Derek Price called “artificial revelation”: observations through instruments that allow us to see things that would otherwise be invisible.


  • The Scientific Revolution of the 17th century depended critically on the development of the telescope, the microscope, the barometer, the vacuum pump, and similar contraptions.
  • The achromatic-lens microscope developed by Joseph J Lister (father of the famous surgeon) in the 1820s paved the way for the germ theory, the greatest breakthrough in medicine before 1900.

The same was true in physics, for instance:

  • The equipment designed by Heinrich Hertz allowed him to detect electromagnetic radiation in the 1880s and Robert Millikan’s ingenious oil-drop apparatus allowed him to measure the electric charge of an electron (1911).

In the twentieth century, the impact of instruments on progress is even more apparent. For example:

  • X-ray crystallography, developed in 1912, was crucial forty years later in the discovery of the structure of DNA.

If tools and instruments are a key to further scientific progress, it is hard not to be impressed by the possibilities of the 21st century:

  • DNA sequencing machines and cell analysis through flow cytometry (to mention but two) have revolutionised molecular microbiology.
  • High-powered computers are helping research in every domain conceivable, from content analysis in novels to the (very hard) problems of turbulence.
  • Astronomy, nanochemistry, and genetic engineering are all areas in which progress has been mind-boggling in the past few decades thanks to better tools.

To be sure, there is no automatic mechanism that turns better science into improved technology. But there is one reason to believe that in the near future it will do so better and more efficiently than ever before. The reason is access.

Inventors, engineers, applied chemists, and physicians all need access to best-practice science to answer an endless list of questions about what can and cannot be done. The 18th century had its own search engines: encyclopaedias and compendia that arranged all available knowledge in alphabetical order, making it easy to find. Textbooks had indexes that did the same. Libraries developed cataloguing systems and other techniques that made scientific information findable.

But these search systems have their limitations. One might have feared that the explosion of scientific knowledge in the 20th century could outrun our ability to find what we are looking for. Yet the reverse has happened. The development of searchable databanks of massive sizes has even outrun our ability to generate scientific knowledge. Copying, storing, transmitting, and searching vast amounts of information today is fast, easy, and practically free. We no longer deal with megabytes or gigabytes. Instead terms like petabytes (a million gigabytes) and zettabytes (a million petabytes) are being bandied about. Scientists can now find the tiniest needles in data haystacks as large as Montana in a fraction of a second.
And if science sometimes still proceeds by ‘trying every bottle on the shelf’ – as in some areas it still does – it can search with blinding speed over many more bottles, perhaps even peta-bottles.
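The prefixes above are easy to mix up, so it is worth checking the arithmetic. A trivial sketch using decimal SI prefixes (the definitions, not measurements of any particular databank):

```python
# Decimal SI storage units mentioned in the text.
GIGABYTE = 10**9
PETABYTE = 10**15
ZETTABYTE = 10**21

# A petabyte is a million gigabytes; a zettabyte is a million petabytes.
print(PETABYTE // GIGABYTE)   # factor between gigabyte and petabyte
print(ZETTABYTE // PETABYTE)  # factor between petabyte and zettabyte
```

Both ratios come out to one million, as the text states.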

Have all the low-hanging fruits been picked?

One answer is that the analogy is flawed. Science builds taller and taller ladders, so we can reach the upper branches, and then the branches above them.

  • A less obvious answer is that technological progress is fundamentally a dis-equilibrating process.

Whenever a technological solution is found for some human need, it creates a new problem. As Edward Tenner put it, technology ‘bites back’. The new technique then needs a further ‘technological fix’, but that one in turn creates another problem, and so on. The notion that an invention definitively ‘solves’ a human need, allowing us to move on and pick the next piece of fruit on the tree, is simply misleading.

  • Each solution perturbs some other component in the system and sows the seed of more needs; the ‘demand’ for new technology is thus self-sustaining.

The most obvious example of such a dynamic is our never-ending struggle with insects and harmful bacteria. In those wars, evolutionary mechanisms decree that after most battles we win, the enemy regroups, becoming resistant to whatever poison we throw at it. Drug-resistant bacteria are increasingly common and require novel approaches to new antibiotics. The search for novel antibiotics will resume with tools that Chain and Florey would never have dreamed of – but even such new antibiotics will eventually lead to adaptation.

In agriculture, the advance in fertiliser use has helped avert the Malthusian disasters that various doom-and-gloom authors predicted. But the vast increase in nitrate use following Fritz Haber’s epochal invention of the nitrogen-fixing process before World War I has now led to serious environmental problems in aquifer pollution and algae blooms. Again, technology will provide us with a fix, possibly through genetic engineering in which more plants can fix their own nitrates rather than needing fertiliser or bacteria that convert nitrates into nitrogen at more efficient rates.

Another example is energy. For better or for worse, modern technology has relied heavily on fossil fuels: first coal, then oil, and now increasingly natural gas. The bite-back here has been planetary in scope: climate change is no longer a prospect, it is a reality. Can new technology stop it? There is no doubt that it can, even if nobody can predict right now what shape that fix will take, or whether collective-action problems will allow it to happen.

What will the workers do?

Perhaps the biggest bite-back is what happens to human labour. If technology replaces workers, what will the role of people become? From Kurt Vonnegut to Erik Brynjolfsson, dystopias about an idle and vapid humanity in a robotised economy have worried people. There will be disruption and pain, but new technology will also create new demand for workers to perform the tasks it brings into being.

  • In 1914 who could have imagined occupations such as video game programmer or identity-theft security guard?
  • Physical therapists, social media consultants, and TV sports commentators are all occupations created by new technology.

It seems plausible that the future, too, will create occupations we cannot now even imagine. Furthermore, the task that 20th-century technology seems to have carried out most easily is to create activities that fill the ever-growing leisure time that early retirement and shorter work-weeks have created. Technological creativity has responded to the growth of free time: a bewildering choice of programmes on TV, the rise of mass tourism, access at will to virtually every film made and opera written, and a vast pet industry are just some examples. The cockfights and eye-gouging contests with which working classes in the past entertained themselves have been replaced by a gigantic high-tech spectator-sports industrial complex, both local and global.

Keynes’ vision

In his brief Economic Possibilities for our Grandchildren (1931) Keynes foresaw much of the future impact of technology. His insights may surprise those who regard him as the prophet of unemployment: “all this [technological change] means in the long run [is] that mankind is solving its economic problem” (italics in original). Contemplating a world in which work itself would become redundant thanks to science and capital (Keynes did not envisage robots, but they would have strengthened his case), he felt that this age of leisure and abundance was frightening people because “we have been trained too long to strive and not to enjoy”.


References

Brynjolfsson, Erik and Andrew McAfee (2011), Race Against the Machine, New York, Digital Frontier Press.

Cowen, Tyler (2011), The Great Stagnation, New York, Dutton.

Gordon, Robert J (2012), “Is US Economic Growth Over? Faltering Innovation Confronts the Six Headwinds”, NBER Working Paper 18315, August.

Mokyr, Joel (2002), The Gifts of Athena, Princeton, Princeton University Press.

Price, Derek J de Solla (1984a), “Notes towards a Philosophy of the Science/Technology Interaction” in Rachel Laudan (ed.) The Nature of Knowledge: are Models of Scientific Change Relevant?, Dordrecht, Kluwer.

Tenner, Edward (1996), Why Things Bite Back: Technology and the Revenge of Unintended Consequences, New York, Knopf.

Vijg, Jan (2011), The American Technological Challenge: Stagnation and Decline in the 21st Century, New York, Algora Publishing.

Vonnegut, Kurt (1974), Player Piano, New York, Dell Paperbacks.

Sunday, August 25, 2013

'How Technology Wrecks the Middle Class'

I've had many posts on the Autor and Dorn paper on the hollowing out of the middle class (see here too), and this is yet another, but let me add one thing. The paper explains the demise of the middle class as a result of technological change. However, there are those who argue that the troubles of the working class have other causes, e.g. the demise of unions as politicians favored business over labor, or other political/institutional changes that worked against the middle class. My own view is that it wasn't one or the other; both technology and politics mattered:

How Technology Wrecks the Middle Class, by David Autor and David Dorn, Commentary, NY Times: In the four years since the Great Recession officially ended, the productivity of American workers — those lucky enough to have jobs — has risen smartly. But the United States still has two million fewer jobs than before the downturn, the unemployment rate is stuck at levels not seen since the early 1990s and the proportion of adults who are working is four percentage points off its peak in 2000.
This job drought has spurred pundits to wonder whether a profound employment sickness has overtaken us. And from there, it’s only a short leap to ask whether that illness isn’t productivity itself. Have we mechanized and computerized ourselves into obsolescence?
Are we in danger of losing the “race against the machine,” as the M.I.T. scholars Erik Brynjolfsson and Andrew McAfee argue in a recent book? Are we becoming enslaved to our “robot overlords,” as the journalist Kevin Drum warned in Mother Jones? Do “smart machines” threaten us with “long-term misery,” as the economists Jeffrey D. Sachs and Laurence J. Kotlikoff prophesied earlier this year? Have we reached “the end of labor,” as Noah Smith laments in The Atlantic? ...

Sunday, August 18, 2013

'Does the Government Stifle Innovation? I Don’t See It (To the Contrary…)'

Jared Bernstein responds to the Robert Shiller article I linked to yesterday:

Does the Government Stifle Innovation? I Don’t See It (To the Contrary…): I usually find economist Robert Shiller’s commentaries resonant and insightful, but this one seemed more confusing than enlightening. The thrust of the piece is the concern that government activities to promote innovation can just as easily stifle it.

The piece introduces the notion of corporatism, from a new book by Ed Phelps. What means “corporatism”? It’s:

…a political philosophy in which economic activity is controlled by large interest groups or the government. Once corporatism takes hold in a society…people don’t adequately appreciate the contributions and the travails of individuals who create and innovate. An economy with a corporatist culture can copy and even outgrow others for a while…but, in the end, it will always be left behind. Only an entrepreneurial culture can lead.

... I don’t get it. While “entrepreneurial culture” will always be essential, many innovations that turned out to be economically important in the US have government fingerprints all over them. From machine tools, to railroads, transistors, radar, lasers, computing, the internet, GPS, fracking, biotech, nanotech—from the days of the Revolutionary War to today—the federal government has supported innovation often well before private capital would risk the investment (read about it here).

Shiller’s critical, for example, of the manufacturing innovation institutes that the White House has been both touting and setting up. He’s certainly right to ask what it is these new creations do and why we need them... But most manufacturers I’ve spoken to about them tell me they fill an important niche, essentially building a path through the Death Valley between the university lab and the factory floor. If so, that’s a classic coordination failure in which markets have been known to underinvest. ...

To be clear, my argument is not at all that government efforts in this area are all successful or are somehow always free of the corruption that is too common when politics enters the fray. My points are that a) many important innovations have involved government support somewhere along the way, and b) while one could and should worry about waste in this area, I’ve not seen evidence, nor does Shiller provide any, of stifling. ...

So I’d suggest we be more careful in where we point the corporatist finger.

Saturday, August 17, 2013

Shiller: Why Innovation Is Still Capitalism’s Star

Quick one -- see previous post -- from Robert Shiller:

Why Innovation Is Still Capitalism’s Star, by Robert Shiller, Commentary, NY Times: Capitalism is culture. To sustain it, laws and institutions are important, but the more fundamental role is played by the basic human spirit of independence and initiative.
The decisive role of the “spirit of capitalism” is an old concept, going back at least to Max Weber, but it needs refreshing today with new evidence and new thinking. Edmund S. Phelps, a professor of economics at Columbia University and a Nobel laureate, has written an interesting new book on the subject. It’s called “Mass Flourishing: How Grassroots Innovation Created Jobs, Challenge and Change” (Princeton University Press), and it contains a complex new analysis of the importance of an entrepreneurial culture.
Professor Phelps discerns a troubling trend in many countries, however, even the United States. He is worried about corporatism, a political philosophy in which economic activity is controlled by large interest groups or the government. Once corporatism takes hold in a society, he says, people don’t adequately appreciate the contributions and the travails of individuals who create and innovate. An economy with a corporatist culture can copy and even outgrow others for a while, he says, but, in the end, it will always be left behind. Only an entrepreneurial culture can lead.
Is the United States really becoming corporatist? I don’t entirely agree with such a notion. ...

Saturday, July 13, 2013

Computers and Unemployment: This Time is Different?

This essay/report by Frank Levy and Richard J. Murnane attempts to answer the question "How do we ensure American middle class prosperity in an era of ever-intensifying globalization and technological upheaval?":

Dancing with Robots: Human Skills for Computerized Work, by Frank Levy and Richard J. Murnane: On March 22, 1964, President Lyndon Johnson received a short, alarming memorandum from the Ad Hoc Committee on the Triple Revolution. The memo warned the president of threats to the nation beginning with the likelihood that computers would soon create mass unemployment:

A new era of production has begun. Its principles of organization are as different from those of the industrial era as those of the industrial era were different from the agricultural. The cybernation revolution has been brought about by the combination of the computer and the automated self-regulating machine. This results in a system of almost unlimited productive capacity which requires progressively less human labor. Cybernation is already reorganizing the economic and social system to meet its own needs.

The memo was signed by luminaries including Nobel Prize-winning chemist Linus Pauling, Scientific American publisher Gerard Piel, and economist Gunnar Myrdal (a future Nobel Prize winner). Nonetheless, its warning was only half right. There was no mass unemployment — since 1964 the economy has added 74 million jobs. But computers have changed the jobs that are available, the skills those jobs require, and the wages the jobs pay.

For the foreseeable future, the challenge of “cybernation” is not mass unemployment but the need to educate many more young people for the jobs computers cannot do. Meeting the challenge begins by recognizing what the Ad Hoc Committee missed—that computers have specific limitations compared to the human mind. Computers are fast, accurate, and fairly rigid. Human brains are slower, subject to mistakes, and very flexible. By recognizing computers’ limitations and abilities, we can make sense of the changing mix of jobs in the economy. We can also understand why human work will increasingly shift toward two kinds of tasks: solving problems for which standard operating procedures do not currently exist, and working with new information— acquiring it, making sense of it, communicating it to others. ...

Friday, June 14, 2013

Paul Krugman: Sympathy for the Luddites

If the share of income going to labor continues to decline, how should we respond?:

Sympathy for the Luddites, by Paul Krugman, Commentary, NY Times: In 1786, the cloth workers of Leeds, a wool-industry center in northern England, issued a protest against the growing use of “scribbling” machines, which were taking over a task formerly performed by skilled labor. “How are those men, thus thrown out of employ to provide for their families?” asked the petitioners. “And what are they to put their children apprentice to?”
Those weren’t foolish questions. Mechanization eventually ... led to a broad rise in British living standards. But it’s far from clear whether typical workers reaped any benefits during the early stages of the Industrial Revolution; many workers were clearly hurt. And often the workers hurt most were those who had, with effort, acquired valuable skills — only to find those skills suddenly devalued.
So are we living in another such era? ... The McKinsey Global Institute recently released a report on a dozen major new technologies that it considers likely to be “disruptive”... and ... some of the victims of disruption will be workers who are currently considered highly skilled...
So should workers simply be prepared to acquire new skills? The woolworkers of 18th-century Leeds addressed this issue back in 1786: “Who will maintain our families, whilst we undertake the arduous task” of learning a new trade? Also, they asked, what will happen if the new trade, in turn, gets devalued by further technological advance?
And the modern counterparts of those woolworkers might well ask further, what will happen to us if, like so many students, we go deep into debt to acquire the skills we’re told we need, only to learn that the economy no longer wants those skills?
Education, then, is no longer the answer to rising inequality, if it ever was (which I doubt).
So what is the answer? If the picture I’ve drawn is at all right, the only way we could have anything resembling a middle-class society — a society in which ordinary citizens have a reasonable assurance of maintaining a decent life as long as they work hard and play by the rules — would be by having a strong social safety net, one that guarantees not just health care but a minimum income, too. And with an ever-rising share of income going to capital rather than labor, that safety net would have to be paid for to an important extent via taxes on profits and/or investment income.
I can already hear conservatives shouting about the evils of “redistribution.” But what, exactly, would they propose instead?

Friday, June 07, 2013

Total Information Awareness

Dan Little:

Total information awareness?, Understanding Society: I'm finding myself increasingly distressed at this week's revelations about government surveillance of citizens' communications and Internet activity. First was the revelation in the Guardian of a wholesale FISA court order to Verizon to provide all customer "meta-data" for a three-month period -- and the clarification that this order is simply a renewal of orders that have been in place since 2007. (One would certainly assume that there are similar orders for other communications providers.) And commentators are now spelling out how comprehensive this data is about each of us -- who we call, who those people call, when, where, … This comprehensive data collection permits the mother of all social network analysis projects -- to reconstruct the widening circles of persons with whom person X is associated. This is its value from an intelligence point of view; but it is also a dark, brooding risk to the constitutional rights and liberties of all of us.
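The "widening circles" Little describes are just repeated neighborhood expansions on a call graph. A minimal sketch of the idea (the names and call records here are entirely hypothetical; a real analysis would run the same traversal over billions of records):

```python
from collections import deque

# Hypothetical call-metadata edges: (caller, callee) pairs.
calls = [("X", "A"), ("A", "B"), ("B", "C"), ("X", "D"), ("D", "E"), ("E", "F")]

# Build an undirected adjacency list: a call links both parties.
graph = {}
for a, b in calls:
    graph.setdefault(a, set()).add(b)
    graph.setdefault(b, set()).add(a)

def circle(person, hops):
    """Everyone reachable from `person` within `hops` calls, excluding the person."""
    seen, frontier = {person}, deque([(person, 0)])
    while frontier:
        node, depth = frontier.popleft()
        if depth == hops:
            continue  # don't expand past the requested radius
        for neighbor in graph.get(node, ()):
            if neighbor not in seen:
                seen.add(neighbor)
                frontier.append((neighbor, depth + 1))
    return seen - {person}

print(sorted(circle("X", 1)))  # direct contacts of X
print(sorted(circle("X", 2)))  # contacts of contacts as well
```

Each extra hop widens the circle, which is why a comprehensive metadata archive is so much more revealing than any single call record.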
Second is the even more shocking disclosure -- also in the Guardian -- of an NSA program called PRISM that claims (based on the secret powerpoint training document published by the Guardian) to have reached agreements with the major Internet companies to permit direct government access to their servers, without the intermediary of warrants and requests for specific information. (The companies have denied knowledge of such a program; but it's hard to see how the Guardian document could be a simple fake.) And the document claims that the program gives the intelligence agencies direct access to users' emails, videos, chats, search histories, and other forms of Internet activity.
Among the political rights that we hold most basic are the rights of political expression and association. It doesn't matter much if a government agency is able to work out the network graph of people with whom I am associated around the project of youth soccer in my neighborhood. But if I were an Occupy Wall Street organizer, I would be VERY concerned about the fact that government is able to work out the full graph of my associates, their associates, and times and place of communication. At the least this fact has a chilling effect on political organization and protest -- both of which are constitutionally protected rights of US citizens. At the worst it makes possible police intervention and suppression based on the "intelligence" that is gathered. And the activities of the FBI in the 1960s against legal Civil Rights organizations make it clear that agencies are fully capable of undertaking actions in excess of their legal mandate. For that matter, the rogue activities of an IRS office with respect to the tax-exempt status of conservative political organizations illustrate the same point in the same news cycle!
The whole point of a constitution is to express clearly and publicly what rights citizens have, and to place bright-line limits on the scope of government action. But the revelations of this week make one doubt whether a constitutional limitation has any meaning anymore. These data collection and surveillance programs are wrapped in tight secrecy -- providers are not permitted to make public the requests that have been presented to them. So the public has no legitimate way of knowing what kind of information collection, surveillance, and intelligence activity is being undertaken with respect to their activities. In the name of homeland security, the evidence says that government is prepared to transgress what we thought of as "rights" with abandon, and with massive force. (The NSA data center under construction in Utah gives some sense of the massiveness of these data collection efforts.)
We are assured by government spokespersons that appropriate safeguards are in place to ensure and preserve the constitutional rights of all of us. But there are two problems with those assurances, both having to do with secrecy. Citizens are not provided with any account by government about how these programs are designed to work, and what safeguards are incorporated. And citizens are prevented from knowing what the exercise and effects of these programs are -- by the prohibition against telecom providers of giving any public information about the nature of requests that are being made under these programs. So secrecy prevents the very possibility of citizen knowledge and believable judicial oversight. By design there is no transparency about these crucial new tools and data collection methods.
All of this makes one think that the science and technology of encryption is politically crucial in the Internet age, for preserving some of our most basic rights of legal political activity. Being able to securely encrypt one's communications so only the intended recipients can gain access to them sounds like a crucial right of self-protection against the surveillance state. And being able to anonymize one's location and IP address -- through services like TOR router systems -- also seems like an important ability that everyone ought to consider making use of. Voice services like Skype seem to be fully compromised -- Microsoft, the owner of Skype, was the first company to accept the PRISM program, according to the secret powerpoint. But perhaps new Internet-based voice technologies using "trust no one" encryption and TOR routers will return the balance to the user. Intelligence and law enforcement agencies sometimes suggest that only people with something to hide would use an anonymizer in their interactions on the Web. But given the MASSIVE personalized data collection that government is engaged in, it would seem that every citizen has an interest in preserving his or her privacy to whatever extent possible. Privacy is an important human value in general; and it is a crucial value when it comes to the exercise of our constitutional rights of expression and association.
Government has surely overstepped through creation of these programs of data collection and surveillance; and it is hard to see how to put the genie back in the bottle. One step would be the creation of much more stringent legal limits on the data collection capacity of agencies like NSA (and commercial agencies, for that matter). But how can we trust that those limits will be respected by agencies that are accustomed to working in the dark?

Thursday, May 30, 2013

'Labor Union Decline, Not Computerization, Main Cause of Rising Corporate Profits'

I haven't read this paper, so I can't say a lot about how much confidence to place in the results, but it did grab my attention (and I believe it's in one of the top journals for sociology):

Labor union decline, not computerization, main cause of rising corporate profits, EurekAlert: A new study suggests that the decline of labor unions, partly as an outcome of computerization, is the main reason why U.S. corporate profits have surged as a share of national income while workers' wages and other compensation have declined.
The study, "The Capitalist Machine: Computerization, Workers' Power, and the Decline in Labor's Share within U.S. Industries," which appears in the June issue of the American Sociological Review, explores an important dimension of economic inequality...
Tali Kristal, an assistant professor of sociology at the University of Haifa in Israel ... found that from 1979 through 2007, labor's share of national income in the U.S. private sector decreased by six percentage points. This means that if labor's share had stayed at its 1979 level (about 64 percent of national income), the 120 million American workers employed in the private sector in 2007 would have received as a group an additional $600 billion, or an average of more than $5,000 per worker, Kristal said.
"However, this huge amount of money did not go to the workers," Kristal said. "Instead, it went to corporate profits, mostly benefiting very wealthy individuals."
The question is: why did this happen?
"Some economists contend that computerization is the primary cause and that it has increased the productivity of machines and skilled workers, prompting firms to reduce their overall demand for labor, which resulted in the rise of corporate profits at the expense of workers' compensation," Kristal said. "But, if that were the case,... then labor's share should have declined in all economic sectors, reflecting the fact that computerization has occurred across the board in the past 30 to 40 years."
This is not the case, however... "It was highly unionized industries — construction, manufacturing, and transportation — that saw a large decline in labor's share of income," Kristal said. "By contrast, in the lightly unionized industries of trade, finance, and services, workers' share stayed relatively constant or even increased. So, what we have is a large decrease in labor's share of income and a significant increase in capitalists' share in industries where unionization declined, and hardly any change in industries where unions never had much of a presence. This suggests that waning unionization, which led to the erosion of rank-and file workers' bargaining power, was the main force behind the decline in labor's share of national income."
In addition to the erosion of labor unions, Kristal found that rising unemployment as well as increasing imports from less-developed countries contributed to the decline in labor's share.
"All of these factors placed U.S. workers in a disadvantageous bargaining position versus their employers," said Kristal...

Saturday, May 18, 2013

Bernanke: Economic Prospects for the Long Run

Chairman Ben S. Bernanke is an optimist when it comes to our long-run economic prospects (i.e. he does not endorse the notion that productivity is slowing). I'm with him. (This is a graduation speech Bernanke gave at Bard College at Simon's Rock, Great Barrington, Massachusetts):

Economic Prospects for the Long Run: Let me start by congratulating the graduates and their parents. The word "graduate" comes from the Latin word for "step." Graduation from college is only one step on a journey, but it is an important one and well worth celebrating.
I think everyone here appreciates what a special privilege each of you has enjoyed in attending a unique institution like Simon's Rock. It is, to my knowledge, the only "early college" in the United States; many of you came here after the 10th or 11th grade in search of a different educational experience. And with only about 400 students on campus, I am sure each of you has felt yourself to be part of a close-knit community. Most important, though, you have completed a curriculum that emphasizes creativity and independent critical thinking, habits of mind that I am sure will stay with you.
What's so important about creativity and critical thinking? There are many answers. I am an economist, so I will answer by talking first about our economic future--or your economic future, I should say, because each of you will have many years, I hope, to contribute to and benefit from an increasingly sophisticated, complex, and globalized economy. My emphasis today will be on prospects for the long run. In particular, I will be looking beyond the very real challenges of economic recovery that we face today--challenges that I have every confidence we will overcome--to speak, for a change, about economic growth as measured in decades, not months or quarters.
Many factors affect the development of the economy, notably among them a nation's economic and political institutions, but over long periods probably the most important factor is the pace of scientific and technological progress. Between the days of the Roman Empire and when the Industrial Revolution took hold in Europe, the standard of living of the average person throughout most of the world changed little from generation to generation. For centuries, many, if not most, people produced much of what they and their families consumed and never traveled far from where they were born. By the mid-1700s, however, growing scientific and technical knowledge was beginning to find commercial uses. Since then, according to standard accounts, the world has experienced at least three major waves of technological innovation and its application.

The first wave drove the growth of the early industrial era, which lasted from the mid-1700s to the mid-1800s. This period saw the invention of steam engines, cotton-spinning machines, and railroads. These innovations, by introducing mechanization, specialization, and mass production, fundamentally changed how and where goods were produced and, in the process, greatly increased the productivity of workers and reduced the cost of basic consumer goods.

The second extended wave of invention coincided with the modern industrial era, which lasted from the mid-1800s well into the years after World War II. This era featured multiple innovations that radically changed everyday life, such as indoor plumbing, the harnessing of electricity for use in homes and factories, the internal combustion engine, antibiotics, powered flight, telephones, radio, television, and many more.
The third era, whose roots go back at least to the 1940s but which began to enter the popular consciousness in the 1970s and 1980s, is defined by the information technology (IT) revolution, as well as fields like biotechnology that improvements in computing helped make possible. Of course, the IT revolution is still going on and shaping our world today.
Now here's a question--in fact, a key question, I imagine, from your perspective. What does the future hold for the working lives of today's graduates? The economic implications of the first two waves of innovation, from the steam engine to the Boeing 747, were enormous. These waves vastly expanded the range of available products and the efficiency with which they could be produced. Indeed, according to the best available data, output per person in the United States increased by approximately 30 times between 1700 and 1970 or so, growth that has resulted in multiple transformations of our economy and society.1 History suggests that economic prospects during the coming decades depend on whether the most recent revolution, the IT revolution, has economic effects of similar scale and scope as the previous two. But will it?

Continue reading "Bernanke: Economic Prospects for the Long Run" »

Monday, April 29, 2013

'The Welfare Queen of Denmark'

[Listening to Nouriel Roubini's pessimism about the future during the lunch panel as I do this -- the video of the panel discussing the state of the global economy should be available later today.]

Nancy Folbre objects to the "gendered language" used in the debate over social insurance programs, and to the conclusion that "cuddly" capitalism is bad for innovation:

The Welfare Queen of Denmark, by Nancy Folbre, Commentary, NY Times: ...In short, the Danish record offers no support for the social-spending-hurts-growth position. That doesn’t mean that some economists can’t figure out a way to make that argument anyway. For instance, Daron Acemoglu, James A. Robinson and Thierry Verdier have devised a theoretical model to show why what they term “cuddly” capitalism of the Danish sort may just be free-riding on the “cutthroat” capitalism of the United States sort.
The model posits that cutthroat levels of inequality, as in the United States, promote high levels of technological innovation. The benefits of these innovations cross national borders to help Danes and other Scandinavians achieve growth. In other words, they may be able to get away with being “cuddly,” but some country (like the United States) just has to be tough enough to reward risk-taking, even if it leads to hurt feelings.
The gendered language deployed in this model echoes a general tendency to view social spending in feminine terms: women like to cuddle and are often described as more risk-averse than men. It’s not uncommon to see the term “nanny state” used as a synonym for the welfare state.
Call the Scandinavians sissies if you like, but plenty of evidence in the latest World Competitiveness Report testifies to high levels of overall innovation there — as you might expect in economies even more export-oriented than our own. Danes are world leaders in renewable energy technology, especially wind power. ...

As I've noted before, "an enhanced safety net -- a backup if things go wrong -- can give people the security they need to take a chance on pursuing an innovative idea that might die otherwise, or opening a small business. So it may be that an expanded social safety net encourages innovation."

Wednesday, March 27, 2013

'Do Intellectual Property Rights on Existing Technologies Hinder Subsequent Innovation?'

Out and about today, so quickly:

Do intellectual property rights on existing technologies hinder subsequent innovation?, EurekAlert: A recent study (Journal of Political Economy 121:1 February 2013) suggests that some types of intellectual property rights discourage subsequent scientific research.
"The goal of intellectual property rights – such as the patent system – is to provide incentives for the development of new technologies. However, in recent years many have expressed concerns that patents may be impeding innovation if patents on existing technologies hinder subsequent innovation," said Heidi Williams, author of the study. "We currently have very little empirical evidence on whether this is a problem in practice."
Williams investigated the sequencing of the human genome by the public Human Genome Project and the private firm Celera. Genes sequenced first by Celera were covered by a contract law-based form of intellectual property, whereas genes sequenced first by the Human Genome Project were placed in the public domain. Although Celera's intellectual property lasted a maximum of two years, it enabled Celera to sell its data for substantial fees and required firms to negotiate licensing agreements with Celera for any resulting commercial discoveries. ...
Williams finds a persistent 20-30 percent reduction in subsequent scientific research and product development for the genes covered by Celera's intellectual property.
"My take-away from this evidence is that – at least in some contexts – intellectual property can have substantial costs in terms of hindering subsequent innovation," said Williams. "The fact that these costs were – in this context – 'large enough to care about' motivates wanting to better understand whether alternative policy tools could be used to achieve a better outcome. It isn't clear that they can, although economists such as Michael Kremer have proposed some ideas on how they might. ..."

Thursday, March 21, 2013

'Inequality Rising and Permanent Over Past Two Decades'

A new Brookings paper finds that most of the increase in inequality in recent decades is permanent:

Inequality Rising and Permanent Over Past Two Decades: In “Rising Inequality: Transitory or Permanent: New Evidence from a Panel of U.S. Tax Returns” (PDF), Vasia Panousi and Ivan Vidangos of the Federal Reserve Board, Shanti Ramnath of the U.S. Treasury Department, Jason DeBacker of Middle Tennessee State and Bradley Heim of Indiana University use new data to closely examine inequality, finding an increase in “permanent inequality” -- the advantaged becoming permanently better off while the disadvantaged become permanently worse off. ...

[Listen to BPEA Co-editor Justin Wolfers discuss this paper: Inequality Rising and Permanent Over Past Two Decades (1:55)]

Using a large panel of income data from U.S. federal tax returns for the period 1987-2009, the authors show that for men’s labor earnings, the increase in inequality was entirely permanent (100 percent), while for total household income, roughly three-quarters of the increase in inequality was permanent. They estimate that the permanent variance for men’s earnings roughly doubled in the 20 years between 1987 and 2009, while the permanent variance of total household income increased by about 50 percent over the same period.

Looking at the impact of tax policy on inequality, the paper finds that although the U.S. federal tax system is progressive, and has provided some help in mitigating the increase in income inequality over the sample period, it has not significantly altered the broad upward trend in inequality. All told, the results suggest that rising income inequality will likely lead to greater disparity in families’ well-being and reduce social welfare in the long run.

“The distinction between permanent and transitory inequality is important for various reasons. First, it is useful in evaluating the proposed explanations for the documented increase in annual cross-sectional inequality. For example, if rising inequality reflects solely an increase in permanent inequality, then consistent explanations would include, for example, skill-biased technical change or long-lasting changes in firms’ compensation policies. By contrast, an increase in transitory inequality could reflect increases in income mobility, driven perhaps by greater flexibility among workers to switch jobs. Second, the distinction is useful because it informs the welfare evaluation of cross-sectional inequality increases. Specifically, lifetime income captures long-term available resources, and hence an increase in permanent inequality would reduce welfare according to most social welfare functions. By contrast, increasing transitory inequality would have less of an effect on welfare, especially in the absence of liquidity constraints restricting consumption smoothing,” they write.
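The permanent/transitory distinction the authors draw can be made concrete with a toy simulation. This is only an illustrative sketch, not the estimator used in the Brookings paper: it assumes income residuals take the simple form y_it = p_i + e_it, where p_i is a fixed permanent component and e_it is a transitory shock uncorrelated across years. Under that assumption, the covariance of a person's income across distant years recovers the permanent variance, because the transitory shocks wash out:

```python
import random

random.seed(0)

# Toy panel: y_it = p_i (permanent component) + e_it (transitory shock).
# Variances below are illustrative choices, not estimates from the paper.
N, T = 10_000, 10
sigma_p, sigma_e = 0.5, 0.4  # so var_p = 0.25, var_e = 0.16

panel = []
for _ in range(N):
    p = random.gauss(0, sigma_p)
    panel.append([p + random.gauss(0, sigma_e) for _ in range(T)])

def cov(xs, ys):
    """Population covariance of two equal-length samples."""
    mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
    return sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / len(xs)

first = [row[0] for row in panel]   # earnings in year 1
last = [row[-1] for row in panel]   # earnings in year 10

var_total = cov(first, first)  # cross-sectional variance: var_p + var_e
var_perm = cov(first, last)    # long-lag autocovariance: var_p only
var_trans = var_total - var_perm

print(f"total {var_total:.3f}, permanent {var_perm:.3f}, transitory {var_trans:.3f}")
```

The point of the exercise is the welfare argument in the quoted paragraph: a rise in `var_perm` means lifetime resources are diverging across people, while a rise in `var_trans` alone could just reflect year-to-year churn that smoothing can absorb.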

There's been a big debate/fight over whether the increase in inequality is mostly due to technological change or to changes in the rules of the game (e.g., institutional and political changes that helped bring about the demise of unions). Above, this is captured as "skill-biased technical change or long-lasting changes in firms' compensation policies." I think both are at work, and that the two explanations interact: technological change that allows production to be moved to other countries, in fact or merely in threat, works against unions and shifts the balance of political power, and that in turn can lead to changes in the laws, rules, and regulations that unions need to be effective.

In any case, changing this long-run trend is one of the most important social issues that we face.

Ed Leamer Argues the Unemployment Problem is Mostly Structural (I Disagree, and Policy Can Help in Any Case)

I disagree with the claim that our unemployment problem is mostly structural (and hence there is nothing that monetary or fiscal policy can do about it) and I've presented lots of evidence supporting the view that there is a large cyclical component (e.g., latest). But not everyone agrees. Here's Ed Leamer arguing for the structural interpretation:

Here's how I see it. As I said, I am convinced by the evidence showing there's a large cyclical component to the unemployment problem, but I could be wrong -- and so could Leamer and others (I think they are). So which is the bigger error: not helping struggling households who could be helped, or trying to extend a hand when it's not going to do much good? I'd rather make the mistake of trying to help when it isn't necessary than leave people stranded when help is possible.

But suppose unemployment is, in fact, mostly a structural problem. If we could help to overcome the "slow uptake" problem after recessions by (1) providing public employment that bridges the gap until private sector jobs are available, (2) keeping people connected to the labor market and reducing the likelihood they'll drop out, go on disability, or choose some other socially costly alternative, (3) enhancing our long-run growth prospects, (4) saving ourselves money in the long run, and (5) accomplishing this with policies directed at "supply-side" problems that help with demand at the same time, shouldn't we do it?

Infrastructure spending has these features. We can delay basic maintenance for a while, much as a household or business can defer maintenance on a car or delivery vehicles, but there comes a point when the failure to do basic maintenance will cost us even more in the long run (change your oil now, or change your engine later). We have delayed investment in infrastructure long enough, and it's time to put people to work rebuilding for the future. These policies don't depend upon whether the problem is cyclical or structural: we have the need -- the benefits exceed the costs (which are unusually low due to the recession) -- and there are millions of people who want to work but cannot find employment. Why not put them to work doing something productive?

Sunday, February 17, 2013

We Must Make the New Machines

Harvard's Ricardo Hausmann is interviewed in the MIT Technology Review:

You Must Make the New Machines, by Antonio Regalado: ...Why has the number of American manufacturing jobs been decreasing so quickly?
The fundamental reason is that productivity in manufacturing has been rising rapidly and demand for manufactured products has been growing more slowly. To supply the stuff that people want requires fewer jobs.
And then, manufacturing is becoming feasible in more parts of the world. There is more competition, including from countries with much lower wages. As they emulate American production, they take market share.
What’s the best manufacturing strategy for the U.S. in that situation?
It’s certainly not playing defense and trying to save jobs. The U.S. has very, very high wages compared to other countries. Yet it also has a comparative advantage, which is deep knowledge, high R&D intensity, and the best science and technology base in the world.
The step that makes the most sense for the U.S. is to become the producer of the machinery that will power the next global manufacturing revolution. That is where the most complex and sophisticated products are, and that is the work that can pay higher wages.
What kind of revolution are you talking about?
My guess is that developments around information technology, 3-D printing, and networks will allow for a redesign of manufacturing. The world will be massively investing in it. The U.S. is well positioned to be the source of those machines. It can only be rivaled by Germany and Japan. ...
The U.S. ... should look to ... pharmaceuticals, chemicals, and machinery. It’s very hard to get into those. Very few countries are in that game. ...
If you look broadly at the U.S...., the country is super-competitive at agriculture and the industries that support it, like farm machinery, agrochemicals, and genetically modified seeds. It is strong in aerospace with Boeing, GE, Northrop Grumman, and Pratt & Whitney. It is a leader in pharmaceuticals and medical equipment, and it is the clear leader in information technology and the Internet. New industries often arise from the combination of capabilities...
How well is the U.S. doing in staying competitive?
For a while now, the U.S. has been much less focused on being competitive than most other places are. Americans have the feeling they are born to win, and if they don’t, someone else is cheating. The U.S. has many self-inflicted wounds. It has an infrastructure that’s increasingly lousy and a corporate tax rate higher than most countries’. But the most important [problem] is immigration policy. It’s been a real disaster, preventing the attraction and retention of the high-skilled people who come here to study and then don’t stay.