Category Archive for: Macroeconomics

Wednesday, April 06, 2016

How Network Effects Hurt Economies

“Networks and the Macroeconomy: An Empirical Exploration,” by Daron Acemoglu, Ufuk Akcigit, and William Kerr (this will be published in the NBER Macroeconomics Annual):

How Network Effects Hurt Economies, by Peter Dizikes, MIT News Office: When large-scale economic struggles hit a region, a country, or even a continent, the explanations tend to be big in nature as well.
Macroeconomists — who study large economic phenomena — often look for sweeping explanations of what has gone wrong, such as declines in productivity, consumer demand, or investor confidence, or significant changes in monetary policy.
But what if large-scale economic slumps can be traced to declines in relatively narrow industrial sectors? A newly published study co-authored by an MIT economist provides evidence that economic problems may often have smaller points of origin and then spread as part of a network effect.
“Relatively small shocks can become magnified and then become shocks you have to contend with [on a large scale],” says MIT economist Daron Acemoglu, one of the authors of a paper detailing the research.
The findings run counter to “real business cycle theory,” which became popular in the 1980s and holds that smaller, industry-specific effects tend to get swamped by larger, economy-wide trends.
More precisely, Acemoglu and his colleagues have found cases where industry-specific problems produce declines in production across the U.S. economy as a whole that are six times the size of the initial shock. For example, for every dollar of value-added growth lost in the manufacturing industries because of import competition from China, six dollars of value-added growth were lost in the U.S. economy as a whole.
The researchers also examined four different types of economic shocks to the U.S. economy that occurred over the years 1991-2009, and quantified the extent to which those problems spread “upstream” or “downstream” of the central industry in question — that is, whether the network effects more strongly hurt industrial suppliers or businesses that sell products and provide services to consumers.
All told, the researchers state in the paper, “Our results suggest that the transmission of various different types of shocks through economic networks and industry interlinkages could have first-order implications for the macroeconomy.” ...
Upstream or downstream
Acemoglu, Akcigit, and Kerr used manufacturing data from the National Bureau of Economic Research, and industry-specific data from the Bureau of Economic Analysis, to examine four economic shocks hitting the U.S. economy during that 1991-2009 period. Those were: the impact of import competition from China on U.S. manufacturing; changes in federal government spending, which affect areas such as defense manufacturing; changes in Total Factor Productivity; and variation in levels of patents coming from foreign industry.
As noted, the network effect of manufacturing competition with China made the overall economic shock about six times as great as it was to manufacturing alone. (This research built on previously published work by economists David Autor of MIT, David Dorn of the University of Zurich, and Gordon Hanson of the University of California at San Diego, sometimes in collaboration with Acemoglu and MIT graduate student Brendan Price.)
In studying changes in the levels of federal spending after 1992, the researchers found a network effect about three to five times as large as the effect on directly affected firms alone.
The decline in Total Factor Productivity constituted a smaller economic shock but one with a larger network effect, of more than 15 times the initial impact. In the case of increased foreign patenting (another way of looking at corporate productivity), the researchers found a network effect similar to that of Total Factor Productivity.
The first two of these areas constitute demand-side shocks, affecting consumer demand for the products in question. The last two are supply-side shocks, affecting firms’ ability to be good at what they do.
One of the key findings of the study, which confirms and builds on existing theory, is that demand-side shocks spread almost exclusively “upstream” in economic networks, and supply-side shocks spread almost exclusively “downstream.” To see why, Acemoglu suggests, consider an auto manufacturer, which has parts suppliers upstream and is linked with auto dealers, repair shops, and other businesses downstream.
When auto demand drops, “It’s the suppliers [upstream] that get affected,” Acemoglu explains. “You’re going to cut the production of autos, and you buy less of each of the inputs,” or supplies.
Now suppose the supply changes, perhaps due to an increase in manufacturing efficiency, which makes cars cheaper. In that case, Acemoglu adds, “People who use auto as inputs will buy more of them” — picture a delivery company — “so that shock will get transmitted to the downstream industries.”
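To make the two directions concrete, here is a toy numeric sketch in Python (my illustration with invented numbers, not the authors' model or data):

    # Toy illustration of the two propagation directions; all numbers
    # invented. Input purchases are a constant share of the buyer's
    # revenue, as under a Cobb-Douglas technology.

    PARTS_SHARE = 0.4  # assumed share of automaker revenue spent on parts

    # Demand-side shock travels upstream: auto revenue falls 10 percent,
    # so orders to parts suppliers fall in proportion.
    auto_revenue = 100.0
    print(PARTS_SHARE * auto_revenue)        # 40.0, parts orders before
    print(PARTS_SHARE * auto_revenue * 0.9)  # 36.0, parts orders after

    # Supply-side shock travels downstream: a productivity gain makes
    # cars cheaper, so firms using autos as inputs buy more of them.
    vehicle_budget, old_price, new_price = 60.0, 20.0, 18.0
    print(vehicle_budget / old_price)  # 3.0 vehicles before
    print(vehicle_budget / new_price)  # about 3.33 vehicles after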
To be sure, it is widely understood that the auto industry, like almost every other industry, is situated within a larger economic network. Yet estimating the spillover effects of struggles within any given industry, in the quantitative form of the current study, is rarely done.
“Given the importance of this, it’s surprising how scant the evidence is,” Acemoglu says. ...
This could have policy implications: Proponents of government investment, such as the so-called stimulus bill of 2009, the American Recovery and Reinvestment Act, have contended that government spending creates a “multiplier effect” in terms of growth. Opponents of such legislation sometimes assert that government spending crowds out private investment and thus does not generate more growth than would otherwise occur. In theory, a more granular understanding of these network effects could help describe and define what a multiplier effect is, and in which industrial areas it may be the most pronounced. ...

Saturday, March 26, 2016

Reflections on Macroeconomics Then and Now

Stanley Fischer:

Reflections on Macroeconomics Then and Now: I am grateful to the National Association for Business Economics (NABE) for conferring the fourth annual NABE Paul A. Volcker Lifetime Achievement Award for Economic Policy on me, thereby allowing me the honor of following in the footsteps of Paul Volcker, Jean-Claude Trichet, and Alice Rivlin.1 The honor of receiving the award is enhanced by its bearing the name of Paul Volcker, a model citizen and public servant, and a giant in every sense among central bankers.

One thinks of many things on an occasion such as this one. My mind goes back first to growing up in a very small town in Zambia, then Northern Rhodesia, and to the surprise and delight my parents would have felt at seeing me standing where I am now. They would have been even more delighted that my girlfriend, Rhoda, whom I met when my parents moved to a bigger town in Zimbabwe, and I have been happily married for 50 years. But that is not the story I will tell today. Rather, I want to talk about our field, macroeconomics, and some of the lessons we have learned in the course of the last 55 years--and I say 55 years, because in 1961, at the end of my school years, on the advice of a friend, I read Keynes's General Theory for the first time.

Did I understand it? Certainly not. Was I captivated by it? Certainly, though "captured" is a more appropriate word than "captivated." Does it remain relevant? Certainly. Just a week ago I took it off the bookshelf to read parts of chapter 23, "Notes on Mercantilism, the Usury Laws, Stamped Money and Theories of Under-Consumption." Today that chapter would be headed "Protectionism, the Zero Lower Bound, and Secular Stagnation," with the importance of usury laws having diminished since 1936.

There is an old joke about our field--not the one about the one-handed economist, nor the one about "assume you have a can opener," nor the one that ends, "If I were you, I wouldn't start from here." Rather it's the one about the Ph.D. economist who returns to his university for his class's 50th reunion. He asks if he can see the most recent Ph.D. generals exam. After a while it is brought to him. He reads it carefully, looking perplexed, and then says, "But this is exactly the same as the exam I wrote over 50 years ago." "Ah yes," says the professor. "It is the same, but all the answers are different."

Is that really the case? Not really, though it is true to some extent in the realm of policy. To discuss the question of whether the answers to the questions of how to deal with macroeconomic policy problems have changed markedly over the past half-century or so, I will start by briefly sketching the structure of a basic macro model. The building blocks of this model are similar to those used in many macro models, including FRB/US, the Fed staff's large-scale model, and a variety of DSGE (dynamic stochastic general equilibrium) models used at the Fed and other central banks and by academic researchers.

The structure of the model starts with the standard textbook equation for aggregate demand for domestically produced goods, namely:2

  1. AD = C + I + G + NX;
  2. Next is the wage-price block, which is based on a wage or price Phillips curve. Okun's law is included to make the transition between output and employment;
  3. Monetary policy is described by a money supply or interest rate rule;
  4. The credit markets and financial intermediation are built off links between the policy interest rate and the rates of return on, and/or demand and supply functions for, other assets;
  5. The balance of payments and the exchange rate enter through the balance of payments identity, namely that the current account surplus must be equal to the capital account deficit, corrected for official intervention;
  6. Dynamics of stocks: There are dynamic equations for the capital stock, the stock of government debt, and the external debt.

When I was an undergraduate at the London School of Economics (LSE) between 1962 and 1965, we learned the IS-LM model, which combined the aggregate demand equation (1) with the money market equilibrium condition set out in (3). That was the basic understanding of the Keynesian model as crystallized by John Hicks, Franco Modigliani, and others, in which it was easy to add detail to the demand functions for private-sector consumption, C; for investment, I; for government spending, G; and for net exports. The Keynesian emphasis on aggregate demand and its determinants is one of the basic innovations of the Keynesian revolution, and one that makes it far easier to understand and explain what factors are determining output and employment.
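A minimal numerical rendering of that IS-LM structure, in Python (the linear functional forms and every parameter value here are illustrative assumptions of mine, not Fischer's):

    # IS-LM sketch: two linear equations in output Y and interest rate r.
    #   IS: Y = a + b*(Y - T) + e - d*r + G + NX
    #   LM: M/P = k*Y - h*r
    a, b = 200.0, 0.6    # consumption: intercept, marginal propensity
    e, d = 300.0, 40.0   # investment: intercept, interest sensitivity
    k, h = 0.5, 60.0     # money demand: income and interest sensitivities
    G, T, NX = 250.0, 200.0, 50.0
    M_over_P = 400.0

    # Substitute the LM curve r = (k*Y - M/P)/h into IS and solve for Y.
    A = a - b * T + e + G + NX
    Y = (h * A + d * M_over_P) / (h * (1 - b) + d * k)
    r = (k * Y - M_over_P) / h
    print(round(Y, 1), round(r, 2))  # equilibrium output and interest rate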

Continuing down the list, on price and wage dynamics, the Phillips curve has flattened somewhat since the 1950s and 1960s.3 Further, the role of expectations of inflation in the Phillips curve has been developed far beyond what was understood when A.W. Phillips--who was a New Zealander, an LSE faculty member, and a statistician and former engineer--discovered what later became the Phillips curve. The difference between the short- and long-run Phillips curves, which is now a staple of textbooks, was developed by Milton Friedman and Edmund Phelps, and the effect of making expectations rational or model consistent was emphasized by Robert Lucas, whose islands model provided an imperfect information reason for a nonvertical short-run Phillips curve. In Okun's law, the Okun coefficient--the coefficient specifying how much a change in the unemployment rate affects output--appears to have declined over time. So has the trend rate of productivity growth, which is a critical determinant of future levels of per capita income.
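As a small sketch of that wage-price block in Python, combining an expectations-augmented Phillips curve with Okun's law (the coefficient values are illustrative assumptions, not estimates):

    # Expectations-augmented Phillips curve and Okun's law, illustrative only.
    KAPPA = 0.3   # Phillips curve slope; a "flatter" curve means smaller KAPPA
    U_STAR = 4.8  # natural rate of unemployment, percent
    OKUN = 2.0    # Okun coefficient: output gap per point of unemployment

    def inflation(expected_inflation, u):
        """pi = pi_e + KAPPA*(u* - u); once pi_e = pi, the long-run curve is vertical at u*."""
        return expected_inflation + KAPPA * (U_STAR - u)

    def output_gap(u):
        """Okun's law: gap (percent of potential) = -OKUN*(u - u*)."""
        return -OKUN * (u - U_STAR)

    print(inflation(2.0, 4.0))  # 2.24: a tight labor market nudges inflation up
    print(output_gap(6.0))      # -2.4: a slack economy runs below potential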

In (3), the monetary equilibrium condition, the monetary policy decision was typically represented by the money stock at the LSE and perhaps also at the Massachusetts Institute of Technology (MIT) after the Keynesian revolution (after all, "L" represents the liquidity preference function and "M" the supply of money); now the money supply rule is replaced by an interest-rate setting rule, for instance a reaction function of some form, or by a calculated "optimal" policy based on a loss function.
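Fischer does not commit to a particular reaction function; the Taylor (1993) rule is the textbook example of such an interest-rate setting rule, sketched here with Taylor's original illustrative coefficients (not a description of actual Fed policy):

    # Taylor (1993)-type interest rate rule, one example of a reaction function.
    R_STAR, PI_STAR = 2.0, 2.0  # equilibrium real rate and inflation target, percent

    def policy_rate(inflation, output_gap):
        """i = r* + pi + 0.5*(pi - pi*) + 0.5*gap"""
        return R_STAR + inflation + 0.5 * (inflation - PI_STAR) + 0.5 * output_gap

    print(policy_rate(inflation=1.0, output_gap=-2.0))  # 1.5 percent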

The development of the flexible inflation-targeting approach to monetary policy is one of the major achievements of modern macroeconomics. Flexible inflation targeting allows for flexibility in the speed with which the monetary authority plans on returning to the target inflation rate, and is thereby close to the dual mandate that the law assigns to the Fed.

A great deal of progress has been made in developing the credit and financial intermediation block. As early as the 1960s, James Tobin, Milton Friedman, and the team of Karl Brunner and Allan Meltzer each wrote out models with more fully explicated financial sectors, based on demand functions for assets other than money. Later the demand functions were often replaced by pricing equations derived from the capital asset pricing model. Researchers at the Fed have been bold enough to add estimated term and risk premiums to the determination of the returns on some assets.4 They have concluded, inter alia, that the arguments we used to make about how easy it would be to measure expected inflation if the government would introduce inflation-indexed bonds failed to take into account that returns on bonds are affected by liquidity and risk premiums. This means that one of the major benefits that were expected from the introduction of inflation-indexed bonds (Treasury Inflation-Protected Securities, generally called TIPS), namely that they would provide a quick and reliable measure of inflation expectations, has not been borne out, and that we still have to struggle to get reasonable estimates of expected inflation.

As students, we included NX, net exports, in the aggregate demand equation, but we did not generally solve for the exchange rate, possibly because the exchange rate was typically fixed. Later, in 1976, Rudi Dornbusch inaugurated modern international macroeconomics--and here I'm quoting from a speech by Ken Rogoff--in his famous overshooting model.5 As globalization of both goods and asset markets intensified over the next 40 years, the international aspects of trade in goods and assets occupied an increasingly important role in the economies of virtually all countries, not least the United States, and in macroeconomics.

At the LSE, we took a course on the British economy from Frank Paish, whose lectures consisted of a series of charts, accompanied by narrative from the professor. He made a strong impression on me in a lecture in 1963, in which he said, "You see, it (the balance of payments deficit) goes up and it goes down, and it is clear that we are moving toward a balance of payments crisis in 1964." I waited and I watched, and the crisis appeared on schedule, as predicted. But Paish also warned us that forecasting was difficult, and gave us the advice "Never look back at your forecasts--you may lose your nerve." I pass that wisdom on to those of you who need it.

I remember also my excitement at being told by a friend in a more senior class about the existence of econometric models of the entire economy. It was a wonderful moment. I understood that economic policy would from then on be easy: All that was necessary was to feed the data into the model and work out at what level to set the policy parameters. Unfortunately, it hasn't worked out that way. On the use of econometric models, I think often of something Paul Samuelson once said: "I'd rather have Bob Solow's views than the predictions of a model. But I'd rather have Solow with a model than without one."

We learned a lot at the LSE. But wonderful as it was to be in London, and to meet people from all over the world for the first time, and to be able to travel to Europe and even to the Soviet Union with a student group, and to ski for the first time in my life in Austria, it gradually became clear to me that the center of the academic economics profession was not in London or Oxford or Cambridge, but in the United States.

There was then the delicate business of applying to graduate school. There was a strong Chicago tendency among many of the lecturers at the LSE, but I wanted to go to MIT. When asked why, I gave a simple answer: "Samuelson and Solow." Fortunately, I got into MIT and had the opportunity of getting to know Samuelson and Solow and other great professors. And I also met the many outstanding students who were there at the time, among them Robert Merton. I took courses from Samuelson and Solow and other MIT stars, and I wrote my thesis under the guidance of Paul Samuelson and Frank Fisher. From there, my first job was at the University of Chicago--and I understood that I was very lucky to have been able to learn from the great economists at both MIT and Chicago. Among the many things I learned at Chicago was a Milton Friedman saying: "Man may not be rational, but he's a great rationalizer," which is a quote that often comes to mind when listening to stock market analysts.

After four years at Chicago, I returned to the MIT Department of Economics, and thought that I would never leave--even more so when MIT succeeded in persuading Rudi Dornbusch, whom I had met when he was a student at Chicago, to move to MIT--thus giving him too the benefit of having learned his economics at both Chicago and MIT, and giving MIT the pleasure and benefit of having added a superb economist and human being to the collection of such people already present.

MIT was still heavily involved in developing growth theory at the time I was a Ph.D. student there, from 1966 to 1969. We students were made aware of Kaldor's stylized facts about the process of growth, presented in his 1957 article "A Model of Economic Growth." They were:

  1. The shares of national income received by labor and capital are roughly constant over long periods of time.
  2. The rate of growth of the capital stock per worker is roughly constant over long periods of time.
  3. The rate of growth of output per worker is roughly constant over long periods of time.
  4. The capital/output ratio is roughly constant over long periods of time.
  5. The rate of return on investment is roughly constant over long periods of time.
  6. The real wage grows over time.

Well, that was then, and many of the problems we face in our economy now relate to the changes in the stylized facts about the behavior of the economy: Every one of Kaldor's stylized facts is no longer true, and unfortunately the changes are mostly in a direction that complicates the formulation of economic policy.6

While the basic approach outlined so far remains valid, and can be used to address many macroeconomic policy issues, I would like briefly to take up several topics in more detail. Some of them are issues that have remained central to the macroeconomic agenda over the past 50 years, some have to my regret fallen off the agenda, and others are new to the agenda.

  1. Inflation and unemployment: Estimated Phillips curves appear to be flatter than they were estimated to be many years ago--in terms of the textbooks, Phillips curves appear to be closer to what used to be called the Keynesian case (flat Phillips curve) than to the classical case (vertical Phillips curve). Since inflation in the U.S. economy is now below our 2 percent target, and since unemployment is in the vicinity of full employment, it is sometimes argued that the link between unemployment and inflation must have been broken. I don't believe that. Rather, the link has never been very strong, but it exists, and we may well at present be seeing the first stirrings of an increase in the inflation rate--something that we would like to happen.
  2. Productivity and growth: The rate of productivity growth in the United States and in much of the world has fallen dramatically in the past 20 years. The table below shows calculated rates of annual productivity growth for the United States over three periods: 1952 to 1973; 1974 to 2007; and the most recent period, 2008 to 2015. After having been 3 percent and 2.1 percent in the first two periods, the annual rate of productivity growth has fallen to 1.2 percent in the period since the start of the global financial crisis.

    Period       Annual productivity growth
    1952-1973    3.0 percent
    1974-2007    2.1 percent
    2008-2015    1.2 percent

    The right guide to thinking in this case is given by a famous Herbert Stein line: "The difference between a growth rate of 1 percent and 2 percent is 100 percent." Why? Productivity growth is a major determinant of long-term growth. At a 1 percent growth rate, it takes income 70 years to double; at a 2 percent growth rate, 35 years (the rule of 70; a short check appears at the end of this item). That is to say, with a per capita growth rate of 1 percent, it takes two generations for per capita income to double; at 2 percent, one generation. That is a massive difference, one that would very likely have severe consequences for the national mood, and possibly for economic policy. In short, there are few issues more important for the future of our economy, and those of every other country, than the rate of productivity growth.

    At this stage, we simply do not know what will happen to productivity growth. Robert Gordon of Northwestern University has just published an extremely interesting and pessimistic book that argues we will have to accept the fact that productivity will not grow in future at anything like the rates of the period before 1973. Others look around and see impressive changes in technology and cannot believe that productivity growth will not move back closer to the higher levels of yesteryear.7 A great deal of work is taking place to evaluate the data, but so far there is little evidence that data difficulties account for a significant part of the decline in productivity growth as calculated by the Bureau of Labor Statistics.8
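    The arithmetic behind Stein's line is the rule of 70: doubling time at g percent growth is roughly 70/g years, since ln 2 is about 0.69. A minimal check in Python:

        # Doubling time of income at constant growth rate g: ln(2) / ln(1 + g).
        import math

        def doubling_time(growth_rate_percent):
            g = growth_rate_percent / 100.0
            return math.log(2) / math.log(1 + g)

        print(round(doubling_time(1.0)))  # 70 years at 1 percent growth
        print(round(doubling_time(2.0)))  # 35 years at 2 percent growth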

  3. The ZLB and the effectiveness of monetary policy: From December 2008 to December 2015, the federal funds rate target set by the Fed was a range of 0 to 1/4 percent, a range of rates that was described as the ZLB (zero lower bound).9 Between December 2008 and December 2014, the Fed engaged in QE--quantitative easing--through a variety of programs. Empirical work done at the Fed and elsewhere suggests that QE worked in the sense that it reduced interest rates other than the federal funds rate, and particularly seems to have succeeded in driving down longer-term rates, which are the rates most relevant to spending decisions.

    Critics have argued that QE has gradually become less effective over the years, and should no longer be used. It is extremely difficult to appraise the effectiveness of a program all of whose parameters have been announced at the beginning of the program. But I regard it as significant with respect to the effectiveness of QE that the taper tantrum in 2013, apparently caused by a belief that the Fed was going to wind down its purchases sooner than expected, had a major effect on interest rates.

    More recently, critics have argued that QE, together with negative interest rates, is no longer effective in either Japan or in the euro zone. That case has not yet been empirically established, and I believe that central banks still have the capacity through QE and other measures to run expansionary monetary policies, even at the zero lower bound.

  4. The monetary-fiscal policy mix: There was once a great deal of work on the optimal monetary-fiscal policy mix. The topic was interesting and the analysis persuasive. Nonetheless the subject seems to be disappearing from the public dialogue; perhaps in ascendance is the notion that--except in extremis, as in 2009--activist fiscal policy should not be used at all. Certainly, it is easier for a central bank to change its policies than for a Treasury or Finance Ministry to do so, but it remains a pity that the fiscal lever seems to have been disabled.
  5. The financial sector: Carmen Reinhart and Ken Rogoff's book, This Time Is Different, must have been written largely before the start of the great financial crisis. I find their evidence that a recession accompanied by a financial crisis is likely to be much more serious than an ordinary recession persuasive, but the point remains contentious. Even in the case of the Great Recession, it is possible that the U.S. recession got a second wind when the euro-zone crisis worsened in 2011. But no one should forget the immensity of the financial crisis that the U.S. economy and the world went through following the bankruptcy of Lehman Brothers--and no one should forget that such things could happen again.

    The subsequent tightening of the financial regulatory system under the Dodd-Frank Act was essential, and the complaints about excessive regulation and excessive demands for banks to hold capital betray at best a very short memory. We, the official sector and particularly the regulatory authorities, do have an obligation to try to minimize the regulatory and other burdens placed on the private sector by the official sector--but we have a no less important obligation to try to prevent another financial crisis. And we should also remember that the shadow banking system played an important role in the propagation of the financial crisis, and endeavor to reduce the riskiness of that system.
  6. The economy and the price of oil: For some time, at least since the United States became an oil importer, it has been believed that a low price of oil is good for the economy. So when the price of oil began its descent below $100 a barrel, we kept looking for an oil-price-cut dividend. But that dividend has been hard to discern in the macroeconomic data. Part of the reason is that as a result of the rapid expansion of the production of oil from shale, total U.S. oil production had risen rapidly, and so a larger part of the economy was adversely affected by the decline in the price of oil. Another part is that investment in the equipment and structures needed for shale oil production had become an important component of aggregate U.S. investment, and that component began a rapid decline. For these reasons, although the United States has remained an oil importer, the decrease in the world price of oil had a mixed effect on U.S. gross domestic product. There is reason to believe that when the price of oil stabilizes, and U.S. shale oil production reaches its new equilibrium, the overall effect of the decline in the price of oil will be seen to have had a positive effect on aggregate demand in the United States, since lower energy prices are providing a noticeable boost to the real incomes of households.
  7. Secular stagnation: During World War II in the United States, many economists feared that at the end of the war, the economy would return to high pre-war levels of unemployment--because with the end of the war, demobilization, and the massive reduction that would take place in the defense budget, there would not be enough demand to maintain full employment.

    Thus was born or renewed the concept of secular stagnation--the view that the economy could find itself permanently in a situation of low demand, less than full employment, and low growth.10 That is not what happened after World War II, and the thought of secular stagnation was correspondingly laid aside, in part because of the growing confidence that intelligent economic policies--fiscal and monetary--could be relied on to help keep the economy at full employment with a reasonable growth rate.

    Recently, Larry Summers has forcefully restated the secular stagnation hypothesis, and argued that it accounts for the current slowness of economic growth in the United States and the rest of the industrialized world. The theoretical case for secular stagnation in the sense of a shortage of demand is tied to the question of the level of the interest rate that would be needed to generate a situation of full employment. If the equilibrium interest rate is negative, or very small, the economy is likely to find itself growing slowly, and frequently encountering the zero lower bound on the interest rate.

    Research has shown a declining trend in estimates of the equilibrium interest rate. That finding has become more firmly established since the start of the Great Recession and the global financial crisis.11 Moreover, the level of the equilibrium interest rate seems likely to rise only gradually to a longer-run level that would still be quite low by historical standards.

    What factors determine the equilibrium interest rate? Fundamentally, the balance of saving and investment demands. Several trends have been cited as possible factors contributing to a decline in the long-run equilibrium real rate. One likely factor is persistent weakness in aggregate demand. Among the many reasons for that, as Larry Summers has noted, is that the amount of physical capital that the revolutionary information technology firms with high stock market valuations have needed is remarkably small. The slowdown of productivity growth, which as already mentioned has been a prominent and deeply concerning feature of the past six years, is another important factor.12 Others have pointed to demographic trends resulting in there being a larger share of the population in age cohorts with high saving rates.13 Some have also pointed to high saving rates in many emerging market countries, coupled with a lack of suitable domestic investment opportunities in those countries, as putting downward pressure on rates in advanced economies--the global savings glut hypothesis advanced by Ben Bernanke and others at the Fed about a decade ago.14

    Whatever the cause, other things being equal, a lower level of the long-run equilibrium real rate suggests that the frequency and duration of future episodes in which monetary policy is constrained by the ZLB will be higher than in the past. Prior to the crisis, some research suggested that such episodes were likely to be relatively infrequent and generally short lived.15 The past several years certainly require us to reconsider that basic assumption. Moreover, recent experience in the United States and other countries has taught us that conducting monetary policy at the effective lower bound is challenging.16 And while unconventional policy tools such as forward guidance and asset purchases have been extremely helpful and effective, all central banks would prefer a situation with positive interest rates, restoring their ability to use the more traditional interest rate tool of monetary policy.17

    The answer to the question "Will the equilibrium interest rate remain at today's low levels permanently?" is also that we do not know. Many of the factors that determine the equilibrium interest rate, particularly productivity growth, are extremely difficult to forecast. At present, it looks likely that the equilibrium interest rate will remain low for the policy-relevant future, but there have in the past been both long swings and short-term changes in what can be thought of as equilibrium real rates.

    Eventually, history will give us the answer. But it is critical to emphasize that history's answer will depend also on future policies, monetary and other, notably including fiscal policy.

Concluding Remarks
Well, are the answers all different than they were 50 years ago? No. The basic framework we learned a half-century ago remains extremely useful. But also yes: some of the answers are different, because the questions were not on previous exams--the problems they deal with were not evident fifty years ago. So the advice to potential policymakers is simple: Learn as much as you can, for most of it will come in useful at some stage of your career; but never forget that identifying what is happening in the economy is essential to your ability to do your job, and for that you need to keep your eyes, your ears, and your mind open, and with regard to your mouth--to use it with caution.

Many thanks again for this award and this opportunity to speak with you.

References
Bernanke, Ben S. (2005). "The Global Saving Glut and the U.S. Current Account Deficit," speech delivered at the Homer Jones Lecture, St. Louis, April 14.

Blanchard, Olivier (2014). "Where Danger Lurks: The Recent Financial Crisis Has Taught Us to Pay Attention to Dark Corners, Where the Economy Can Malfunction Badly," Finance and Development, vol. 51 (September), pp. 28-31.

-------- (2016). "The U.S. Phillips Curve: Back to the 60s?" Policy Brief 16-1. Washington: Peterson Institute for International Economics, January.

Blanchard, Olivier, Eugenio Cerutti, and Lawrence Summers (2015). "Inflation and Activity--Two Explorations and Their Monetary Policy Implications," IMF Working Paper WP/15/230. Washington: International Monetary Fund, November.

Blanchard, Olivier, and John Simon (2001). "The Long and Large Decline in U.S. Output Volatility," Brookings Papers on Economic Activity, 1, pp. 135-74.

Brunner, Karl, and Allan H. Meltzer (1972). "Money, Debt, and Economic Activity," Journal of Political Economy, vol. 80 (September-October), pp. 951-77.

Byrne, David M., John G. Fernald, and Marshall Reinsdorf (forthcoming). "Does the United States Have a Productivity Problem or a Measurement Problem?" Brookings Papers on Economic Activity.

Caballero, Ricardo J., Emmanuel Farhi, and Pierre-Olivier Gourinchas (2008). "An Equilibrium Model of 'Global Imbalances' and Low Interest Rates," American Economic Review, vol. 98 (1), pp. 358-93.

Daly, Mary C., John G. Fernald, Òscar Jordà, and Fernanda Nechio (2014). "Output and Unemployment Dynamics," Working Paper Series 2013-32. San Francisco: Federal Reserve Bank of San Francisco, November.

-------- (2014). "Interpreting Deviations from Okun's Law," FRBSF Economic Letter 2014-12. San Francisco: Federal Reserve Bank of San Francisco.

D'Amico, Stefania, Don H. Kim, and Min Wei (2014). "Tips from TIPS: The Informational Content of Treasury Inflation-Protected Security Prices," Finance and Economics Discussion Series 2014-24. Washington: Board of Governors of the Federal Reserve System, January.

Dornbusch, Rudiger (1976). "Expectations and Exchange Rate Dynamics," Journal of Political Economy, vol. 84 (December), pp. 1161-76.

Dornbusch, Rudiger, Stanley Fischer, and Richard Startz (2014). Macroeconomics, 12th ed. New York: McGraw-Hill Education.

Fischer, Stanley (forthcoming). "Monetary Policy, Financial Stability, and the Zero Lower Bound," American Economic Review (Papers and Proceedings).

Friedman, Milton (1968). "The Role of Monetary Policy," American Economic Review, vol. 58 (March), pp. 1-17.

Gordon, Robert J. (2014). "The Demise of U.S. Economic Growth: Restatement, Rebuttal, and Reflections," NBER Working Paper Series 19895. Cambridge, Mass.: National Bureau of Economic Research, February.

-------- (2016). The Rise and Fall of American Growth: The U.S. Standard of Living since the Civil War. Princeton, N.J.: Princeton University Press.

Hall, Robert E. (2014). "Quantifying the Lasting Harm to the U.S. Economy from the Financial Crisis," in Jonathan Parker and Michael Woodford, eds., NBER Macroeconomics Annual 2014, vol. 29. Chicago: University of Chicago Press.

Hamilton, James D., Ethan S. Harris, Jan Hatzius, and Kenneth D. West (2015). "The Equilibrium Real Funds Rate: Past, Present and Future," NBER Working Paper Series 21476. Cambridge, Mass.: National Bureau of Economic Research, August.

Hicks, John R. (1937). "Mr. Keynes and the 'Classics': A Suggested Interpretation," Econometrica, vol. 5 (April), pp. 147-59.

Johannsen, Benjamin K., and Elmar Mertens (2016). "The Expected Real Interest Rate in the Long Run: Time Series Evidence with the Effective Lower Bound," FEDS Notes. Washington: Board of Governors of the Federal Reserve System, February 9.

Jones, Charles I., and Paul M. Romer (2010). "The New Kaldor Facts: Ideas, Institutions, Population, and Human Capital," American Economic Journal: Macroeconomics, vol. 2 (January), pp. 224-45.

Kaldor, Nicholas (1957). "A Model of Economic Growth," Economic Journal, vol. 67 (December), pp. 591-624.

Keynes, John Maynard (1936). The General Theory of Employment, Interest and Money. London: Macmillan.

Kiley, Michael T. (2015). "What Can the Data Tell Us about the Equilibrium Real Interest Rate?" Finance and Economics Discussion Series 2015-077. Washington: Board of Governors of the Federal Reserve System, August.

Knotek, Edward S., II (2007). "How Useful Is Okun's Law?" Federal Reserve Bank of Kansas City, Economic Review, Fourth Quarter, pp. 73-103.

Laubach, Thomas, and John C. Williams (2003). "Measuring the Natural Rate of Interest," Review of Economics and Statistics, vol. 85 (November), pp. 1063-70.

Lucas, Robert E., Jr. (1972). "Expectations and the Neutrality of Money," Journal of Economic Theory, vol. 4 (April), pp. 103-24.

Mendoza, Enrique G., Vincenzo Quadrini, and José-Víctor Ríos-Rull (2009). "Financial Integration, Financial Development, and Global Imbalances," Journal of Political Economy, vol. 117 (3), pp. 371-416.

Modigliani, Franco (1944). "Liquidity Preference and the Theory of Interest and Money," Econometrica, vol. 12 (January), pp. 45-88.

Mokyr, Joel, Chris Vickers, and Nicolas L. Ziebarth (2015). "The History of Technological Anxiety and the Future of Economic Growth: Is This Time Different?" Journal of Economic Perspectives, vol. 29 (Summer), pp. 31-50.

Obstfeld, Maurice, and Kenneth Rogoff (1996). Foundations of International Macroeconomics. Cambridge, Mass.: MIT Press.

Okun, Arthur M. (1962). "Potential GNP: Its Measurement and Significance," Proceedings of the Business and Economics Statistics Section of the American Statistical Association, pp. 98-104.

Phelps, Edmund S. (1967). "Phillips Curves, Expectations of Inflation and Optimal Unemployment over Time," Economica, vol. 34 (August), pp. 254-81.

Reifschneider, David, and John C. Williams (2000). "Three Lessons for Monetary Policy in a Low-Inflation Era," Journal of Money, Credit, and Banking, vol. 32 (November), pp. 936-66.

Reinhart, Carmen M., and Kenneth S. Rogoff (2009). This Time Is Different: Eight Centuries of Financial Folly. Princeton, N.J.: Princeton University Press.

Rogoff, Kenneth (2001). "Dornbusch's Overshooting Model after Twenty-Five Years," speech delivered at the Mundell-Fleming Lecture, Second Annual Research Conference, International Monetary Fund, Washington, November 30 (revised January 22, 2002).

Solow, Robert M. (2004). "Introduction: The Tobin Approach to Monetary Economics," Journal of Money, Credit, and Banking, vol. 36 (August), pp. 657-63.

Stock, James H., and Mark W. Watson (2003). "Has the Business Cycle Changed and Why?" NBER Macroeconomics Annual 2002, vol. 17 (January).

Tobin, James (1969). "A General Equilibrium Approach to Monetary Theory," Journal of Money, Credit, and Banking, vol. 1 (February), pp. 15-29.

U.S. Executive Office of the President, Council of Economic Advisers (2015). Long-Term Interest Rates: A Survey. Washington: EOP.

Williams, John C. (2013). "A Defense of Moderation in Monetary Policy," Working Paper Series 2013-15. San Francisco: Federal Reserve Bank of San Francisco, July.

Woodford, Michael (2010). "Financial Intermediation and Macroeconomic Analysis," Journal of Economic Perspectives, vol. 24 (Fall), pp. 21-44.


1. I am grateful to David Lopez-Salido, Andrea Ajello, Elmar Mertens, Stacey Tevlin, and Bill English of the Federal Reserve Board for their assistance. Views expressed are mine, and are not necessarily those of the Federal Reserve Board or the Federal Open Market Committee.

2. A fuller description of the equations is contained in the appendix.

3. See Blanchard (2016).

4. See D'Amico, Kim, and Wei (2014).

5. See Dornbusch (1976) and Rogoff (2001).

6. See Jones and Romer (2010).

7. See, for instance, Mokyr, Vickers, and Ziebarth (2015).

8. See Byrne, Fernald, and Reinsdorf (forthcoming).

9. Inside the Fed, the range of 0 to 1/4 percent is generally called the ELB, the effective lower bound.

10. I am distinguishing in this section between secular stagnation as being caused by a deficiency of aggregate demand and another view, that output growth will be very slow in future because productivity growth will be very low. The view that future productivity growth will be very low has already been discussed, with the conclusion that we do not have a good basis for predictions of its future level, and that we simply do not know whether future productivity growth will be extremely low or higher than it has been recently. There is no shortage of views on this issue among economists, but the views to some extent appear to depend on whether the economist making the prediction is an optimist or a pessimist.

11. This research includes recent work by Johannsen and Mertens (2016) and Kiley (2015) that uses extensions of the original Laubach and Williams (2003) framework. An international perspective on medium-to-long-run real interest rates is provided by U.S. Executive Office of the President (2015). Reinhart and Rogoff (2009) and Hall (2014) discuss the long-lived effects of financial crises on economic performance. See also Hamilton and others (2015). I have, in addition, drawn on Fischer (forthcoming).

12. It is also a major factor explaining the phenomenon of the economy's impressive performance on the jobs front during a period of historically slow growth.

13. See, for instance, Gordon (2014, 2016).

14. See Bernanke (2005). See also the recent work by Caballero, Farhi, and Gourinchas (2008); and Mendoza, Quadrini, and Ríos-Rull (2009).

15. See, for instance, Reifschneider and Williams (2000), Blanchard and Simon (2001), and Stock and Watson (2003).

16. For a discussion of various issues reviewed by the Federal Open Market Committee in late 2008 and 2009 regarding the complications of unconventional monetary policy at the ZLB, see the set of staff memos on the Board's website.

17. See Williams (2013).

Tuesday, March 22, 2016

'MMT and Mainstream Macro'

Simon Wren-Lewis:

MMT and mainstream macro: There were a lot of interesting and useful comments on my last post on MMT, plus helpful (for me) follow-up conversations. Many thanks to everyone concerned for taking the time. Before I say anything more let me make it clear where I am coming from. I’m on the same page as far as policy’s current obsession with debt is concerned. Where I seem to differ from some who comment on my blog, people who say they are following MMT, is whether you need to be concerned about debt when monetary policy is not constrained by the Zero Lower Bound. I say yes, they say no, but for reasons I could not easily understand.
This was the point of the ‘nothing new’ comment. It was not meant to be a put down. It was meant to suggest that a mainstream economist like myself could come to some of the same conclusions as MMT writers, and more to the point, that just because I was a mainstream economist does not mean I misunderstood how government financing works. I wrote it because I was getting comments from MMT followers that seemed nonsensical to me, but which should not have been, because the basics of MMT are understandable using mainstream theory. ...
What mainstream theory says is that some combination of monetary and fiscal policy can always end a recession caused by demand deficiency. Full stop: no ifs or buts. That is why we had fiscal expansion in 2009 in the US, UK, Germany, China and elsewhere. The contribution of some influential mainstream economists to this switch from fiscal stimulus to austerity in 2010 was minor at most, and to imagine otherwise does nobody any favours. The fact that policymakers went against basic macro theory tells us important things about the transmission mechanism of economic knowledge, which all economists have to address.

Brad DeLong:

Yes, Expansionary Fiscal Policy in the North Atlantic Would Solve Many of Our Problems. Why Do You Ask?: ... In my view, the economics of Abba Lerner—what is now called MMT—is not always right: It is not always possible for the government to spend freely to attain full employment, use monetary policy to keep the debt under control, and rely on rising inflation as the only signal needed of whether and when policy needs to be tightened. Why not? Because it is possible that the bond market can get itself into an unsustainable position, in which underlying inflationary pressures are masked until it is too late to rebalance government finances without a financial crisis.
But, in my view, right now the economics of Abba Lerner is 100% correct. The U.S. (and Europe!) should use expansionary fiscal policy to rebalance the economy at full employment and potential output. And interest rates are so low that doing so does not require any additional monetary policy steps to keep the debt under control.
Japan, alas, confronts us with a difficult and much more devilish program of economic policy. Partial and nearly painless debt repudiation via inflation and financial repression seems to me to be the best way forward—if that can be attained. But more on that anon.

Tuesday, February 02, 2016

Economics is Changing

Not much out there to excerpt and blog, so I threw down a few thoughts for you to tear apart:

I hear frequently that economics needs to change, and it has, at least in the questions we ask. Twenty years ago, the dominant conversation in economics was about the wonder of markets. We needed to free the banking system from regulations so it could do its important job of turning saving into productive investment unfettered by government interference. Trade barriers needed to come down to make everyone better off. There was little need to worry about monopoly power; markets were contestable and the problem would take care of itself. Unions simply got in the way of our innovative, dynamic economy and needed to be broken so the market could do its thing and make everyone better off. Inequality was a good thing: it created the right incentives for people to work hard and try to get ahead, and the markets would ensure that everyone, from CEOs on down, would be paid according to their contribution to society. The problem wasn't that the markets somehow distributed goods unfairly, or at least in a way that is at odds with marginal productivity theory; it was that some workers lacked the training to reap higher rewards. We simply needed to prepare people better to compete in modern, global markets; there was nothing fundamentally wrong with markets themselves. The move toward market fundamentalism wasn't limited to Republicans; Democrats joined in too.

That view is changing. Inequality has burst onto the economics research scene. Is rising inequality an inevitable feature of capitalism? Does the system reward people fairly? Can inequality actually inhibit economic growth? Not so long ago, the profession ignored these questions. Similarly for the financial sector. The profession has moved from singing the praises of the financial system and its ability to channel savings into the most productive investments to asking whether the financial sector produces as much value for society as was claimed in the past. We now ask whether banks are too big and powerful, whereas in the past that size was praised as a sign of how super-sized banks could do super-sized things for the economy and compete with banks around the world. We have gone from saying that the shadow banking system can self-regulate as it provides important financial services to homeowners and businesses to asking what types of regulation would be best. Economists used to pretty much ignore the financial sector altogether. It was a black box that simply turned S (saving) into I (investment), and did so efficiently, and there was no need to get into the details. Our modern financial system couldn't crash like those antiquated systems that were around during and before the Great Depression. There was no need to include it in our macro models, at least not in any detail, or even ask questions about what might happen if there was a financial crisis.

There are other changes too. Economists now question whether markets reward labor according to changes in productivity. Why is it that wages have stagnated even as worker productivity has gone up? Is it because bargaining power is asymmetric in labor markets, with firms having the advantage? What's the best way to elevate the working class? In the past, an argument was made that the best way to help everyone is to cut taxes for the wealthy, and all the great things they would do with the extra money and the incentives that tax cuts bring would trickle down and help the working class. That didn't happen and although there are still echoes of this argument on the political right, the questions have certainly changed. Much of the current research agenda in economics is devoted to understanding why wage income has stagnated for most people, and how to fix it. We've moved beyond "technology is the problem and better education is the answer" to asking whether the market system itself, and the market failures that come with it (including political influence over policy), has something to do with this outcome.

Fiscal policy is another example of change within the profession. Twenty years ago, nobody, well hardly anyone, was doing research on the impact of fiscal policy and its use as a countercyclical policy instrument. All of the focus was on monetary policy. Fiscal policy would only be needed in a severe recession, and that wouldn't happen in our modern economy, and in any case it wouldn't work (not everyone believed fiscal policy was ineffective, but many did). That has changed. Fiscal policy is now an integral component of many modern DSGE models, and -- surprise -- the models do not tell us fiscal policy is ineffective. Quite the opposite, it works well in deep recessions (though near full employment its effectiveness wanes).

Monetary policy has also come under scrutiny. In the past, the Taylor rule was praised as responsible for the Great Moderation. We had discovered the key to a stable economy. But the Great Recession changed that. We now wonder if other policy rules might serve as a better guidepost (e.g. nominal GDP targeting), we ask about negative interest rates, unconventional policy, all sorts of questions that were hardly asked or even imagined not so long ago. We wonder about regulation of the financial sector, and how to do it correctly (in the past, it was about how to remove regulations correctly).

I don't mean to suggest that economics is now on the right track. The old guard is still there, and still influential. But it's hard to deny that the questions we are asking have gone through a considerable evolution since the onset of the recession, and when questions change, new models and new tools are developed to answer them. The models do not come first -- models aren't built in search of questions, models are built to answer questions -- and the fact that we are asking new (and in my view much better) questions is a sign of further change to come.

Saturday, January 30, 2016

'Networks and Macroeconomic Shocks'

Daron Acemoglu, Ufuk Akcigit, and William Kerr:

Networks and macroeconomic shocks, VoxEU: How shocks reverberate throughout the economy has been a central question in macroeconomics. This column suggests that input-output linkages can play an important role in this issue. Supply-side (productivity) shocks impact the industry itself and those consuming its goods, while a demand-side shock affects the industry and its suppliers. The authors also find that the initial impact of an industry shock can be substantially amplified due to input-output linkages.
How shocks propagate through the economy and contribute to fluctuations has been one of the central questions of macroeconomics. We argue that a major mechanism for such propagation is input-output linkages. Through input-output chains, shocks to one industry can influence ‘downstream’ industries that buy inputs from the affected industry, as well as ‘upstream’ industries that produce inputs for the affected industry. These interlinkages can propagate and potentially amplify the initial shock to further firms and industries not directly affected, influencing the macro economy to a much greater extent than the original shock could do on its own.
Introduction
The significance of the idea that a shock to one firm or disaggregated industry could be a major contributor to economic fluctuations was downplayed in Lucas’ (1977) famous essay on business cycles. Lucas suggested that due to the law of large numbers, idiosyncratic shocks to individual firms should cancel each other out when considering the economy in the aggregate, and therefore the broader impact should not be substantial. Recent research, however, has questioned this perspective. For example, Gabaix (2011) shows that when the firm size distribution has very fat tails, the power of the law of large numbers is diminished and shocks to large firms can overwhelm parallel shocks to small firms, allowing such shocks to have a substantial impact on the economy at large. Acemoglu et al. (2012) show how microeconomic shocks can be at the root of macroeconomic fluctuations when the input-output structure of an economy exhibits sufficient asymmetry in the role of some disaggregated industries as (major) suppliers to others.
In Acemoglu et al. (2016), we empirically document the role of input-output linkages as a mechanism for the transmission of industry-level shocks to the rest of the economy. Our approach differs from previous research in two primary ways.
  • First, whereas much prior work has focused on the medium-term implications of such network effects (e.g. over more than a decade), we emphasise the influence of these networks on short-term business cycles (e.g. over 1-3 years).
  • Second, we begin to separate types of shocks to the economy and the differences in how they propagate.
We build a model that predicts that supply-side (e.g. productivity, innovation) shocks primarily propagate downstream, whereas demand-side shocks (e.g. trade, government spending) propagate upstream. For example, a productivity shock to the tire industry will tend to strongly affect the downstream automobile industry, while a shock to government spending in the car industry will reverberate upstream to the tire industry.  We then demonstrate these findings empirically using four historical examples of industry-level shocks, two on the demand side and two on the supply side, and confirm the predictions of the model.
Model and prediction
We model an economy building on Long and Plosser (1983) and Acemoglu et al. (2012), in which each firm produces goods that are either consumed by other firms as inputs or sold in the final goods sector. The model predicts that supply-side (productivity) shocks impact the industry itself and those consuming its goods, while a demand-side shock affects the industry and its suppliers. The total impact of these shocks – taking into account that customers of customers will be also affected in response to supply-side shocks, and suppliers of suppliers will also be affected in response to demand-side shocks – is conveniently summarised by the Leontief inverse that played a central role in traditional input-output analysis.
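To see how the Leontief inverse adds up the direct and knock-on effects, here is a small Python sketch; the three-industry coefficient matrix is invented for illustration, not taken from the paper:

    # Leontief inverse: A[i][j] is the amount of good i needed per unit of
    # good j's output, so gross output x solves x = A @ x + f, giving
    # x = (I - A)^(-1) @ f, with (I - A)^(-1) = I + A + A^2 + ...
    import numpy as np

    A = np.array([[0.0, 0.3, 0.1],   # invented input shares
                  [0.2, 0.0, 0.3],
                  [0.1, 0.2, 0.0]])

    leontief = np.linalg.inv(np.eye(3) - A)

    f = np.array([1.0, 0.0, 0.0])    # one extra unit of final demand for industry 0
    x = leontief @ f

    print(x)        # response of every industry, direct plus indirect
    print(x.sum())  # total response exceeds the direct shock of 1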
The intuition behind the asymmetry in propagation for supply versus demand shocks relates to the Cobb-Douglas form of the production function and preferences. If productivity in a given industry is lowered by a shock, firms in that industry will produce fewer goods and the price of their goods will rise. Due to the Cobb-Douglas structure, these effects cancel each other out for upstream firms, leaving them unaffected, while downstream firms feel the increase in prices and consequently lower their overall production. On the other hand, if demand in a certain industry increases, firms in that industry increase production, necessitating a corresponding increase in input production by upstream firms. Because of constant returns to scale, however, the increased demand does not affect prices, and so downstream firms are unaffected.
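In symbols, the cancellation works through constant expenditure shares; a sketch in generic Cobb-Douglas notation (mine, not the paper's exact formulation):

    % Industry i's technology, and the input spending it implies:
    y_i = z_i \,\ell_i^{\alpha_i} \prod_j x_{ij}^{a_{ij}},
    \qquad
    p_j x_{ij} = a_{ij}\, p_i y_i .

Spending on each input j is the fixed share a_{ij} of industry i's revenue p_i y_i; with unit-elastic (Cobb-Douglas) demand, a productivity shock moves p_i and y_i in offsetting directions, leaving that revenue, and hence orders to upstream suppliers, unchanged, while downstream users of good i still face the changed price.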
We also incorporate into the model geographic spillovers, showing that shocks in a particular industry will also influence industries that tend to be concentrated in the same area, as shown empirically by Autor et al. (2013) and Mian and Sufi (2014). The idea is that a shock to the first industry will influence local demand generally, and therefore will change demand, output, and employment for other local producers.
Empirics
We test the model’s prediction by examining the implications of four shocks: changes in imports from China; changes in federal government spending; total factor productivity (TFP) shocks; and productivity shocks coming from foreign industry patents. The first two are demand-side shocks; the latter two affect the supply side. For each of these shocks, we show the effects on directly impacted industries as well as upstream and downstream effects. Our core industry-level data are taken from the NBER-CES Manufacturing Industry Database for the years 1991-2009, while input-output linkages are drawn from the Bureau of Economic Analysis’ 1992 Input-Output Matrix and the 1991 County Business Patterns Database.
For brevity we focus here on the first example, where changes in imports from China influence the demand in affected industries. Of course, rising import penetration in the US for a given industry could be endogenous and connected to other factors, such as sagging US productivity growth. We therefore instrument import penetration from China to the US with rising trade from China to eight non-US countries relative to the industry’s market size in the US, following Autor et al. (2013) and Acemoglu et al. (2015). Chinese imports to other countries can be taken as exogenous measures of the rise of China in world trade over the last two decades.
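As a concrete illustration of the instrumental-variable logic, here is a hand-rolled two-stage least squares on simulated data; the variable names and numbers are invented, and this is not the paper's actual specification.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 500

# z: Chinese import growth in eight non-US markets (the instrument)
# u: unobserved confounder (e.g. industry productivity trends)
# x: US import penetration (endogenous); y: US value-added growth
z = rng.normal(size=n)
u = rng.normal(size=n)
x = 0.8 * z + 0.5 * u + rng.normal(size=n)
y = -0.5 * x + u + rng.normal(size=n)      # true causal effect: -0.5

def ols(X, y):
    return np.linalg.lstsq(X, y, rcond=None)[0]

X = np.column_stack([np.ones(n), x])
Z = np.column_stack([np.ones(n), z])
print("OLS slope: ", ols(X, y)[1])         # biased toward zero by u

# 2SLS: project x on the instrument, then regress y on the fitted values.
x_hat = Z @ ols(Z, x)
print("2SLS slope:", ols(np.column_stack([np.ones(n), x_hat]), y)[1])
```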
The empirics confirm the predictions of our model. A one standard deviation increase in imports from China reduces the directly affected industry’s value-added growth by 3.4%, while the same shock hitting an industry’s customers reduces that industry’s value-added growth by 7.6%.
  • In other words, in a basic regression the upstream effect is more than twice as large as the effect on the directly hit industry.
  • Downstream effects, on the other hand, are of the opposite sign and are not statistically significant, confirming the model’s prediction.
Figure 1 shows the impulse response function when our framework is adjusted to allow for lags and to measure multipliers. Again, a one standard deviation shock to value added through trade produces network effects that are much greater than the own effects on the industry.
  • We calculate that the effect of a shock to one industry on the entire economy is over six times as large as the effect on the industry itself, due to input-output linkages.
Similar effects are found for employment, and the findings are shown to be robust under many different specification checks.

Figure 1. Response to a one standard deviation value-added shock from Chinese imports


The other three shocks – changes in government spending, TFP shocks, and foreign patenting shocks – also broadly support the model’s predictions, with the first leading to upstream effects and the latter two to downstream effects. Extensions of the analysis also show that geographic proximity facilitates the propagation of shocks, particularly those on the demand side.
Conclusions
Shocks to particular industries can reverberate throughout the economy through networks of firms or industries that supply each other with inputs. Our work shows that these shocks are indeed powerfully transmitted through the input-output chain of the economy, and their initial impact can be substantially amplified. These findings open the way to a systematic investigation of the role of input-output linkages in underpinning rapid expansions and deep recessions, especially once we move away from simple, fully competitive models of the macroeconomy.
References
Acemoglu, D, U Akcigit, and W Kerr (2016), “Networks and the Macroeconomy: An Empirical Exploration”, NBER Macroeconomics Annual, forthcoming. NBER Working Paper 21344.
Acemoglu, D, V Carvalho, A Ozdaglar, and A Tahbaz-Salehi (2012), “The Network Origins of Aggregate Fluctuations”, Econometrica, 80:5, 1977-2016.
Acemoglu, D, D Autor, D Dorn, G Hanson, and B Price (2015), “Import Competition and the Great U.S. Employment Sag of the 2000s”, Journal of Labor Economics, 34(S1), S141-S198.
Autor, D, D Dorn, and G Hanson (2013), “The China Syndrome: Local Labor Market Effects of Import Competition in the United States”, American Economic Review, 103:6, 2121-2168.
Gabaix, X (2011), “The Granular Origins of Aggregate Fluctuations”, Econometrica, 79, 733-772.
Long, J and C Plosser (1983), “Real Business Cycles”, Journal of Political Economy, 91:1, 39-69.
Lucas, R (1977), “Understanding Business Cycles”, Carnegie Rochester Conference Series on Public Policy, 5, 7-29.
Mian, A and A Sufi (2014), “What Explains the 2007-2009 Drop in Employment?”, Econometrica, 82:6, 2197-2223.

Wednesday, January 13, 2016

'Is Mainstream Academic Macroeconomics Eclectic?'

Simon Wren-Lewis:

Is mainstream academic macroeconomics eclectic?: For economists, and those interested in macroeconomics as a discipline
Eric Lonergan has a short little post that is well worth reading...; it makes an important point in a clear and simple way that cuts through a lot of the nonsense written on macroeconomics nowadays. The big models/schools of thought are not right or wrong, they are just more or less applicable to different situations. You need New Keynesian models in recessions, but Real Business Cycle models may describe some inflation free booms. You need Minsky in a financial crisis, and in order to prevent the next one. As Dani Rodrik says, there are many models, and the key questions are about their applicability.
If we take that as given, the question I want to ask is whether current mainstream academic macroeconomics is also eclectic. ... My answer is yes and no.
Let’s take the five ‘schools’ that Eric talks about. ... Indeed the variety of models that academic macro currently uses is far wider than this.
Does this mean academic macroeconomics is fragmented into lots of cliques, some big and some small? Not really... This is because these models (unlike those of 40+ years ago) use a common language. ...
It means that the range of assumptions that models (DSGE models if you like) can make is huge. There is nothing formally that says every model must contain perfectly competitive labour markets where the simple marginal product theory of distribution holds, or even where there is no involuntary unemployment, as some heterodox economists sometimes assert. Most of the time individuals in these models are optimising, but I know of papers in the top journals that incorporate some non-optimising agents into DSGE models. So there is no reason in principle why behavioural economics could not be incorporated. If too many academic models do appear otherwise, I think this reflects the sociology of macroeconomics and the history of macroeconomic thought more than anything (see below).
It also means that the range of issues that models (DSGE models) can address is also huge. ...
The common theme of the work I have talked about so far is that it is microfounded. Models are built up from individual behaviour.
You may have noted that I have so far missed out one of Eric’s schools: Marxian theory. What Eric wants to point out here is clear in his first sentence. “Although economists are notorious for modelling individuals as self-interested, most macroeconomists ignore the likelihood that groups also act in their self-interest.” Here I think we do have to say that mainstream macro is not eclectic. Microfoundations is all about grounding macro behaviour in the aggregate of individual behaviour.
I have many posts where I argue that this non-eclecticism in terms of excluding non-microfounded work is deeply problematic. Not so much for an inability to handle Marxian theory (I plead agnosticism on that), but in excluding the investigation of other parts of the real macroeconomic world.  ...
The confusion goes right back, as I will argue in a forthcoming paper, to the New Classical Counter Revolution of the 1970s and 1980s. That revolution, like most revolutions, was not eclectic! It was primarily a revolution about methodology, about arguing that all models should be microfounded, and in terms of mainstream macro it was completely successful. It also tried to link this to a revolution about policy, about overthrowing Keynesian economics, and this ultimately failed. But perhaps as a result, methodology and policy get confused. Mainstream academic macro is very eclectic in the range of policy questions it can address, and conclusions it can arrive at, but in terms of methodology it is quite the opposite.

'The Validity of the Neo-Fisherian Hypothesis'

Narayana Kocherlakota:

Validity of the Neo-Fisherian Hypothesis: Warning: Super-Technical Material Follows

The neo-Fisherian hypothesis is as follows: If the central bank commits to peg the nominal interest rate at R, then the long-run level of inflation in the economy is increasing in R. Using finite horizon models, I show that the neo-Fisherian hypothesis is only valid if long-run inflation expectations rise at least one for one with the peg R. However, in an infinite horizon model, the neo-Fisherian hypothesis is always true. I argue that this result indicates why macroeconomists should use finite horizon models, not infinite horizon models. See this linked note and my recent NBER working paper for technical details.

In any finite horizon economy, the validity of the neo-Fisherian hypothesis depends on how sensitive long-run inflation expectations are to the specification of the interest rate peg.

  • If long-run inflation expectations rise less than one-for-one (or fall) with the interest rate peg, then the neo-Fisherian hypothesis is false.
  • If long-run inflation expectations rise at least one-for-one with the interest rate peg, then the neo-Fisherian hypothesis is true.

Intuitively, when the peg R is high, people anticipate tight future monetary policy. The future tightness of monetary policy pushes down on current inflation. The only way to offset this effect is for long-run inflation expectations to rise sufficiently in response to the peg.
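The bookkeeping behind the hypothesis is just the Fisher equation (a textbook statement, not the specific finite-horizon model in the linked note). With the long-run real rate r* pinned down by fundamentals,

```latex
R = r^{*} + \pi^{e} \quad\Longrightarrow\quad \pi^{e} = R - r^{*},
```

so a permanent peg at a higher R is consistent with a steady state only if long-run expected inflation eventually rises one-for-one with the peg; the empirical question in the bullets above is whether expectations actually behave that way.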

In contrast, in an infinite horizon model, the neo-Fisherian hypothesis is valid - but only because of an odd discontinuity. As the horizon length converges to infinity, the level of inflation becomes infinitely sensitive to long-run inflation expectations. This means that, for almost all specifications of long-run inflation expectations, inflation converges to infinity or negative infinity as the horizon converges to infinity. Users of infinite horizon models typically discard all of these limiting “infinity” equilibria by setting the long-run expected inflation rate to be equal to the difference between R and r*. In this way, the use of an infinite horizon - as opposed to a long but finite horizon - creates a tight implicit restriction on the dependence of long-run inflation expectations on the interest rate peg.

To summarize: The validity of the neo-Fisherian hypothesis depends on an empirical question: how do long-run inflation expectations depend on the central bank's peg? This empirical question is eliminated when we use infinite horizon models - but this is a reason not to use infinite horizon models.

In case you missed this from George Evans and Bruce McGough over the holidays (on learning models and the validity of the Neo-Fisherian Hypothesis, also "super-technical"):

The Neo-Fisherian View and the Macro Learning Approach

I've been surprised that none of the Neo-Fisherians have responded.

Thursday, January 07, 2016

'Confidence as a Political Device'

Simon Wren-Lewis:

Confidence as a political device: This is a contribution to the discussion about models started by Krugman, DeLong and Summers, and in particular to the use of confidence. (Martin Sandbu has an excellent summary, although as you will see I think he is missing something.) The idea that confidence can on occasion be important, and that it can be modeled, is not (in my view) in dispute. For example the very existence of banks depends on confidence (that depositors can withdraw their money when they wish), and when that confidence disappears you get a bank run.
But the leap from the statement that ‘in some circumstances confidence matters’ to ‘we should worry about bond market confidence in an economy with its own central bank in the middle of a depression’ is a huge one...
When people invoke the idea of confidence, other people (particularly economists) should be automatically suspicious. The reason is that it frequently allows those who represent the group whose confidence is being invoked to further their own self interest. The financial markets are represented by City or Wall Street economists, and you invariably see market confidence being invoked to support a policy position they have some economic or political interest in. Bond market economists never saw a fiscal consolidation they did not like, so the saying goes, so of course market confidence is used to argue against fiscal expansion. Employers drum up the importance of maintaining their confidence whenever taxes on profits (or high incomes) are involved. As I argue in this paper, there is a generic reason why financial market economists play up the importance of market confidence, so they can act as high priests. (Did these same economists go on about the dangers of rising leverage when confidence really mattered, before the global financial crisis?)
The general lesson I would draw is this. If the economics point towards a conclusion, and people argue against it based on ‘confidence’, you should be very, very suspicious. You should ask where is the model (or at least a mutually consistent set of arguments), and where is the evidence that this model or set of arguments is applicable to this case? Policy makers who go with confidence based arguments that fail these tests because it accords with their instincts are, perhaps knowingly, following the political agenda of someone else.

Sunday, January 03, 2016

'Musings on Whether We Consciously Know More or Less than What Is in Our Models…'

Brad DeLong:

Musings on Whether We Consciously Know More or Less than What Is in Our Models…: Larry Summers presents as an example of his contention that we know more than is in our models–that our models are more a filing system, and more a way of efficiently conveying part of what we know, than they are an idea-generating mechanism–Paul Krugman’s Mundell-Fleming lecture, and its contention that floating exchange-rate countries that can borrow in their own currency should not fear capital flight in a liquidity trap. He points to Olivier Blanchard et al.’s empirical finding that capital outflows do indeed appear to be not expansionary but contractionary ...

[There's quite a bit more in Brad's post.]

Wednesday, December 30, 2015

'The Neo-Fisherian View and the Macro Learning Approach'

I asked my colleagues George Evans and Bruce McGough if they would like to respond to a recent post by Simon Wren-Lewis on Woodford’s “reflexive equilibrium” approach to learning:

The neo-Fisherian view and the macro learning approach
George W. Evans and Bruce McGough
Economics Department, University of Oregon
December 30, 2015

Cochrane (2015) argues that low interest rates are deflationary — a view that is sometimes called neo-Fisherian. In that paper John Cochrane argues that raising the interest rate and pegging it at a higher level will raise the inflation rate, in accordance with the Fisher equation, and he works through the details in a New Keynesian model.

Garcia-Schmidt and Woodford (2015) argue that the neo-Fisherian claim is incorrect and that low interest rates are both expansionary and inflationary. In making this argument Mariana Garcia-Schmidt and Michael Woodford use an approach that has a lot of common ground with the macro learning literature, which focuses on how economic agents might come to form expectations, and in particular whether coordination on a particular rational expectations equilibrium (REE) is plausible. This literature examines the stability of an REE under learning and has found that interest-rate pegs of the type discussed by Cochrane lead to REE that are not stable under learning. Garcia-Schmidt and Woodford (2015) obtain an analogous instability result using a new bounded-rationality approach that provides specific predictions for monetary policy. There are novel methodological and policy results in the Garcia-Schmidt and Woodford (2015) paper. However, we will here focus on the common ground with other papers in the learning literature that also argue against the neo-Fisherian claim.

The macro learning literature posits that agents start with boundedly rational expectations e.g. based on possibly non-RE forecasting rules. These expectations are incorporated into a “temporary equilibrium” (TE) environment that yields the model’s endogenous outcomes. The TE environment has two essential components: a decision-theoretic framework which specifies the decisions made by agents (households, firms etc.) given their states (values of exogenous and pre-determined endogenous state variables) and expectations;1 and a market-clearing framework that coordinates the agents’ decisions and determines the values of the model’s endogenous variables. It is useful to observe that, taken together, the two components of the TE environment yield the “TE-map” that takes expectations and (aggregate and idiosyncratic) states to outcomes.

The adaptive learning framework, which is the most popular formulation of learning in macro, proceeds recursively. Agents revise their forecast rules in light of the data realized in the previous period, e.g. by updating their forecast rules econometrically. The exogenous shocks are then realized, expectations are formed, and a new temporary equilibrium results. The equilibrium path under learning is defined recursively. One can then study whether the economy under adaptive learning converges over time to the REE of interest.2

The essential point of the learning literature is that an REE, to be credible, needs an explanation for how economic agents come to coordinate on it. This point is acute in models in which there are multiple RE solutions, as can arise in a wide range of dynamic macro models. This has been an issue in particular in the New Keynesian model, but it also arises, for example, in overlapping generations models and in RBC models with distortions. The macro learning literature provides a theory for how agents might learn over time to forecast rationally, i.e. to come to have RE (rational expectations). The adaptive learning approach found that agents will over time come to have rational expectations (RE) by updating their econometric forecasting models provided the REE satisfies “expectational stability” (E-stability) conditions. If these conditions are not satisfied then convergence to the REE will not occur and hence it is implausible that agents would be able to coordinate on the REE. E-stability then also acts as a selection device in cases in which there are multiple REE.
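For reference, the E-stability condition has a compact standard form (as developed in Evans and Honkapohja (2001); the generic statement below is not tied to any one model). If agents' perceived law of motion has parameter vector φ and the temporary equilibrium implies an actual law of motion T(φ), then an REE is a fixed point φ̄ = T(φ̄), and it is E-stable when the differential equation

```latex
\frac{d\varphi}{d\tau} \;=\; T(\varphi) - \varphi
```

is locally asymptotically stable at φ̄, i.e. when the eigenvalues of DT(φ̄) all have real parts less than one. Least-squares learning of the kind described above converges locally to the REE exactly when this condition holds.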

The adaptive learning approach has the attractive feature that the degree of rationality of the agents is natural: though agents are boundedly rational, they are still fairly sophisticated, estimating and updating their forecasting models using statistical learning schemes. For a wide range of models this gives plausible results. For example, in the basic Muth cobweb model, the REE is learnable if supply and demand have their usual slopes; however, the REE, though still unique, is not learnable if the demand curve is upward sloping and steeper than the supply curve. In an overlapping generations model, Lucas (1986) used an adaptive learning scheme to show that though the overlapping generations model of money has multiple REE, learning dynamics converge to the monetary steady state, not to the autarky solution. Early analytical adaptive learning results were obtained in Bray and Savin (1986) and the formal framework was greatly extended in Marcet and Sargent (1989). The book by Evans and Honkapohja (2001) develops the E-stability principle and includes many applications. Many more applications of adaptive learning have been published over the last fifteen years.
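A minimal simulation of the cobweb example just mentioned (a toy parameterization, not taken from the cited papers): market clearing gives p_t = μ + α E_{t-1}p_t + η_t, agents forecast with the running mean of past prices, and learning converges to the REE mean μ/(1 − α) exactly when α < 1, which holds under the usual slopes (where α < 0).

```python
import numpy as np

def cobweb(alpha, T=2000, mu=1.0, noise=0.1, seed=0):
    """Adaptive learning in the Muth cobweb model:
    p_t = mu + alpha * E[p_t] + e_t, with E[p_t] the mean of past prices."""
    rng = np.random.default_rng(seed)
    expectation = 0.0
    for t in range(1, T + 1):
        price = mu + alpha * expectation + noise * rng.normal()
        expectation += (price - expectation) / t   # decreasing-gain update
    return expectation

# Usual slopes (demand down, supply up) imply alpha < 0: learning converges
# to the REE mean mu / (1 - alpha).
print(cobweb(alpha=-0.5), "vs REE", 1.0 / (1.0 + 0.5))
# alpha > 1 (demand upward sloping and steeper than supply): expectations
# drift away from the REE value of -2.0 without bound.
print(cobweb(alpha=1.5, T=200))
```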

There are other approaches to learning in macro that have a related theoretical motivation, e.g. the “eductive” approach of Guesnerie asks whether mental reasoning by hyper-rational agents, with common knowledge of the structure and of the rationality of other agents, will lead to coordination on an REE. A fair amount is known about the connections between the stability conditions of the alternative adaptive and eductive learning approaches.3 The Garcia-Schmidt and Woodford (2015) “reflective equilibrium” concept provides a new approach that draws on both the adaptive and eductive strands as well as on the “calculation equilibrium” learning model of Evans and Ramey (1992, 1995, 1998). These connections are outlined in Section 2 of Garcia-Schmidt and Woodford (2015).4

The key insight of these various learning approaches is that one cannot simply take RE (which in the nonstochastic case reduces to PF, i.e. perfect foresight) as given. An REE is an equilibrium that demands an explanation for how it can be attained. The various learning approaches rely on a temporary equilibrium framework, outlined above, which goes back to Hicks (1946). A big advantage of the TE framework, when developed at the agent level and aggregated, is that in conjunction with the learning model an explicit causal story can be developed for how the economy evolves over time.

The lack of a TE or learning framework in Cochrane (2011, 2015) is a critical omission. Cochrane (2009) criticized the Taylor principle in NK models as requiring implausible assumptions on what the Fed would do to enforce its desired equilibrium path; however, this view simply reflects the lack of a learning perspective. McCallum (2009) argued that for a monetary rule satisfying the Taylor principle the usual RE solution used by NK modelers is stable under adaptive learning, while the non-fundamental bubble solution is not. Cochrane (2009, 2011) claimed that these results hinged on the observability of shocks. In our paper “Observability and Equilibrium Selection,” Evans and McGough (2015b), we develop the theory of adaptive learning when fundamental shocks are unobservable, and then, as a central application, we consider the flexible-price NK model used by Cochrane and McCallum in their debate. We carefully develop this application using an agent-level temporary equilibrium approach and closing the model under adaptive learning. We find that if the Taylor principle is satisfied, then the usual solution is robustly stable under learning, while the non-fundamental price-level bubble solution is not. Adaptive learning thus operates as a selection criterion and it singles out the usual RE solution adopted by proponents of the NK model. Furthermore, when monetary policy does not obey the Taylor principle then neither of the solutions is robustly stable under learning; an interest-rate peg is an extreme form of such a policy, and the adaptive learning perspective cautions that this will lead to instability. We discuss this further below.

The agent-level/adaptive learning approach used in Evans and McGough (2015b) allows us to specifically address several points raised by Cochrane. He is concerned that there is no causal mechanism that pins down prices. The TE map provides this, in the usual way, through market clearing given expectations of future variables. Cochrane also states that the lack of a mechanism means that the NK paradigm requires that the policymakers be interpreted as threatening to “blow up” the economy if the standard solution is not selected by agents.5 This is not the case. As we say in our paper (p. 24-5), “inflation is determined in temporary equilibrium, based on expectations that are revised over time in response to observed data. Threats by the Fed are neither made nor needed ... [agents simply] make forecasts the same way that time-series econometricians typically forecast: by estimating least-squares projections of the variables being forecasted on the relevant observables.”

Let us now return to the issue of interest rate pegs and the impact of changing the level of an interest rate peg. The central adaptive learning result is that interest rate pegs give REE that are unstable under learning. This result was first given in Howitt (1992). A complementary result was given in Evans and Honkapohja (2003) for time-varying interest rate pegs designed to optimally respond to fundamental shocks. As discussed above, Evans and McGough (2015b) show that the instability result also obtains when the fundamental shocks are not observable and the Taylor principle is not satisfied. The economic intuition in the NK model is very strong and is essentially as follows. Suppose we are at an REE (or PFE) at a fixed interest rate and with expected inflation at the level dictated by the Fisher equation. Suppose that there is a small increase in expected inflation. With a fixed nominal interest rate this leads to a lower real interest rate, which increases aggregate demand and output. This in turn leads to higher inflation, which under adaptive learning leads to higher expected inflation, destabilizing the system. (The details of the evolution of expectations and the model dynamics depend, of course, on the precise decision rules and econometric forecasting model used by agents). In an analogous way, expected inflation slightly lower than the REE/PFE level leads to cumulatively lower levels of inflation, output and expected inflation.
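The feedback loop in this intuition can be reproduced in a few lines (a deliberately stripped-down sketch with invented parameters, not the model of any of the cited papers).

```python
# Stylized NK blocks at a fixed nominal rate peg R:
#   IS curve:  y_t  = -sigma * ((R - pi_e) - r_star)
#   Phillips:  pi_t = pi_e + kappa * y_t
#   learning:  pi_e adjusts toward realized inflation
R, r_star = 0.03, 0.02
sigma, kappa, gain = 1.0, 0.3, 0.2

pi_e = (R - r_star) + 0.001   # start a hair above the Fisher-consistent level
for t in range(80):
    y = -sigma * ((R - pi_e) - r_star)   # lower real rate -> higher output
    pi = pi_e + kappa * y                # higher output -> higher inflation
    pi_e += gain * (pi - pi_e)           # adaptive updating -> higher pi_e

print(pi_e)   # the small initial deviation has grown: the peg is unstable
```

A symmetric run with the initial deviation below R − r* spirals downward instead, which is the deflationary branch described in the text.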

Returning to the NK model, additional insight is obtained by considering a nonlinear NK model with a global Taylor rule that leads to two steady states. This model was studied by Benhabib, Schmitt-Grohe and Uribe in a series of papers, e.g. Benhabib, Schmitt-Grohe, and Uribe (2001), which show that with an interest-rate rule following the Taylor principle at the target inflation rate, the zero lower bound (ZLB) on interest rates implies the existence of an unintended PFE low inflation or deflation steady state (and indeed a continuum of PFE paths to it) at which the Taylor principle does not hold (a special case of which is a local interest rate peg at the ZLB). From a PF/RE viewpoint these are all valid solutions. From the adaptive learning perspective, however, they differ in terms of stability. Evans, Guse, and Honkapohja (2008) and Benhabib, Evans, and Honkapohja (2014) show that the targeted steady state is locally stable under learning with a large basin of attraction, while the unintended low inflation/deflation steady state is not locally stable under learning: small deviations from it lead either back to the targeted steady state or into a deflation trap, in which inflation and output fall over time. From a learning viewpoint this deflation trap should be a major concern for policy.6,7
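To see where the two steady states come from, here is a toy steady-state calculation (illustrative parameters, not the calibration in the cited papers). Perfect-foresight steady states must satisfy the Fisher equation R(π) = r* + π, where R(π) is the Taylor rule truncated at zero.

```python
r_star, pi_target, phi = 0.02, 0.02, 1.5

def taylor_rule(pi):
    # Taylor principle (phi > 1) around the target, truncated at the ZLB
    return max(0.0, r_star + pi_target + phi * (pi - pi_target))

def fisher_gap(pi):
    # zero exactly at a steady state: R(pi) = r_star + pi
    return taylor_rule(pi) - (r_star + pi)

print(fisher_gap(pi_target))   # 0.0: the intended steady state at the target
print(fisher_gap(-r_star))     # 0.0: the unintended deflation steady state,
                               # where R = 0 forces pi = -r_star
```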

Finally, let us return to Cochrane (2015). Cochrane points out that at the ZLB peg there has been low but relatively steady (or gently declining) inflation in the US, rather than a serious deflationary spiral. This point echoes Jim Bullard’s concern in Bullard (2010) about the adaptive learning instability result: we effectively have an interest rate peg at the ZLB but we seem to have a fairly stable inflation rate, so does this indicate that the learning literature may here be on the wrong track?

This issue is addressed by Evans, Honkapohja, and Mitra (2015) (EHM2015). They first point out that from a policy viewpoint the major concern at the ZLB has not been low inflation or deflation per se. Instead it is its association with low levels of aggregate output, high levels of unemployment and a more general stagnation. However, the deflation steady state at the ZLB in the NK model has virtually the same level of aggregate output as the targeted steady state. The PFE at the ZLB interest rate peg is not a low-output equilibrium, and if we were in that equilibrium there would not be the concern that policy-makers have shown. (Temporary discount rate or credit market shocks can, of course, lead to recession at the ZLB, but their low-output effects vanish as soon as the shocks do.)

In EHM2015 steady mild deflation is consistent with low output and stagnation at the ZLB.8 They note that many commentators have remarked that the behavior of the NK Phillips relation is different from standard theory at very low output levels. EHM2015 therefore imposes lower bounds on inflation and consumption, which can become relevant when agents become sufficiently pessimistic. If the inflation lower bound is below the unintended low steady state inflation rate, a third “stagnation” steady state is created at the ZLB. The stagnation steady state, like the targeted steady state, is locally stable under learning, and the economy is drawn to it when output and inflation expectations are too pessimistic. A large temporary fiscal stimulus can dislodge the economy from the stagnation trap, and a smaller stimulus can be sufficient if applied earlier. Raising interest rates does not help in the stagnation state and at an early stage it can push the economy into the stagnation trap.

In summary, the learning approach argues forcefully against the neo-Fisherian view.

Footnotes

1With infinitely-lived agents there are several natural implementations of optimizing decision rules, including short-horizon Euler-equation or shadow-price learning approaches (see, e.g., Evans and Honkapohja (2006) and Evans and McGough (2015a)) and the anticipated utility or infinite-horizon approaches of Preston (2005) and Eusepi and Preston (2010).

2An additional advantage of using learning is that learning dynamics give expanded scope for fitting the data as well as explaining experimental findings.

3The TE map is the basis for the map at the core of any specified learning scheme, which in turn determines the associated stability conditions.

4There are also connections to both the infinite-horizon learning approach to anticipated policy developed in Evans, Honkapohja, and Mitra (2009) and the eductive stability framework in Evans, Guesnerie, and McGough (2015).

5This point is repeated in Section 6.4 of Cochrane (2015): “The main point: such models presume that the Fed induces instability in an otherwise stable economy, a non-credible off-equilibrium threat to hyperinflate the economy for all but one chosen equilibrium.”

6And the risk of sinking into deflation clearly has been a major concern for policymakers in the US, during and following both the 2001 recession and the 2007 - 2009 recession. It has remained a concern in Europe and Japan, as it was in Japan during the 1990s.

7Experimental work with stylized NK economies has found that entering deflation traps is a real possibility. See Hommes, Massaro, and Salle (2015).

8See also Evans (2013) for a partial and less general version of this argument.

References

Benhabib, J., G. W. Evans, and S. Honkapohja (2014): “Liquidity Traps and Expectation Dynamics: Fiscal Stimulus or Fiscal Austerity?,” Journal of Economic Dynamics and Control, 45, 220—238.

Benhabib, J., S. Schmitt-Grohe, and M. Uribe (2001): “The Perils of Taylor Rules,” Journal of Economic Theory, 96, 40—69.

Bray, M., and N. Savin (1986): “Rational Expectations Equilibria, Learning, and Model Specification,” Econometrica, 54, 1129—1160.

Bullard, J. (2010): “Seven Faces of ‘The Peril’,” Federal Reserve Bank of St. Louis Review, 92, 339—352.

Cochrane, J. H. (2009): “Can Learnability Save New Keynesian Models?,” Journal of Monetary Economics, 56, 1109—1113.

_______ (2015): “Do Higher Interest Rates Raise or Lower Inflation?,” Working paper, University of Chicago Booth School of Business.

Dixon, H., and N. Rankin (eds.) (1995): The New Macroeconomics: Imperfect Markets and Policy Effectiveness. Cambridge University Press, Cambridge UK.

Eusepi, S., and B. Preston (2010): “Central Bank Communication and Expectations Stabilization,” American Economic Journal: Macroeconomics, 2, 235—271.

Evans, G.W. (2013): “The Stagnation Regime of the New Keynesian Model and Recent US Policy,” in Sargent and Vilmunen (2013), chap. 4.

Evans, G. W., R. Guesnerie, and B. McGough (2015): “Eductive Stability in Real Business Cycle Models,” mimeo.

Evans, G. W., E. Guse, and S. Honkapohja (2008): “Liquidity Traps, Learning and Stagnation,” European Economic Review, 52, 1438—1463.

Evans, G. W., and S. Honkapohja (2001): Learning and Expectations in Macroeconomics. Princeton University Press, Princeton, New Jersey.

_______ (2003): “Expectations and the Stability Problem for Optimal Monetary Policies,” Review of Economic Studies, 70, 807—824.

_______ (2006): “Monetary Policy, Expectations and Commitment,” Scandinavian Journal of Economics, 108, 15—38.

Evans, G. W., S. Honkapohja, and K. Mitra (2009): “Anticipated Fiscal Policy and Learning,” Journal of Monetary Economics, 56, 930—953.

_______ (2015): “Expectations, Stagnation and Fiscal Policy,” Working paper, University of Oregon.

Evans, G. W., and B. McGough (2015a): “Learning to Optimize,” mimeo, University of Oregon.

_______ (2015b): “Observability and Equilibrium Selection,” mimeo, University of Oregon.

Evans, G. W., and G. Ramey (1992): “Expectation Calculation and Macroeconomic Dynamics,” American Economic Review, 82, 207—224.

_______ (1995): “Expectation Calculation, Hyperinflation and Currency Collapse,” in Dixon and Rankin (1995), chap. 15, pp. 307—336.

_______ (1998): “Calculation, Adaptation and Rational Expectations,” Macroeconomic Dynamics, 2, 156—182.

Garcia-Schmidt, M., and M. Woodford (2015): “Are Low Interest Rates Deflationary? A Paradox of Perfect Foresight Analysis,” Working paper, Columbia University.

Hicks, J. R. (1946): Value and Capital, Second edition. Oxford University Press, Oxford UK.

Hommes, C., D. Massaro, and I. Salle (2015): “Monetary and Fiscal Policy Design at the Zero Lower Bound: Evidence from the Lab,” mimeo, CeNDEF, University of Amsterdam.

Howitt, P. (1992): “Interest Rate Control and Nonconvergence to Rational Expectations,” Journal of Political Economy, 100, 776—800.

Lucas, Jr., R. E. (1986): “Adaptive Behavior and Economic Theory,” Journal of Business, Supplement, 59, S401—S426.

Marcet, A., and T. J. Sargent (1989): “Convergence of Least-Squares Learning Mechanisms in Self-Referential Linear Stochastic Models,” Journal of Economic Theory, 48, 337—368.

McCallum, B. T. (2009): “Inflation Determination with Taylor Rules: Is New-Keynesian Analysis Critically Flawed?,” Journal of Monetary Economics, 56, 1101—1108.

Preston, B. (2005): “Learning about Monetary Policy Rules when Long- Horizon Expectations Matter,” International Journal of Central Banking, 1, 81—126.

Sargent, T. J., and J. Vilmunen (eds.) (2013): Macroeconomics at the Service of Public Policy. Oxford University Press.

Sunday, December 20, 2015

'The FTPL Version of the Neo-Fisherian Proposition'

I've never paid much attention to the fiscal theory of the price level:

The FTPL version of the Neo-Fisherian proposition: The Neo-Fisherian doctrine is the idea that a permanent increase in a flat nominal interest rate path will (eventually) raise the inflation rate. It is then suggested that current below target inflation is a consequence of fixing rates at their lower bound, and rates should be raised to increase inflation. David Andolfatto says there are two versions of this doctrine. The first he associates with the work of Stephanie Schmitt-Grohe and Martin Uribe, which I discussed here. He, like me, is not sold on this interpretation, for I think much the same reason. ... But he favours a different interpretation, based on the Fiscal Theory of the Price Level (FTPL).

Let me first briefly outline my own interpretation of the FTPL. This looks at the possibility of a fiscal regime where there is no attempt to stabilize debt. Government spending and taxes are set independently of the level or sustainability of government debt. The conventional and quite natural response to the possibility of that regime is to say it is unstable. But there is another possibility, which is that monetary policy stabilizes debt. Again a natural response would be to say that such a monetary policy regime is bound to be inconsistent with hitting an inflation target in the long run, but that is incorrect. ...

A constant nominal interest rate policy is normally thought to be indeterminate because the price level is not pinned down, even though the expected level of inflation is. In the FTPL, the price level is pinned down by the need for the government budget to balance at arbitrary and constant levels for taxes and spending. ...
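The equation doing the work here, in its generic textbook form (not Andolfatto's or Wren-Lewis's specific setup, and ignoring time-varying discount rates), is the government's valuation identity

```latex
\frac{B_{t-1}}{P_t} \;=\; \mathbb{E}_t \sum_{j=0}^{\infty} \beta^{j} s_{t+j},
```

where B_{t-1} is nominal government debt outstanding and s is the real primary surplus. If the path of surpluses on the right-hand side is fixed exogenously, as in the fiscal regime described above, the price level P_t is the only free variable left to make the identity hold, which is how the FTPL pins it down even under a constant nominal interest rate.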

I have a ... serious problem with this FTPL interpretation in the current environment. The belief that people would need to have for the FTPL to be relevant - that the government would not react to higher deficits by reducing government spending or raising taxes - does not seem to be credible, given that austerity is all about them doing exactly this despite being in a recession. As a result, I still find the Neo-Fisherian proposition, with either interpretation, somewhat unrealistic.

Thursday, December 17, 2015

'Sticky' Sales'

Are prices sticky?:

“Sticky” sales, by Phil Davies, The Region, FRB Minneapolis: Sales are ubiquitous in the U.S. economy. Black Friday, President’s Day, Mother’s Day, the Fourth of July; almost any occasion is cause for price cutting, accompanied by prominent signage, balloons and ads in traditional and social media to make the savings known far and wide. Retailers also put on sales ostensibly to clear out inventory, celebrate being on the sidewalk and go out of business.
Economists are interested in sales, not because they want cheap stuff (well, maybe they’re as partial to a deal as anyone), but because the role of sales has a bearing on a question central to macroeconomics: How flexible are prices? Price flexibility—how quickly prices adjust to changes in costs or demand—is crucial to understanding how shocks of any kind, including fiscal and monetary policy, affect economic performance.
Retail prices rise and fall frequently as merchants put items on sale and then restore the regular, or shelf, price. Indeed, the bulk of weekly and monthly variance in individual prices is due to sales promotions, not changes in regular prices. But there’s a lively debate in economics about the true flexibility of sale prices, from a macro perspective; for all their seeming fluidity, how readily do sales respond to changes in underlying costs and unexpected events that alter economic conditions?
How sale prices respond to wholesale cost shocks and broader macroeconomic shocks such as an increase in government spending or monetary policy stimulus, or a decrease in global aggregate demand, affects the flexibility of aggregate retail prices, with profound implications for monetary policy and the accuracy of macroeconomic models that guide policymaking.
Monetary policy as a tool for influencing the economy depends on sticky prices—the idea that prices don’t adjust instantly to shifts in demand caused by changes in money supply. If they did, an increase in demand for goods and services due to monetary easing would trigger an immediate price rise, suppressing demand and leaving economic output and employment unchanged. Thus, the stickier are prices, the more effective is monetary policy in modulating economic growth in the short and medium run. (Economists generally agree that money is neutral in the long run; that is, over a long enough period of time, prices are actually quite flexible, so monetary policy has no long-run effect on the real economy.)
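A cartoon quantity-theory illustration of that logic (a sketch, not from the article): with velocity fixed, a money-supply increase moves only prices if they are flexible, and only output if they are stuck.

```python
# Cartoon economy: M * V = P * Y with V fixed at 1.
M0, M1 = 100.0, 110.0        # a 10% monetary expansion
Y_potential = 100.0

# Fully flexible prices: P jumps immediately, output never moves.
P_flex = M1 / Y_potential
Y_flex = M1 / P_flex         # = 100.0, unchanged

# Fully sticky prices: P stays at its old level for a while, so the
# extra money shows up as a real short-run expansion instead.
P_sticky = M0 / Y_potential
Y_sticky = M1 / P_sticky     # = 110.0

print(Y_flex, Y_sticky)
```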
Recent work by Ben Malin, a senior research economist at the Minneapolis Fed, provides insight into the import of temporary sales for price stickiness and thus monetary policy. In “Informational Rigidities and the Stickiness of Temporary Sales” (Minneapolis Fed Staff Report 513), Malin uses a rich data set of prices from a U.S. retail chain to investigate how retail prices adjust in response to wholesale price increases and other economic shocks. Joining Malin in the research are economists Emi Nakamura and Jón Steinsson of Columbia University, and marketing professors Eric Anderson and Duncan Simester of Northwestern University and MIT, respectively.
Surprisingly, the authors find no change in the frequency and depth of price cuts in response to shocks. Their analysis, which also taps micro price data underlying the consumer price index to look at how sales at a representative sample of U.S. retailers respond to booms and downturns, shows that merchants rely exclusively on regular prices to adapt to cost changes and evolving economic conditions. The research “supports the view that the behavior of regular prices is what matters for aggregate price flexibility,” Malin said in an interview. ...

Wednesday, December 16, 2015

'The Methodology of Empirical Macroeconomics'

Brad DeLong:

Must-Read: Kevin Hoover: The Methodology of Empirical Macroeconomics: The combination of representative-agent modeling and utility-based “microfoundations” was always a game of intellectual Three-Card Monte. Why do you ask? Why don’t we fund sociologists to investigate for what reasons–other than being almost guaranteed to produce conclusions ideologically-pleasing to some–it has flourished for a generation in spite of having no empirical support and no theoretical coherence?
Kevin Hoover: The Methodology of Empirical Macroeconomics: “Given what we know about representative-agent models…
…there is not the slightest reason for us to think that the conditions under which they should work are fulfilled. The claim that representative-agent models provide microfoundations succeeds only when we steadfastly avoid the fact that representative-agent models are just as aggregative as old-fashioned Keynesian macroeconometric models. They do not solve the problem of aggregation; rather they assume that it can be ignored. ...

Tuesday, December 01, 2015

'The Centrality of Policy to How Long Recessions Last'

Simon Wren-Lewis:

The centrality of policy to how long recessions last: Paul Krugman reminds us that one of the most misguided questions in macroeconomics is ‘are business cycles self-correcting’. ... That answer ... only holds for a particular set of monetary policy rules (plus assumptions about fiscal policy).
It is very easy to see this. Suppose monetary policy is so astute that it knows perfectly all the shocks that hit the economy, and how interest rates influence that economy. In that case absent the Zero Lower Bound the business cycle would disappear, whatever the speed of price adjustment. Or... As Nick Rowe points out, if you had a really bad monetary policy recessions could last forever.
A better answer to both questions (self-correction and how long business cycles last) is it all depends on monetary policy. Actually even that answer makes an implicit assumption, which is that there is no fiscal (de)stabilisation. The correct answer to both questions is that it depends first and foremost on policy. The speed of price adjustment only becomes central for particular policy rules.
So why do many economists (including occasionally some macroeconomists) get this wrong? ... It could be just an unfortunate accident. We are so used to teaching about fixed money supply rules (or in my case Taylor rules), that we can take those rules for granted. But there is also a more interesting answer. To some economists with a particular point of view, the idea that getting policy right might be essential to whether the economy self-corrects from shocks is troubling. ...
Focusing on this logic alone can lead to big mistakes. I have heard a number of times good economists say that in 2015 we can no longer be in a demand deficient recession, because price adjustment cannot be that slow. This mistake happens because they take good policy for granted..., with sub-optimal policy the length of recessions has much more to do with that bad policy than it has to do with the speed of price adjustment.
Just how misleading a focus on the speed of price adjustment can be becomes evident at the Zero Lower Bound. With nominal interest rates stuck at zero, rapid price adjustment will make the recession worse, not better. Price rigidity may be a condition for the existence of business cycles, but it can have very little to do with their duration.

And as I noted in my last column, the evidence is mounting that poor policy can do more than slow a recovery; it can also permanently reduce our productive capacity.

Saturday, November 28, 2015

'Demand, Supply, and Macroeconomic Models'

Paul Krugman on macroeconomic models:

Demand, Supply, and Macroeconomic Models: I’m supposed to do a presentation next week about “shifts in economic models,” which has me trying to systematize my thought about what the crisis and aftermath have and haven’t changed my understanding of macroeconomics. And it seems to me that there is an important theme here: it’s the supply side, stupid. ...

Friday, November 20, 2015

'Some Big Changes in Macroeconomic Thinking from Lawrence Summers'

Adam Posen:

Some Big Changes in Macroeconomic Thinking from Lawrence Summers: ...At a truly fascinating and intense conference on the global productivity slowdown we hosted earlier this week, Lawrence Summers put forward some newly and forcefully formulated challenges to the macroeconomic status quo in his keynote speech. [pdf] ...
The first point Summers raised ... pointed out that a major global trend over the last few decades has been the substantial disemployment—or withdrawal from the workforce—of relatively unskilled workers. ... In other words, it is a real puzzle to observe simultaneously multi-year trends of rising non-employment of low-skilled workers and declining measured productivity growth. ...
Another related major challenge to standard macroeconomics Summers put forward ... came in response to a question about whether he exaggerated the displacement of workers by technology. ... Summers bravely noted that if we suppose the “simple” non-economists who thought technology could destroy jobs without creating replacements in fact were right after all, then the world in some aspects would look a lot like it actually does today...
The third challenge ... Summers raised is perhaps the most profound... In a working paper the Institute just released, Olivier Blanchard, Eugenio Cerutti, and Summers examine essentially all of the recessions in the OECD economies since the 1960s, and find strong evidence that in most cases the level of GDP is lower five to ten years afterward than any prerecession forecast or trend would have predicted. In other words, to quote Summers’ speech..., “the classic model of cyclical fluctuations, that assume that they take place around the given trend is not the right model to begin the study of the business cycle. And [therefore]…the preoccupation of macroeconomics should be on lower frequency fluctuations that have consequences over long periods of time [that is, recessions and their aftermath].”
I have a lot of sympathy for this view. ... The very language we use to speak of business cycles, of trend growth rates, of recoveries to those perhaps non-stationary trends, and so on—which reflects the underlying mental framework of most macroeconomists—would have to be rethought.
Productivity-based growth requires disruption in economic thinking just as it does in the real world.

The full text explains these points in more detail (I left out one point on the measurement of productivity).

Thursday, November 05, 2015

'Public Investment: has George Started listening to Economists?'

[Running very late today, so three quick posts to get something up besides links -- I probably chose this one because my name was mentioned. See the sidebar for more new links.]

Simon Wren-Lewis:

Public investment: has George started listening to economists?: I have in the past wondered just how large the majority among academic economists would be for additional public investment right now. The economic case for investing when the cost of borrowing is so cheap (particularly when the government can issue 30 year fixed interest debt) is overwhelming. I had guessed the majority would be pretty large just by personal observation. Economists who are not known for their anti-austerity views, like Ken Rogoff, tend to support additional public investment.
Thanks to a piece by Mark Thoma I now have some evidence. His article is actually about ideological bias in economics, and is well worth reading on that account, but it uses results from the ChicagoBooth survey of leading US economists. I have used this survey’s results on the impact of fiscal policy before, but they have asked a similar question about public investment. It is
“Because the US has underspent on new projects, maintenance, or both, the federal government has an opportunity to increase average incomes by spending more on roads, railways, bridges and airports.”
Not one of the nearly 50 economists surveyed disagreed with this statement. What was interesting was that the economists were under no illusions that the political process in the US would be such that some bad projects would be undertaken as a result (see the follow-up question). Despite this, they still thought increasing investment would raise incomes.
The case for additional public investment is as strong in the UK (and Germany) as it is in the US. Yet since 2010 it appeared the government thought otherwise. ...
However since the election George Osborne seems to have had a change of heart. ...

'Business Cycle Theory vs. Growth Theory'

Nick Rowe:

Business cycle theory vs growth theory: Macroeconomics is divided into (short run) business cycle theory and (long run) growth theory.
Those of us who do business cycle theory have a bit of an inferiority complex (though you might not know it from listening to us argue). Because growth theory seems to be so much more important. Where would you rather live: in a rich country during a recession; or in a poor country during a boom? (Watch the flows of people voting or attempting to vote with their feet if you are not sure how most people would answer.) In the long run, productivity is about the only thing that matters.
We would feel better about ourselves, and what we are studying and teaching, if we could argue that taming the business cycle would improve long run growth.
Notice that I have deliberately personalised this question to make you aware of my personal bias. Macroeconomists like me, who do short run business cycle theory, want to think that what we are doing is important. We want to argue that taming the business cycle would improve long run growth.
(The Great Recession was great for my sort of macro; we haven't had so much fun since the 1970's. The Great Moderation was a boring time for macroeconomists like me, when we seemed to be victims of our own success; all the growth theorists were stealing our limelight.)
Why might business cycles lower the long run growth rate? ...

Tuesday, November 03, 2015

Summers: Advanced Economies are So Sick

Larry Summers:

Advanced economies are so sick we need a new way to think about them: ...Hysteresis Effects: Blanchard, Cerutti, and I look at a sample of over 100 recessions from industrial countries over the last 50 years and examine their impact on long run output levels in an effort to understand what Blanchard and I had earlier called hysteresis effects. We find that in the vast majority of cases output never returns to previous trends. Indeed there appear to be more cases where recessions reduce the subsequent growth of output than where output returns to trend. In other words “super hysteresis,” to use Larry Ball’s term, is more frequent than “no hysteresis.” ...
In subsequent work Antonio Fatas and I have looked at the impact of fiscal policy surprises on long run output and long run output forecasts using a methodology pioneered by Blanchard and Leigh. ... We find that fiscal policy changes have large continuing effects on levels of output suggesting the importance of hysteresis. ...
Towards a New Macroeconomics: My separate comments in the volume develop an idea I have pushed with little success for a long time. Standard new Keynesian macroeconomics essentially abstracts away from most of what is important in macroeconomics. To an even greater extent this is true of the DSGE (dynamic stochastic general equilibrium) models that are the workhorse of central bank staffs and much practically oriented academic work.
Why? New Keynesian models imply that stabilization policies cannot affect the average level of output over time and that the only effect policy can have is on the amplitude of economic fluctuations, not on the level of output. This assumption is problematic...
As macroeconomics was transformed in response to the Depression of the 1930s and the inflation of the 1970s, another 40 years later it should again be transformed in response to stagnation in the industrial world. Maybe we can call it the Keynesian New Economics.

Friday, October 30, 2015

'The Missing Lowflation Revolution'

Antonio Fatás:

The missing lowflation revolution: It will soon be eight years since the US Federal Reserve decided to bring its interest rate down to 0%. Other central banks have spent a similar number of years (or much longer in the case of Japan) stuck at the zero lower bound. In these eight years central banks have used all their available tools to increase inflation closer to their target and boost growth, with limited success. GDP growth has been weak or anemic, and there is very little hope that economies will ever go back to their pre-crisis trends.
Some of these trends have challenged the traditional view of academic economists and policy makers about how an economy works. ...
My own sense is that the view among academics and policy makers is not changing fast enough and some are just assuming that this would be a one-time event that will not be repeated in the future (even if we are still not out of the current event!).
The comparison with the 70s, when stagflation produced a large change in the way academics and policy makers thought about their models and about the framework for monetary policy, is striking. During those years a high inflation and low growth environment created a revolution among academics (moving away from the simple Phillips Curve) and policy makers (switching to anti-inflationary and independent central banks). How many more years of zero interest rates will it take to witness a similar change in our economic analysis?

Saturday, September 26, 2015

''A Few Less Obvious Answers'' on What is Wrong with Macroeconomics

From an interview with Olivier Blanchard:

...IMF Survey: In pushing the envelope, you also hosted three major Rethinking Macroeconomics conferences. What were the key insights and what are the key concerns on the macroeconomic front? 
Blanchard: Let me start with the obvious answer: That mainstream macroeconomics had taken the financial system for granted. The typical macro treatment of finance was a set of arbitrage equations, under the assumption that we did not need to look at who was doing what on Wall Street. That turned out to be badly wrong.
But let me give you a few less obvious answers:
The financial crisis raises a potentially existential crisis for macroeconomics. Practical macro is based on the assumption that there are fairly stable aggregate relations, so we do not need to keep track of each individual, firm, or financial institution—that we do not need to understand the details of the micro plumbing. We have learned that the plumbing, especially the financial plumbing, matters: the same aggregates can hide serious macro problems. How do we do macro then?
As a result of the crisis, a hundred intellectual flowers are blooming. Some are very old flowers: Hyman Minsky’s financial instability hypothesis. Kaldorian models of growth and inequality. Some propositions that would have been considered anathema in the past are being proposed by "serious" economists: For example, monetary financing of the fiscal deficit. Some fundamental assumptions are being challenged, for example the clean separation between cycles and trends: Hysteresis is making a comeback. Some of the econometric tools, based on a vision of the world as being stationary around a trend, are being challenged. This is all for the best.
Finally, there is a clear swing of the pendulum away from markets towards government intervention, be it macro prudential tools, capital controls, etc. Most macroeconomists are now solidly in a second best world. But this shift is happening with a twist—that is, with much skepticism about the efficiency of government intervention. ...

'Economics: What Went Right'

Paul Krugman returns to a familiar theme:

Economics: What Went Right: ...I’m at EconEd; here are my slides for later today. The theme of my talk is something I’ve emphasized a lot over the past few years: basic macroeconomics has actually worked remarkably well in the post-crisis world, with those of us who took our Hicks seriously calling the big stuff — the effects of monetary and fiscal policy — right, and those who went with their gut getting it all wrong. ...
One thing I do try is to concede that one piece of the conventional story hasn’t worked that well, namely the Phillips curve, where the “clockwise spirals” of previous protracted large output gaps haven’t materialized. Maybe it’s about what happens at very low inflation rates.
What’s notable about the Fed’s urge to raise rates, however, is that Fed officials, including Janet Yellen, are acting as if they have high confidence in their models of inflation dynamics – which is the one thing we really haven’t done well at recently. I really fear that we’re looking at incestuous amplification here.

Agree about the uncertainty over inflation dynamics, but fear Fed officials will interpret it as risks on the upside that must be nullified through interest rate hikes. As for the Phillips curve, here's a graph from his talk:

[Phillips curve graph from the talk]

As Krugman says, "Maybe it’s about what happens at very low inflation rates." I would add that the combination of the zero bound, low inflation, and downward wage rigidity may be able to explain the change in the Phillips curve -- I'm not quite ready to give up yet.
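
To see how that combination could work, here is a small simulation sketch (my own construction with made-up coefficients -- nothing from Krugman's talk or an estimated model): when downward nominal wage rigidity truncates wage cuts, the fitted Phillips curve slope flattens as trend inflation approaches zero.

```python
import numpy as np

# Toy illustration: downward nominal wage rigidity truncates wage cuts,
# so the measured unemployment/wage-inflation slope flattens when trend
# inflation is low. All coefficients are invented for the example.

rng = np.random.default_rng(0)
u = rng.uniform(4.0, 12.0, 5000)            # unemployment rates, percent

def fitted_slope(trend_inflation):
    # Notional (unconstrained) wage growth: a standard Phillips relation
    notional = trend_inflation + 2.0 - 0.5 * u + rng.normal(0.0, 1.0, u.size)
    actual = np.maximum(notional, 0.0)      # workers resist nominal wage cuts
    return np.polyfit(u, actual, 1)[0]      # OLS slope of wage growth on u

print("slope with 5% trend inflation:", round(fitted_slope(5.0), 2))
print("slope with 0% trend inflation:", round(fitted_slope(0.0), 2))
# Near zero inflation the floor binds often and the fitted curve flattens.
```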

More generally, estimating inflation dynamics has been far from successful. For example, in many VAR models (a widely used empirical specification for establishing relationships among macroeconomic series), a shock to the federal funds rate often causes prices to go up (theory says they should go down). This can be overcome somewhat by including commodity prices in the model. The idea is that when the Fed expects inflation to go up it raises the federal funds rate, and since the policy does not completely eliminate the inflation, the data will show a positive correlation between the federal funds rate and inflation. Commodity prices are thought to embody and be sensitive to future expected inflation, so including this variable helps to solve the "price puzzle," as it is known. Even so, the results are highly sensitive to specification, and when you work with these models regularly you come away believing that the estimated price dynamics are not very good at all.
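
For readers who want to see the exercise concretely, here is a minimal sketch using statsmodels; the data file and column names (ip, cpi, pcom, ffr) are hypothetical stand-ins for the usual monthly series, and the recursive (Cholesky) ordering is just the conventional choice, not the only one.

```python
import pandas as pd
from statsmodels.tsa.api import VAR

# Hypothetical monthly data: log industrial production (ip), log CPI (cpi),
# log commodity prices (pcom), and the federal funds rate (ffr).
df = pd.read_csv("macro_monthly.csv", index_col=0, parse_dates=True)

# Baseline 3-variable VAR often exhibits the "price puzzle": the impulse
# response of prices to a funds-rate shock comes out positive.
base = VAR(df[["ip", "cpi", "ffr"]]).fit(maxlags=12, ic="aic")
irf_base = base.irf(48)  # impulse responses out 48 months

# Adding commodity prices, ordered before the funds rate (orthogonalized
# IRFs use a Cholesky factorization in column order), typically
# attenuates the puzzle.
aug = VAR(df[["ip", "cpi", "pcom", "ffr"]]).fit(maxlags=12, ic="aic")
irf_aug = aug.irf(48)

# Compare the response of cpi to an orthogonalized ffr shock:
irf_base.plot(orth=True, impulse="ffr", response="cpi")
irf_aug.plot(orth=True, impulse="ffr", response="cpi")
```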

But the Fed must forecast in order to do policy. There are lags (though I've argued they are likely shorter than common wisdom suggests), and the Fed must act before a clear picture emerges. The question is how the Fed should react to such uncertainty about its inflation forecasts, and to me -- given the corresponding uncertainties about the state of the labor market and the asymmetric nature of the costs of mistakes about inflation and unemployment (plus the distributional issues -- who gets hurt by each mistake?) -- it counsels patience rather than urgency on the inflation front.

Tuesday, September 15, 2015

'Keynesianism Explained'

Paul Krugman:

Keynesianism Explained: Attacks on Keynesians in general, and on me in particular, rely heavily on an army of straw men — on knocking down claims about what people like me have predicted or asserted that have nothing to do with what we’ve actually said. But maybe we (or at least I) have been remiss, failing to offer a simple explanation of what it’s all about. I don’t mean the models; I mean the policy implications.
So here’s an attempt at a quick summary, followed by a sampling of typical bogus claims.
I would summarize the Keynesian view in terms of four points:
1. Economies sometimes produce much less than they could, and employ many fewer workers than they should, because there just isn’t enough spending. Such episodes can happen for a variety of reasons; the question is how to respond.
2. There are normally forces that tend to push the economy back toward full employment. But they work slowly; a hands-off policy toward depressed economies means accepting a long, unnecessary period of pain.
3. It is often possible to drastically shorten this period of pain and greatly reduce the human and financial losses by “printing money”, using the central bank’s power of currency creation to push interest rates down.
4. Sometimes, however, monetary policy loses its effectiveness, especially when rates are close to zero. In that case temporary deficit spending can provide a useful boost. And conversely, fiscal austerity in a depressed economy imposes large economic losses.
Is this a complicated, convoluted doctrine? ...
But strange things happen in the minds of critics. Again and again we see the following bogus claims about what Keynesians believe:
B1: Any economic recovery, no matter how slow and how delayed, proves Keynesian economics wrong. See [2] above for why that’s illiterate.
B2: Keynesians believe that printing money solves all problems. See [3]: printing money can solve one specific problem, an economy operating far below capacity. Nobody said that it can conjure up higher productivity, or cure the common cold.
B3: Keynesians always favor deficit spending, under all conditions. See [4]: The case for fiscal stimulus is quite restrictive, requiring both a depressed economy and severe limits to monetary policy. That just happens to be the world we’ve been living in lately.
I have no illusions that saying this obvious stuff will stop the usual suspects from engaging in the usual bogosity. But maybe this will help others respond when they do.

I would add:

5. Keynesians are not opposed to supply-side, growth-enhancing policy. The types of taxes that are imposed matter, entrepreneurial activity should be encouraged, and so on. But these arguments should not be used as cover for redistribution of income to the wealthy through tax cuts and other means, or as a means of arguing for cuts to important social service programs. Nor should they be used only to support tax cuts. Infrastructure spending is important for growth, an educated, healthy workforce is more productive, etc., etc. Economic growth is about much more than tax cuts for wealthy political donors.

On the other side, I would have added a point to B3:

B3a: Keynesians do not favor large government. They believe that deficits should be used to stimulate the economy in severe recessions (when monetary policy alone is not enough), but they also believe that the deficits should be paid for during good times (shave the peaks to fill the troughs and stabilize the path of GDP and employment). We haven't been very good at the pay-for-it-during-good-times part, but Democrats can hardly be blamed for that (see tax cuts for the wealthy for openers).

Anything else, e.g. perhaps something like "Keynesians do not believe that helping people in need undermines their desire to work"?

Thursday, August 27, 2015

'The Day Macroeconomics Changed'

Simon Wren-Lewis:

The day macroeconomics changed: It is of course ludicrous, but who cares. The day of the Boston Fed conference in 1978 is fast taking on a symbolic significance. It is the day that Lucas and Sargent changed how macroeconomics was done. Or, if you are Paul Romer, it is the day that the old guard spurned the ideas of the newcomers, and ensured we had a New Classical revolution in macro rather than a New Classical evolution. Or if you are Ray Fair..., who was at the conference, it is the day that macroeconomics started to go wrong.
Ray Fair is a bit of a hero of mine. ...
I agree with Ray Fair that what he calls Cowles Commission (CC) type models, and I call Structural Econometric Model (SEM) type models, together with the single equation econometric estimation that lies behind them, still have a lot to offer, and that academic macro should not have turned its back on them. Having spent the last fifteen years working with DSGE models, I am more positive about their role than Fair is. Unlike Fair, I want “more bells and whistles on DSGE models”. I also disagree about rational expectations...
Three years ago, when Andy Haldane suggested that DSGE models were partly to blame for the financial crisis, I wrote a post that was critical of Haldane. What I thought then, and continue to believe, is that the Bank had the information and resources to know what was happening to bank leverage, and it should not be using DSGE models as an excuse for not being more public about their concerns at the time.
However, if we broaden this out from the Bank to the wider academic community, I think he has a legitimate point. ...
What about the claim that only internally consistent DSGE models can give reliable policy advice? For another project, I have been rereading an AEJ Macro paper written in 2008 by Chari et al, where they argue that New Keynesian models are not yet useful for policy analysis because they are not properly microfounded. They write “One tradition, which we prefer, is to keep the model very simple, keep the number of parameters small and well-motivated by micro facts, and put up with the reality that such a model neither can nor should fit most aspects of the data. Such a model can still be very useful in clarifying how to think about policy.” That is where you end up if you take a purist view about internal consistency, the Lucas critique and all that. It in essence amounts to the following approach: if I cannot understand something, it is best to assume it does not exist.

Wednesday, August 26, 2015

Ray Fair: The Future of Macro

Ray Fair:

The Future of Macro: There is an interesting set of recent blogs--- Paul Romer 1, Paul Romer 2, Brad DeLong, Paul Krugman, Simon Wren-Lewis, and Robert Waldmann---on the history of macro beginning with the 1978 Boston Fed conference, with Lucas and Sargent versus Solow. As Romer notes, I was at this conference and presented a 97-equation model. This model was in the Cowles Commission (CC) tradition, which, as the blogs note, quickly went out of fashion after 1978. (In the blogs, models in the CC tradition are generally called simulation models or structural econometric models or old fashioned models. Below I will call them CC models.)
I will not weigh in on who was responsible for what. Instead, I want to focus on what future direction macro research might take. There is unhappiness in the blogs, to varying degrees, with all three types of models: DSGE, VAR, CC. Also, Wren-Lewis points out that while other areas of economics have become more empirical over time, macroeconomics has become less so. The aim is internal theoretical consistency rather than the ability to track the data.
I am one of the few academics who has continued to work with CC models. They were rejected for basically three reasons: they do not assume rational expectations (RE), they are not identified, and the theory behind them is ad hoc. This sounds serious, but I think it is in fact not. ...

He goes on to explain why. He concludes with:

... What does this imply about the best course for future research? I don't get a sense from the blog discussions that either the DSGE methodology or the VAR methodology is the way to go. Of course, no one seems to like the CC methodology either, but, as I argue above, I think it has been dismissed too easily. I have three recent methodological papers arguing for its use: Has Macro Progressed?, Reflections on Macroeconometric Modeling, and Information Limits of Aggregate Data. I also show in Household Wealth and Macroeconomic Activity: 2008--2013 that CC models can be used to examine a number of important questions about the 2008--2009 recession, questions that are hard to answer using DSGE or VAR models.
So my suggestion for future macro research is not more bells and whistles on DSGE models, but work specifying and estimating stochastic equations in the CC tradition. Alternative theories can be tested and hopefully progress can be made on building models that explain the data well. We have much more data now and better techniques than we did in 1978, and we should be able to make progress and bring macroeconomics back to its empirical roots.
For those who want more detail, I have gathered all of my research in macro in one place: Macroeconometric Modeling, November 11, 2013.

Sunday, August 23, 2015

''Young Economists Feel They Have to be Very Cautious''

From an interview of Paul Romer in the WSJ:

...Q: What kind of feedback have you received from colleagues in the profession?

A: I tried these ideas on a few people, and the reaction I basically got was “don’t make waves.” As people have had time to react, I’ve been hearing a bit more from people who appreciate me bringing these issues to the forefront. The most interesting feedback is from young economists who say that they feel that they have to be very cautious, and they don’t want to get somebody cross at them. There’s a concern by young economists that if they deviate from what’s acceptable, they’ll get in trouble. That also seemed to me to be a sign of something that is really wrong. Young people are the ones who often come in and say, “You all have been thinking about this the wrong way, here’s a better way to think about it.”

Q: Are there any areas where research or refinements in methodology have brought us closer to understanding the economy?

A: There was an interesting [2013] Nobel prize in [economics], where they gave the prize to people who generally came to very different conclusions about how financial markets work. Gene Fama ... got it for the efficient markets hypothesis. Robert Shiller ... for the view that these markets are not efficient...

It was striking because usually when you give a prize, it’s because in the sciences, you’ve converged to a consensus. ...

Friday, August 21, 2015

'Scientists Do Not Demonize Dissenters. Nor Do They Worship Heroes.'

Paul Romer's latest entry on "mathiness" in economics ends with:

Reactions to Solow’s Choice: ...Politics maps directly onto our innate moral machinery. Faced with any disagreement, our moral systems respond by classifying people into our in-group and the out-group. They encourage us to be loyal to members of the in-group and hostile to members of the out-group. The leaders of an in-group demand deference and respect. In selecting leaders, we prize unwavering conviction.
Science can’t function with the personalization of disagreement that these reactions encourage. The question of whether Joan Robinson is someone who is admired and respected as a scientist has to be separated from the question about whether she was right that economists could reason about rates of return in a model that does not have an explicit time dimension.
The only in-group versus out-group distinction that matters in science is the one that distinguishes people who can live by the norms of science from those who cannot. Feynman integrity is the marker of an insider.
In this group, it is flexibility that commands respect, not unwavering conviction. Clearly articulated disagreement is encouraged. Anyone’s claim is subject to challenge. Someone who is right about A can be wrong about B.
Scientists do not demonize dissenters. Nor do they worship heroes.

[The reference to Joan Robinson is clarified in the full text.]

Monday, August 17, 2015

Stiglitz: Towards a General Theory of Deep Downturns

This is the abstract, introduction, and final section of a recent paper by Joe Stiglitz on theoretical models of deep depressions (as he notes, it's "an extension of the Presidential Address to the International Economic Association"):

Towards a General Theory of Deep Downturns, by Joseph E. Stiglitz, NBER Working Paper No. 21444, August 2015: Abstract This paper, an extension of the Presidential Address to the International Economic Association, evaluates alternative strands of macro-economics in terms of the three basic questions posed by deep downturns: What is the source of large perturbations? How can we explain the magnitude of volatility? How do we explain persistence? The paper argues that while real business cycles and New Keynesian theories with nominal rigidities may help explain certain historical episodes, alternative strands of New Keynesian economics focusing on financial market imperfections, credit, and real rigidities provide a more convincing interpretation of deep downturns, such as the Great Depression and the Great Recession, giving a more plausible explanation of the origins of downturns, their depth and duration. Since excessive credit expansions have preceded many deep downturns, particularly important is an understanding of finance, the credit creation process, and banking, which in a modern economy are markedly different from the way envisioned in more traditional models.
Introduction The world has been plagued by episodic deep downturns. The crisis that began in 2008 in the United States was the most recent, the deepest and longest in three quarters of a century. It came in spite of alleged “better” knowledge of how our economic system works, and belief among many that we had put economic fluctuations behind us. Our economic leaders touted the achievement of the Great Moderation.[2] As it turned out, belief in those models actually contributed to the crisis. It was the assumption that markets were efficient and self-regulating and that economic actors had the ability and incentives to manage their own risks that had led to the belief that self-regulation was all that was required to ensure that the financial system worked well, and that there was no need to worry about a bubble. The idea that the economy could, through diversification, effectively eliminate risk contributed to complacency — even after it was evident that there had been a bubble. Indeed, even after the bubble broke, Bernanke could boast that the risks were contained.[3] These beliefs were supported by (pre-crisis) DSGE models — models which may have done well in more normal times, but had little to say about crises. Of course, almost any “decent” model would do reasonably well in normal times. And it mattered little if, in normal times, one model did a slightly better job in predicting next quarter’s growth. What matters is predicting — and preventing — crises, episodes in which there is an enormous loss in well-being. These models did not see the crisis coming, and they had given confidence to our policy makers that, so long as inflation was contained — and monetary authorities boasted that they had done this — the economy would perform well. At best, they can be thought of as (borrowing the term from Guzman (2014)) “models of the Great Moderation,” predicting “well” so long as nothing unusual happens. More generally, the DSGE models have done a poor job explaining the actual frequency of crises.[4]
Of course, deep downturns have marked capitalist economies since the beginning. It took enormous hubris to believe that the economic forces which had given rise to crises in the past were either not present, or had been tamed, through sound monetary and fiscal policy.[5] It took even greater hubris given that in many countries conservatives had succeeded in dismantling the regulatory regimes and automatic stabilizers that had helped prevent crises since the Great Depression. It is noteworthy that my teacher, Charles Kindleberger, in his great study of the booms and panics that afflicted market economies over the past several hundred years, had noted similar hubris exhibited in earlier crises (Kindleberger, 1978).
Those who attempted to defend the failed economic models and the policies which were derived from them suggested that no model could (or should) predict well a “once in a hundred year flood.” But it was not just a hundred year flood — crises have become common. It was not just something that had happened to the economy. The crisis was man-made — created by the economic system. Clearly, something is wrong with the models.
Studying crises is important, not just to prevent these calamities and to understand how to respond to them — though I do believe that the same inadequate models that failed to predict the crisis also failed in providing adequate responses. (Although those in the US Administration boast about having prevented another Great Depression, I believe the downturn was certainly far longer, and probably far deeper, than it needed to have been.) I also believe understanding the dynamics of crises can provide us insight into the behavior of our economic system in less extreme times.
This lecture consists of three parts. In the first, I will outline the three basic questions posed by deep downturns. In the second, I will sketch the three alternative approaches that have competed with each other over the past three decades, suggesting that one is a far better basis for future research than the other two. The final section will center on one aspect of that third approach that I believe is crucial — credit. I focus on the capitalist economy as a credit economy, and how viewing it in this way changes our understanding of the financial system and monetary policy. ...

He concludes with:

IV. The crisis in economics The 2008 crisis was not only a crisis in the economy, but it was also a crisis for economics — or at least that should have been the case. As we have noted, the standard models didn’t do very well. The criticism is not just that the models did not anticipate or predict the crisis (even shortly before it occurred); they did not contemplate the possibility of a crisis, or at least a crisis of this sort. Because markets were supposed to be efficient, there weren’t supposed to be bubbles. The shocks to the economy were supposed to be exogenous: this one was created by the market itself. Thus, the standard model said the crisis couldn’t or wouldn’t happen; and the standard model had no insights into what generated it.
Not surprisingly, as we again have noted, the standard models provided inadequate guidance on how to respond. Even after the bubble broke, it was argued that diversification of risk meant that the macroeconomic consequences would be limited. The standard theory also has had little to say about why the downturn has been so prolonged: Years after the onset of the crisis, large parts of the world are operating well below their potential. In some countries and in some dimensions, the downturn is as bad as or worse than the Great Depression. Moreover, there is a risk of significant hysteresis effects from protracted unemployment, especially of youth.
The Real Business Cycle and New Keynesian Theories got off to a bad start. They originated out of work undertaken in the 1970s attempting to reconcile the two seemingly distant branches of economics: macro-economics, centering on explaining the major market failure of unemployment, and microeconomics, the centerpiece of which was the Fundamental Theorems of Welfare Economics, demonstrating the efficiency of markets.[66] Real Business Cycle Theory (and its predecessor, New Classical Economics) took one route: using the assumptions of standard micro-economics to construct an analysis of the aggregative behavior of the economy. In doing so, they left Hamlet out of the play: almost by assumption unemployment and other market failures didn’t exist. The timing of their work couldn’t have been worse: for it was just around the same time that economists developed alternative micro-theories, based on asymmetric information, game theory, and behavioral economics, which provided better explanations of a wide range of micro-behavior than did the traditional theory on which the “new macro-economics” was being constructed. At the same time, Sonnenschein (1972) and Mantel (1974) showed that the standard theory provided essentially no structure for macro-economics — essentially any demand or supply function could have been generated by a set of diverse rational consumers. It was the unrealistic assumption of the representative agent that gave theoretical structure to the macro-economic models that were being developed. (As we noted, New Keynesian DSGE models were but a simple variant of these Real Business Cycles, assuming nominal wage and price rigidities — with explanations, we have suggested, that were hardly persuasive.)
There are alternative models to both Real Business Cycles and the New Keynesian DSGE models that provide better insights into the functioning of the macroeconomy, and are more consistent with micro-behavior, with new developments of micro-economics, and with what has happened in this and other deep downturns. While these new models differ from the older ones in a multitude of ways, at the center of these models is a wide variety of financial market imperfections and a deep analysis of the process of credit creation. These models provide alternative (and I believe better) insights into what kinds of macroeconomic policies would restore the economy to prosperity and maintain macro-stability.
This lecture has attempted to sketch some elements of these alternative approaches. There is a rich research agenda ahead.

Tuesday, August 11, 2015

Macroeconomics: The Roads Not Yet Taken

My editor suggested that I might want to write about an article in New Scientist, After the crash, can biologists fix economics?, so I did:

Macroeconomics: The Roads Not Yet Taken: Anyone who is even vaguely familiar with economics knows that modern macroeconomic models did not fare well before and during the Great Recession. For example, when the recession hit, many of us reached into the policy response toolkit provided by modern macro models and came up mostly empty.
The problem was that modern models were built to explain the period of mild economic fluctuations known as the Great Moderation, and while the models provided very good policy advice in that setting they had little to offer in response to major economic downturns. That changed to some extent as the recession dragged on and modern models were quickly amended to incorporate important missing elements, but even then the policy advice was far from satisfactory and mostly echoed what we already knew from the “old-fashioned” Keynesian model. (The Keynesian model was built to answer the important policy questions that come with major economic downturns, so it is not surprising that amended modern models reached many of the same conclusions.)
How can we fix modern models? ...

The Macroeconomic Divide

Paul Krugman:

Trash Talk and the Macroeconomic Divide: ... In Lucas and Sargent, much is made of stagflation; the coexistence of inflation and high unemployment is their main, indeed pretty much only, piece of evidence that all of Keynesian economics is useless. That was wrong, but never mind; how did they respond in the face of strong evidence that their own approach didn’t work?
Such evidence wasn’t long in coming. In the early 1980s the Federal Reserve sharply tightened monetary policy; it did so openly, with much public discussion, and anyone who opened a newspaper should have been aware of what was happening. The clear implication of Lucas-type models was that such an announced, well-understood monetary change should have had no real effect, being reflected only in the price level.
In fact, however, there was a very severe recession — and a dramatic recovery once the Fed, again quite openly, shifted toward monetary expansion.
These events definitely showed that Lucas-type models were wrong, and also that anticipated monetary shocks have real effects. But there was no reconsideration on the part of the freshwater economists; my guess is that they were in part trapped by their earlier trash-talking. Instead, they plunged into real business cycle theory (which had no explanation for the obvious real effects of Fed policy) and shut themselves off from outside ideas. ...

Tuesday, August 04, 2015

'Sarcasm and Science'

On the road again, so just a couple of quick posts. This is Paul Krugman:

Sarcasm and Science: Paul Romer continues his discussion of the wrong turn of freshwater economics, responding in part to my own entry, and makes a surprising suggestion — that Lucas and his followers were driven into their adversarial style by Robert Solow’s sarcasm...
Now, it’s true that people can get remarkably bent out of shape at the suggestion that they’re being silly and foolish. ...
But Romer’s account of the great wrong turn still sounds much too contingent to me...
At least as I perceived it then — and remember, I was a grad student as much of this was going on — there were two other big factors.
First, there was a political component. Equilibrium business cycle theory denied that fiscal or monetary policy could play a useful role in managing the economy, and this was a very appealing conclusion on one side of the political spectrum. This surely was a big reason the freshwater school immediately declared total victory over Keynes well before its approach had been properly vetted, and why it could not back down when the vetting actually took place and the doctrine was found wanting.
Second — and this may be less apparent to non-economists — there was the toolkit factor. Lucas-type models introduced a new set of modeling and mathematical tools — tools that required a significant investment of time and effort to learn, but which, once learned, let you impress everyone with your technical proficiency. For those who had made that investment, there was a real incentive to insist that models using those tools, and only models using those tools, were the way to go in all future research. ...
And of course at this point all of these factors have been greatly reinforced by the law of diminishing disciples: Lucas’s intellectual grandchildren are utterly unable to consider the possibility that they might be on the wrong track.

Sunday, August 02, 2015

'Freshwater’s Wrong Turn'

Paul Krugman follows up on Paul Romer's latest attack on "mathiness":

Freshwater’s Wrong Turn (Wonkish): Paul Romer has been writing a series of posts on the problem he calls “mathiness”, in which economists write down fairly hard-to-understand mathematical models accompanied by verbal claims that don’t actually match what’s going on in the math. Most recently, he has been recounting the pushback he’s getting from freshwater macro types, who see him as allying himself with evil people like me — whereas he sees them as having turned away from science toward a legalistic, adversarial form of pleading.
You can guess where I stand on this. But in his latest, he notes some of the freshwater types appealing to their glorious past, claiming that Robert Lucas in particular has a record of intellectual transparency that should insulate him from criticism now. PR replies that Lucas once was like that, but no longer, and asks what happened.
Well, I’m pretty sure I know the answer. ...

It's hard to do an extract capturing all the points, so you'll likely want to read the full post, but in summary:

So what happened to freshwater, I’d argue, is that a movement that started by doing interesting work was corrupted by its early hubris; the braggadocio and trash-talking of the 1970s left its leaders unable to confront their intellectual problems, and sent them off on the path Paul now finds so troubling.

Recent tweets, email, etc. in response to posts I've done on mathiness reinforce just how unwilling many are to confront their tribalism. In the past, I've blamed the problems in macro on, in part, the sociology within the profession (leading to a less than scientific approach to problems as each side plays the advocacy game) and nothing that has happened lately has altered that view.

Saturday, August 01, 2015

'Microfoundations 2.0?'

Daniel Little:

Microfoundations 2.0?: The idea that hypotheses about social structures and forces require microfoundations has been around for at least 40 years. Maarten Janssen’s New Palgrave essay on microfoundations documents the history of the concept in economics; link. E. Roy Weintraub was among the first to emphasize the term within economics, with his 1979 Microfoundations: The Compatibility of Microeconomics and Macroeconomics. During the early 1980s the contributors to analytical Marxism used the idea to attempt to give greater grip to some of Marx's key explanations (falling rate of profit, industrial reserve army, tendency towards crisis). Several such strategies are represented in John Roemer's Analytical Marxism. My own The Scientific Marx (1986) and Varieties of Social Explanation (1991) took up the topic in detail and relied on it as a basic tenet of social research strategy. The concept is strongly compatible with Jon Elster's approach to social explanation in Nuts and Bolts for the Social Sciences (1989), though the term itself does not appear in this book or in the 2007 revised edition.

Here is Janssen's description in the New Palgrave of the idea of microfoundations in economics:

The quest to understand microfoundations is an effort to understand aggregate economic phenomena in terms of the behavior of individual economic entities and their interactions. These interactions can involve both market and non-market interactions.
In The Scientific Marx the idea was formulated along these lines:
Marxist social scientists have recently argued, however, that macro-explanations stand in need of microfoundations; detailed accounts of the pathways by which macro-level social patterns come about. (1986: 127)

The requirement of microfoundations is both metaphysical -- our statements about the social world need to admit of microfoundations -- and methodological -- it suggests a research strategy along the lines of Coleman's boat (link). This is a strategy of disaggregation, a "dissecting" strategy, and a non-threatening strategy of reduction. (I am thinking here of the very sensible ideas about the scientific status of reduction advanced in William Wimsatt's "Reductive Explanation: A Functional Account"; link).

The emphasis on the need for microfoundations is a very logical implication of the position of "ontological individualism" -- the idea that social entities and powers depend upon facts about individual actors in social interactions and nothing else. (My own version of this idea is the notion of methodological localism; link.) It is unsupportable to postulate disembodied social entities, powers, or properties for which we cannot imagine an individual-level substrate. So it is natural to infer that claims about social entities need to be accompanied in some fashion by an account of how they are embodied at the individual level; and this is a call for microfoundations. (As noted in an earlier post, Brian Epstein has mounted a very challenging argument against ontological individualism; link.)
Another reason that the microfoundations idea is appealing is that it is a very natural way of formulating a core scientific question about the social world: "How does it work?" To provide microfoundations for a high-level social process or structure (for example, the falling rate of profit), we are looking for a set of mechanisms at the level of a set of actors within a set of social arrangements that result in the observed social-level fact. A call for microfoundations is a call for mechanisms at a lower level, answering the question, "How does this process work?"

In fact, the demand for microfoundations appears to be analogous to the question, why is glass transparent? We want to know what it is about the substrate at the individual level that constitutes the macro-fact of glass transmitting light. Organization type A is prone to normal accidents. What is it about the circumstances and actions of individuals in A-organizations that increases the likelihood of normal accidents?

One reason why the microfoundations concept was specifically appealing in application to Marx's social theories in the 1970s was the fact that great advances were being made in the field of collective action theory. Then-current interpretations of Marx's theories were couched at a highly structural level; but it seemed clear that it was necessary to identify the processes through which class interest, class conflict, ideologies, or states emerged in concrete terms at the individual level. (This is one reason I found E. P. Thompson's The Making of the English Working Class (1966) so enlightening.) Advances in game theory (assurance games, prisoners' dilemmas), Mancur Olson's demonstration of the gap between group interest and individual interest in The Logic of Collective Action: Public Goods and the Theory of Groups (1965), Thomas Schelling's brilliant unpacking of puzzling collective behavior onto underlying individual behavior in Micromotives and Macrobehavior (1978), Russell Hardin's further exposition of collective action problems in Collective Action (1982), and Robert Axelrod's discovery of the underlying individual behaviors that produce cooperation in The Evolution of Cooperation (1984) provided social scientists with new tools for reconstructing complex collective phenomena based on simple assumptions about individual actors. These were very concrete analytical resources that promised to help further explanations of complex social behavior. They provided a degree of confidence that important sociological questions could be addressed using a microfoundations framework.

There are several important recent challenges to aspects of the microfoundations approach, however.

So what are the recent challenges? First, there is the idea that social properties are sometimes emergent in a strong sense: not derivable from facts about the components. This would seem to imply that microfoundations are not possible for such properties.

Second, there is the idea that some meso entities have stable causal properties that do not require explicit microfoundations in order to be scientifically useful. (An example would be Perrow's claim that certain forms of organizations are more conducive to normal accidents than others.) If we take this idea very seriously, then perhaps microfoundations are not crucial in such theories.

Third, there is the idea that meso entities may sometimes exert downward causation: they may influence events in the substrate which in turn influence other meso states, implying that there will be some meso-level outcomes for which there cannot be microfoundations exclusively located at the substrate level.

All of this implies that we need to take a fresh look at the theory of microfoundations. Is there a role for this concept in a research metaphysics in which only a very weak version of ontological individualism is postulated; where we give some degree of autonomy to meso-level causes; where we countenance either a weak or strong claim of emergence; and where we admit of full downward causation from some meso-level structures to patterns of individual behavior?

In one sense my own thinking about microfoundations has already incorporated some of these concerns; I've arrived at "microfoundations 1.1" in my own formulations. In particular, I have put aside the idea that explanations must incorporate microfoundations and instead embraced the weaker requirement of availability of microfoundations (link). Essentially I relaxed the requirement to stipulate only that we must be confident that microfoundations exist, without actually producing them. And I've relied on the idea of "relative explanatory autonomy" to excuse the sociologist from the need to reproduce the microfoundations underlying the claim he or she advances (link).

But is this enough? There are weaker positions that could serve to replace the MF thesis. For now, the question is this: does the concept of microfoundations continue to do important work in the meta-theory of the social sciences?

I've talked about this many times, but it's worth making the point about aggregating from individual agents to macroeconomic aggregates once again (it deals, for one, with the emergent properties objection above -- it's the reason representative agent models are used: they seem to avoid the aggregation issue). This is from Kevin Hoover:

... Exact aggregation requires that utility functions be identical and homothetic … Translated into behavioral terms, it requires that every agent subject to aggregation have the same preferences (you must share the same taste for chocolate with Warren Buffett) and those preferences must be the same except for a scale factor (Warren Buffett with an income of $10 billion per year must consume one million times as much chocolate as Warren Buffett with an income of $10,000 per year). This is not the world that we live in. The Sonnenschein-Mantel-Debreu theorem shows theoretically that, in an idealized general-equilibrium model in which each individual agent has a regularly specified preference function, aggregate excess demand functions inherit only a few of the regularity properties of the underlying individual excess demand functions: continuity, homogeneity of degree zero (i.e., the independence of demand from simple rescalings of all prices), Walras’s law (i.e., the sum of the value of all excess demands is zero), and that demand rises as price falls (i.e., that demand curves, ceteris paribus income effects, are downward sloping) … These regularity conditions are very weak and put so few restrictions on aggregate relationships that the theorem is sometimes called “the anything goes theorem.”
The importance of the theorem for the representative-agent model is that it cuts off any facile analogy between even empirically well-established individual preferences and preferences that might be assigned to a representative agent to rationalize observed aggregate demand. The theorem establishes that, even in the most favorable case, there is a conceptual chasm between the microeconomic analysis and the macroeconomic analysis. The reasoning of the representative-agent modelers would be analogous to a physicist attempting to model the macro-behavior of a gas by treating it as a single, room-size molecule. The theorem demonstrates that there is no warrant for the notion that the behavior of the aggregate is just the behavior of the individual writ large: the interactions among the individual agents, even in the most idealized model, shape in an exceedingly complex way the behavior of the aggregate economy. Not only does the representative-agent model fail to provide an analysis of those interactions, but it seems likely that they will defy an analysis that insists on starting with the individual, and it is certain that no one knows at this point how to begin to provide an empirically relevant analysis on that basis.
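
A tiny numeric example makes Hoover's point concrete. With two Cobb-Douglas consumers who differ only in their spending shares, aggregate demand depends on how a fixed total income is distributed -- something no single representative consumer can reproduce. All numbers below are invented for the illustration:

```python
# Two Cobb-Douglas consumers with different tastes: consumer i spends
# share alpha_i of income m_i on good 1, so demand is alpha_i * m_i / p1.

def demand_good1(alpha, income, p1):
    return alpha * income / p1

p1 = 1.0
alphas = [0.2, 0.8]                      # heterogeneous spending shares

for incomes in ([50, 50], [90, 10]):     # same total income, split differently
    total = sum(demand_good1(a, m, p1) for a, m in zip(alphas, incomes))
    print(incomes, "-> aggregate demand for good 1:", total)

# Output: [50, 50] -> 50.0 and [90, 10] -> 26.0. A representative agent
# facing total income 100 would, by construction, give one answer for both.
```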

Sunday, July 26, 2015

'The F Story about the Great Inflation'

Simon Wren-Lewis:

The F story about the Great Inflation: Here F could stand for folk. The story that is often told by economists to their students goes as follows. After Phillips discovered his curve, which relates inflation to unemployment, Samuelson and Solow in 1960 suggested this implied a trade-off that policymakers could use. They could permanently have a bit less unemployment at the cost of a bit more inflation. Policymakers took up that option, but then could not understand why inflation didn’t just go up a bit, but kept on going up and up. Along came Milton Friedman to the rescue, who in a 1968 presidential address argued that inflation also depended on inflation expectations, which meant the long run Phillips curve was vertical and there was no permanent inflation unemployment trade-off. Policymakers then saw the light, and the steady rise in inflation seen in the 1960s and 1970s came to an end.
This is a neat little story, particularly if you like the idea that all great macroeconomic disasters stem from errors in mainstream macroeconomics. However, even a half-awake student should spot one small difficulty with this tale. Why did it take over 10 years for Friedman’s wisdom to be adopted by policymakers, while Samuelson and Solow’s alleged mistake seems to have been adopted quickly? Even if you think that the inflation problem only really started in the 1970s, that imparts a 10-year lag into the knowledge transmission mechanism, which is a little strange.
However none of that matters, because this folk story is simply untrue. There has been some discussion of this in blogs (by Robert Waldmann in particular - see Mark Thoma here), and the best source on this is another F: James Forder. There are papers (e.g. here), but the most comprehensive source is now his book, which presents an exhaustive study of this folk story. It is, he argues, untrue in every respect. Not only did Samuelson and Solow not argue that there was a permanent inflation unemployment trade-off that policymakers could exploit, policymakers never believed there was such a trade-off. So how did this folk story arise? Quite simply from another F: Friedman himself, in his Nobel Prize lecture in 1977.
Forder discusses much else in his book, including the extent to which Friedman’s 1968 emphasis on the importance of expectations was particularly original (it wasn’t). He also describes how and why he thinks Friedman’s story became so embedded that it became folklore....

Friday, July 24, 2015

Paul Krugman: The M.I.T. Gang

The MIT school of economics:

The M.I.T. Gang, by Paul Krugman, Commentary, NY Times: Goodbye, Chicago boys. Hello, M.I.T. gang.

If you don’t know what I’m talking about, the term “Chicago boys” was originally used to refer to Latin American economists, trained at the University of Chicago, who took radical free-market ideology back to their home countries. The influence of these economists was part of a broader phenomenon: The 1970s and 1980s were an era of ascendancy for laissez-faire economic ideas and the Chicago school...

But that was a long time ago. Now a different school is in the ascendant, and deservedly so.

It’s actually surprising how little media attention has been given to the dominance of M.I.T.-trained economists in policy positions and policy discourse. But it’s quite remarkable. Ben Bernanke has an M.I.T. Ph.D.; so do Mario Draghi, the president of the European Central Bank, and Olivier Blanchard, the enormously influential chief economist of the International Monetary Fund. Mr. Blanchard is retiring, but his replacement, Maurice Obstfeld, is another M.I.T. guy — and another student of Stanley Fischer, who taught at M.I.T. for many years and is now the Fed’s vice chairman. ...

M.I.T.-trained economists, especially Ph.D.s from the 1970s, play an outsized role ... in policy discussion across the Western world. And yes, I’m part of the same gang.

So what distinguishes M.I.T. economics, and why does it matter? ...

At M.I.T..., Keynes never went away. To be sure, stagflation showed that there were limits to what policy can do. But students continued to learn about the imperfections of markets and the role that monetary and fiscal policy can play in boosting a depressed economy. ...

This open-minded, pragmatic approach was overwhelmingly vindicated after crisis struck in 2008. Chicago-school types warned incessantly that responding to the crisis by printing money and running deficits would lead to 70s-type stagflation, with soaring inflation and interest rates. But M.I.T. types predicted, correctly, that inflation and interest rates would stay low in a depressed economy, and that attempts to slash deficits too soon would deepen the slump. ...

Meanwhile, in the United States, Republicans have responded to the utter failure of free-market orthodoxy and the remarkably successful predictions of much-hated Keynesians by digging in even deeper, determined to learn nothing from experience.

In other words, being right isn’t necessarily enough to change the world. But it’s still better to be right than to be wrong, and M.I.T.-style economics, with its pragmatic openness to evidence, has been very right indeed.

Sunday, July 19, 2015

The Rivals (Samuelson and Friedman)

This is by David Warsh:

The Rivals, Economic Principals: When Keynes died, in April 1946, The Times of London gave him the best farewell since Nelson after Trafalgar: “To find an economist of comparable influence one would have to go back to Adam Smith.” A few years later, Alvin Hansen, of Harvard University, Keynes’ leading disciple in the United States, wrote, “It may be a little too early to claim that, along with Darwin’s Origin of Species and Marx’s Capital, The General Theory is one of the most significant books which have appeared in the last hundred years. … But… it continues to gain in importance.”
In fact, the influence of Keynes’ book, as opposed to the vision of “macroeconomics” at the heart of it, and the penumbra of fame surrounding it, already had begun its downward arc. Civilians continued to read the book, more for its often sparkling prose than for the clarity of its argument. Among economists, intermediaries and translators had emerged in various communities to explain the insights the great man had sought to convey. Speaking of the group in Cambridge, Massachusetts, Robert Solow put it this way, many years later: “We learned not as much from it – it was…almost unreadable – as from a number of explanatory articles that appeared on all our graduate school reading lists.”
Instead it was another book that ushered in an era of economics very different from the age before. Foundations of Economic Analysis, by Paul A. Samuelson, important parts of it written as much as ten years before, appeared in 1947. “Mathematics is a Language,” proclaimed its frontispiece; equations dominated nearly every page. “It might be still too early to tell how the discoveries of the 1930s would pan out,” Samuelson wrote delicately in the introduction, but their value could be ascertained only by expressing them in mathematical models whose properties could be thoroughly explored and tested. “The laborious literary working-over of essentially simple mathematical concepts such as is characteristic of much of modern economic theory is not only unrewarding from the standpoint of advancing the science, but involves as well mental gymnastics of a particularly depraved type.”
Foundations had won a prize as a dissertation, so Harvard University was required to publish it as a book. In Samuelson’s telling, the department chairman had to be forced to agree to printing a thousand copies, dragged his feet, and then permitted its laboriously hand-set plates to be melted down for other uses after 887 copies were run off. Thus Foundations couldn’t be revised in subsequent printings, until a humbled Harvard University Press republished an “enlarged edition” with a new introduction and a mathematical appendix in 1983. When Samuelson biographer Roger Backhouse went through the various archival records, he concluded that the delay could be explained by production difficulties and recycling of the lead type amid postwar exigencies at the Press.
It didn’t matter. Within the profession, Samuelson soon would win the day.
The “new” economics that he represented – the earliest developments had commenced in the years after World War I – conquered the profession, high and low. The next year Samuelson published an introductory textbook, Economics, to inculcate the young. Macroeconomic theory was to be put to work to damp the business cycle and, especially, avoid the tragedy of another Great Depression. The new approach swiftly attracted a community away from alternative modes of inquiry, in the expectation that it would yield new solutions to the pressing problem of depression-prevention. Alfred Marshall’s Principles of Economics eventually would be swept completely off the table. Foundations was a paradigm in the Kuhnian sense.
At the very zenith of Samuelson’s success, another sort of book appeared, in 1962, A Monetary History of the United States, 1869-1960, by Milton Friedman and Anna Schwartz, published by the National Bureau of Economic Research. At first glance, the two books had nothing to do with one another. A Monetary History harkened back to approaches that had been displaced by Samuelsonian methods – “hypotheses” instead of theorems; charts instead of models; narrative, not econometric analytics. The volume did little to change the language that Samuelson had established. Indeed, economists at the University of Chicago, Friedman’s stronghold, were on the verge of adapting a new, still-higher mathematical style to the general equilibrium approach that Samuelson had pioneered.
Yet one interpretation of the relationship between the price system and the Daedalean wings that A Monetary History contained was sufficiently striking as to reopen a question thought to have been settled. A chapter of their book, “The Great Contraction,” contained an interpretation of the origins of the Great Depression that gradually came to overshadow the rest. As J. Daniel Hammond has written,
The “Great Contraction” marked a watershed in thinking about the greatest economic calamity in modern times. Until Friedman and Schwartz provoked the interest of economists by rehabilitating monetary history and theory, neither economic theorists nor economic historians devoted as much attention to the Depression as historians.
So you could say that some part of the basic agenda of the next fifty years was ordained by the rivalry that began in the hour that Samuelson and Friedman became aware of each other, perhaps in the autumn of 1932, when both turned up at the recently completed Social Science Research Building of the University of Chicago, at the bottom of the Great Depression. Excellent historians, with access to extensive archives, have been working on both men’s lives and work: Hammond, of Wake Forest University, has largely completed his project on Friedman; Backhouse, of the University of Birmingham, is finishing a volume on Samuelson’s early years. Neither author has yet come across a frank recollection by either man of those first few meetings. Let’s hope one or more second-hand accounts turn up in the papers of the many men and women who knew them then. When I asked Friedman about their relationship in 2005, he deferred to his wife, who, somewhat uncomfortably, mentioned a differential in privilege. I lacked the temerity to ask Samuelson directly the last couple of times we talked; he clearly didn’t enjoy discussing it.
Biography is no substitute for history, much less for theory and history of thought, and journalism is, at best, only a provisional substitute for biography. But one way of understanding what happened in economics in the twentieth century is to view it as an argument between Samuelson and Friedman that lasted nearly eighty years, until one aspect of it, at least, was resolved by the financial crisis of 2008. The departments of economics they founded in Cambridge and Chicago, headquarters in the long wars between the Keynesians and the monetarists, came to be the Athens and Sparta of their day. ...[continue reading]...

[There is much, much more in the full post.]

Saturday, July 04, 2015

'Stability of a Market Economy'

"The macroeconomy is inherently unstable and ... booms and busts arise endogenously as the results of market incentives":

Stability of a market economy, by Paul Beaudry, Dana Galizia, and Franck Portier, Vox EU: There are two polar views about the functioning of a market economy.
  • On the one hand, there is the view that such a system is inherently stable, with market forces tending to direct the economy to a smooth growth path.

According to such a belief, most of the fluctuations in the macroeconomy result from either individually optimal adjustments to changes in the environment or from improper government interventions. In such a case, the role of macroeconomic policy should be to do no harm; if policymakers hold back from actively influencing the economy, market forces would take care of the rest and foster desirable outcomes.

  • On the other hand, there is the view that the market economy is inherently unstable, and that left to itself it will repeatedly go through periods of socially costly booms and busts, with recurrent periods of sustained high levels of unemployment.

According to this view, macroeconomic policy is needed to help stabilize an unruly system.    

Most modern macroeconomic models, such as those used by large central banks and governments, are somewhere in between these two extremes. However, they are by design much closer to the first view than the second, and this is generally not fully appreciated. In fact, most commonly used macroeconomic models have the feature that, in the absence of outside disturbances, the economy is expected to converge to a stable path. In this sense, these models are based on the premise that a decentralized economy is a stable system and that market forces, in and of themselves, do not tend to produce booms and busts. The only reason why we see economic cycles in mainstream macroeconomic models is that outside forces perturb an otherwise stable system. We can call such a framework the stable-with-shocks view of the macroeconomy.
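
A sketch of what the stable-with-shocks view means in practice (my own toy example, not one of the models the authors have in mind): a stable linear AR(2) economy returns to trend on its own and fluctuates only because shocks keep arriving.

```python
import numpy as np
import matplotlib.pyplot as plt

# Stable AR(2) dynamics: both roots of the characteristic polynomial lie
# inside the unit circle (here 0.8 and 0.5), so without shocks the series
# converges back to trend. Coefficients are illustrative, not estimated.
rng = np.random.default_rng(1)
T = 200
x = np.zeros(T)
shocks = rng.normal(0.0, 1.0, T)

for t in range(2, T):
    x[t] = 1.3 * x[t-1] - 0.4 * x[t-2] + shocks[t]

plt.plot(x)
plt.title("A stable system that cycles only because it is shocked")
plt.xlabel("t"); plt.ylabel("deviation from trend")
plt.show()
```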

Stable-with-shocks view of the macroeconomy

There are many reasons why the economic profession has mostly adopted the stable-with-shocks view of macroeconomic fluctuations.

  • First, if we take a step back, and look at aggregate economic outcomes over long periods of time (say 100 years), the most striking feature is the stable growth path (see Figure 1).

Disregarding the two world wars, the economy certainly fluctuated, but these fluctuations were small in comparison to the growth path. In particular, when looking over such long periods, it becomes clear that the economy looks much more like a globally stable system than an unstable one.

  • Secondly, a huge fraction of economic theory suggests that market forces will favor stable outcomes. 
  • Thirdly, the stable-with-shocks framework is very tractable and flexible, allowing one to analyze economic outcomes using linear techniques.
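
[A minimal sketch, not from the authors: the stable-with-shocks logic in code. A linear first-order system with persistence below one always decays back to its steady state, so fluctuations persist only because outside shocks keep arriving.]

    # Stable-with-shocks in one equation: an AR(1) output gap.
    import numpy as np

    rng = np.random.default_rng(0)
    T, rho = 200, 0.9                 # persistence rho < 1 => globally stable
    gap = np.zeros(T)
    for t in range(1, T):
        gap[t] = rho * gap[t - 1] + rng.normal(0, 0.01)  # outside disturbance
    # With the shock term removed, gap[t] = rho**t * gap[0] -> 0:
    # the system has no cycles of its own.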

Figure 1. Long-run evolution of GDP per capita [figure omitted]
Source: Bolt and van Zanden (2014).

Figure 2. Unemployment rates [figure omitted]
Source: FRED, Federal Reserve Bank of St. Louis.

Notwithstanding these attractive features of the stable-with-shocks view of the macroeconomy, the ubiquitous and recurrent nature of cycles in most market economies, as illustrated by the fluctuations of unemployment rates (see Figure 2), strongly suggests that a market economy, by its very nature, may create recurrent booms and busts independently of outside disturbances. This idea is well captured by the statement that “a bust sows the seed of the next boom”. Although such an idea has a long tradition in the economics literature (Kalecki 1937, Kaldor 1940, Hicks 1950, Goodwin 1951), it is not present in most modern macro-models.

Capturing economic fluctuations: New framework

In a companion paper (Beaudry et al. 2015), we have developed and explored an empirical framework that allows one to examine whether economic fluctuations may best be captured by the stable-with-shocks type framework or whether they may be better characterized as reflecting some sort of instability. To examine such an issue, one needs to depart from the preponderant convention in macroeconomics of focusing on linear models to analyze outcomes. A frequent criticism of macro-modelling, mostly from non-mainstream macroeconomists, is that the profession’s focus on linear models may have substantially biased our understanding of how the economy actually functions. As Blanchard (2014) writes, “We in the field did think of the economy as roughly linear, constantly subject to different shocks, constantly fluctuating, but naturally returning to its steady state over time.”

Within a linear set-up, a dynamic system is either stable or unstable. In contrast, in a non-linear set-up, a system can be globally stable while simultaneously being locally unstable. It is this latter characteristic that has the potential to be relevant in macroeconomics, given that in the long run the economy appears rather stable, while in the short run it exhibits substantial volatility. By looking at the economy through a lens that allows for the possibility of non-linear dynamics, one is de facto permitting an interpretation of economic fluctuations in which endogenous cyclical behavior or even chaos may emerge, both features well known to arise in many dynamic environments. In other words, by looking at the economy using non-linear techniques, we can ask whether market forces tend to favor recurrent booms and busts, or whether they favor stability.
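
[To make “globally stable but locally unstable” concrete, here is a minimal simulation, ours rather than the authors’ model, of a textbook non-linear system with exactly that property, the van der Pol oscillator: paths that start near the steady state spiral away from it, yet every path settles onto the same endogenous cycle, with no shocks anywhere in the system.]

    # Van der Pol oscillator: the steady state (0, 0) repels nearby paths,
    # but every path converges to one attracting limit cycle.
    mu, dt, steps = 1.0, 0.01, 5000
    x, v = 0.01, 0.0                   # start arbitrarily close to the steady state
    path = []
    for _ in range(steps):
        a = mu * (1 - x ** 2) * v - x  # damping flips sign as |x| passes 1
        x, v = x + dt * v, v + dt * a  # forward-Euler step
        path.append(x)
    # path first grows away from zero, then repeats a permanent boom-bust cycle.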

Our main finding is that, instead of favoring the conventional stable-with-shocks view for aggregate dynamics, our results suggest that the macroeconomy is inherently unstable and that booms and busts arise endogenously as the results of market incentives.

In fact, we found that for the US economy, market forces tend in and of themselves to generate a cycle that lasts about eight years. However, these cycles are not regular or identical over time. Instead, outside forces play an important role in accelerating, amplifying, and postponing the forces that create cycles.

What causes business cycles?

So what causes the economy to be unstable and exhibit business cycles? According to our analysis, this results from simple incentives that favour the coordination of behavior across households. In particular, in a market economy where individuals face unemployment risk, households have an incentive to buy housing and durable goods at similar times. The reason for these coordinated purchases is that when others are making large purchases, this reduces unemployment; and when unemployment is low, it is a less risky time to make large purchases, since taking on debt is easier. However, let us emphasize that we are not finding that business cycles are driven primarily by animal spirits.

  • Instead, we are arguing that business cycles are driven by individually rational, but socially costly, mass behavior based on fundamentals.
  • In our view, the recovery phase of a cycle starts when the stock of housing and durables has been depleted enough to lead some people to go out and make new purchases even if unemployment is still high.

This incites others to do the same, which eventually sustains the recovery and leads to a boom. Interestingly, the boom does not stop when people have the ‘right’ stock of goods; households instead overshoot, because the boom period is a good time to buy even knowing that a recession will eventually come. Once households have sufficiently over-accumulated, they will stop purchasing en masse, knowing that others are also stopping and knowing that they can wait out a recession while benefiting for some time from the services of the housing and durables bought during the expansion. The expansion therefore ends and a recession begins. Once the stock of goods is again sufficiently depleted, the cycle will restart. Stated this way, business cycles appear very deterministic. However, there are always other developments in the economy that interact with this consumer cycle to create unique features. For example, the consumer cycle generally competes with forces affecting business investment, thereby causing the length and duration of a cycle to be affected by technological developments driving firm investment.
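
[A stylized toy, ours rather than the Beaudry-Galizia-Portier model, of the purchase cycle just described: households buy in bursts once the durable stock is sufficiently depleted and stop once they have over-accumulated, which by itself produces a deterministic boom-bust cycle.]

    # Coordinated durable purchases: depletion starts a boom, over-accumulation ends it.
    T, delta = 200, 0.05               # periods; depreciation rate of durables
    s, buying = 1.0, False             # durable stock; are households purchasing?
    s_low, s_high = 0.7, 1.3           # depletion and over-accumulation thresholds
    path = []
    for t in range(T):
        if not buying and s < s_low:   # stock depleted: recovery begins
            buying = True
        elif buying and s > s_high:    # over-shoot reached: purchases stop
            buying = False
        purchases = 0.15 if buying else 0.0
        s = (1 - delta) * s + purchases
        path.append(s)
    # path traces a repeating sawtooth: boom, bust, depletion, new boom.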

Concluding remarks

But why should we care whether the macroeconomy is locally unstable or locally stable? Society’s understanding of how the economy functions, especially of what creates business cycles, greatly affects how we design stabilization policy.

In the current dominant paradigm, there is a tendency to see monetary policy as the central tool for mitigating the business cycle. This view makes sense if excessive macroeconomic fluctuations reflect mainly the slow adjustment of wages and prices to outside disturbances within an otherwise stable system. 

However, if the system is inherently unstable and exhibits forces that favor recurrent booms and busts at roughly seven- to ten-year intervals, then it is much less likely that monetary policy is the right tool for addressing macroeconomic fluctuations. Instead, in such a case we are likely to need policies aimed at changing the incentives that lead households to bunch their purchasing behavior in the first place.

References

Beaudry, P, D Galizia, and F Portier (2015), “Reviving the Limit Cycle View of Macroeconomic Fluctuations”, CEPR Discussion Paper 10645 and NBER Working Paper 21241.

Blanchard, O J (2014), “Where Danger Lurks”, Finance & Development, 51(3), 28-31.

Bolt, J and J L van Zanden (2014), “The Maddison Project: collaborative research on historical national accounts”, The Economic History Review, 67 (3): 627–651.

Goodwin, R (1951), “The Nonlinear Accelerator and the Persistence of Business Cycles”, Econometrica, 19(1), 1–17.

Hicks, J (1950), A Contribution to the Theory of the Trade Cycle, Clarendon Press, Oxford.

Kaldor, N (1940), “A Model of the Trade Cycle”, The Economic Journal, 50(197), 78–92.

Kalecki, M (1937), “A Theory of the Business Cycle”, The Review of Economic Studies, 4(2), 77–97.

Sunday, June 14, 2015

'What Assumptions Matter for Growth Theory?'

Dietz Vollrath explains the "mathiness" debate (and also Euler's theorem in a part of the post I left out). Glad he's interpreting Romer -- it's very helpful:

What Assumptions Matter for Growth Theory?: The whole “mathiness” debate that Paul Romer started tumbled onwards this week... I was able to get a little clarity in this whole “price-taking” versus “market power” part of the debate. I’ll circle back to the actual “mathiness” issue at the end of the post.
There are really two questions we are dealing with here. First, do inputs to production earn their marginal product? Second, do the owners of non-rival ideas have market power or not? We can answer the first without having to answer the second.
Just to refresh, a production function tells us that output is determined by some combination of non-rival inputs and rival inputs. Non-rival inputs are things like ideas that can be used by many firms or people at once without limiting the use by others. Think of blueprints. Rival inputs are things that can only be used by one person or firm at a time. Think of nails. The income earned by both rival and non-rival inputs has to add up to total output.
Okay, given all that setup, here are three statements that could be true.
  1. Output is constant returns to scale in rival inputs
  2. Non-rival inputs receive some portion of output
  3. Rival inputs receive output equal to their marginal product
Pick two.
Romer’s argument is that (1) and (2) are true. (1) he asserts through replication arguments, like my example of replicating Earth. (2) he takes as an empirical fact. Therefore, (3) cannot be true. If the owners of non-rival inputs are compensated in any way, then it is necessarily true that rival inputs earn less than their marginal product. Notice that I don’t need to say anything about how the non-rival inputs are compensated here. But if they earn anything, then from Romer’s assumptions the rival inputs cannot be earning their marginal product.
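[Vollrath’s Euler’s theorem discussion was omitted above; the arithmetic it supplies behind “pick two” is standard, and is sketched below in generic notation of our choosing.]
    % Output Y = F(R_1, ..., R_k, N): rival inputs R_i, non-rival input N.
    % Statement (1): F is homogeneous of degree one in the rival inputs,
    %   F(t R_1, ..., t R_k, N) = t F(R_1, ..., R_k, N) for all t > 0.
    % Differentiating with respect to t and setting t = 1 (Euler's theorem):
    \sum_{i=1}^{k} R_i \frac{\partial F}{\partial R_i} = F(R_1, \dots, R_k, N) = Y
    % Statement (3) pays each rival input its marginal product, so these payments
    % alone exhaust all of Y, leaving nothing for non-rival inputs: (2) fails.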
Different authors have made different choices than Romer. McGrattan and Prescott abandoned (1) in favor of (2) and (3). Boldrin and Levine dropped (2) and accepted (1) and (3). Romer’s issue with these papers is that (1) and (2) are clearly true, so writing down a model that abandons one of these assumptions gives you a model that makes no sense in describing growth. ...
The “mathiness” comes from authors trying to elide the fact that they are abandoning (1) or (2). ...

[There's a lot more in the full post. Also, Romer comments on Vollrath here.]

Tuesday, June 09, 2015

'What is it about German Economics?'

Can you help Simon Wren-Lewis figure this out?:

What is it about German economics?: ...Keynesian ideas are pretty mainstream elsewhere...: why does macroeconomics in Germany seem to be an outlier? Given the damage done by austerity in the Eurozone, and the central role that the views of German policy makers have played in that, this is a question I have asked for many years. The textbooks used to teach macroeconomics in Germany seem to be as Keynesian as elsewhere, yet Peter Bofinger is the only Keynesian on their Council of Economic Experts, and he confirmed to me how much this minority status is typical. [1]
There are two explanations that are popular outside Germany that I now think on their own are inadequate. The first is that Germany is preoccupied by inflation as a result of the hyperinflation of the Weimar republic, and that this spills over into their attitude to government debt. (The recession of the 1930s helped create a more serious disaster, and here is a provocative account of why the memory of hyperinflation dominates.) A second idea is that Germans are culturally debt averse, and people normally note that the German for debt is also their word for guilt. The trouble with both stories is that they imply that German government debt should be much lower than in other countries, but it is not. (In 2000, the German government’s net financial liabilities as a percentage of GDP were at the same level as France, and slightly above the UK and US.) ...
It is as if in some respects economic thinking in Germany has not moved on since the 1970s: Keynesian ideas are still viewed as anti-market rather than correcting market failure...
One of the distinctive characteristics of the German economy appears to be very far from neoliberalism, and that is co-determination: the importance of workers organisations in management, and more generally the recognition that unions play an important role in the economy. Yet I wonder whether this may have had an unintended consequence: the polarisation and politicisation of economic policy advice. ... If conflict over wages is institutionalised at the national level, perhaps the influence of ideology on economic policy - in so far as it influences that conflict (see footnote [1]) - is bound to be greater. 
As you can see, I remain some way from answering the question posed in the title of this post, but I think I’m a bit further forward than I was.

Saturday, June 06, 2015

'A Crisis at the Edge of Physics'

Seems like much the same can be said about modern macroeconomics (except perhaps the "given the field its credibility" part):

A Crisis at the Edge of Physics, by Adam Frank and Marcelo Gleiser, NY Times: Do physicists need empirical evidence to confirm their theories?
You may think that the answer is an obvious yes, experimental confirmation being the very heart of science. But a growing controversy at the frontiers of physics and cosmology suggests that the situation is not so simple.
A few months ago in the journal Nature, two leading researchers, George Ellis and Joseph Silk, published a controversial piece called “Scientific Method: Defend the Integrity of Physics.” They criticized a newfound willingness among some scientists to explicitly set aside the need for experimental confirmation of today’s most ambitious cosmic theories — so long as those theories are “sufficiently elegant and explanatory.” Despite working at the cutting edge of knowledge, such scientists are, for Professors Ellis and Silk, “breaking with centuries of philosophical tradition of defining scientific knowledge as empirical.”
Whether or not you agree with them, the professors have identified a mounting concern in fundamental physics: Today, our most ambitious science can seem at odds with the empirical methodology that has historically given the field its credibility. ...

'Views Differ on Shape of Macroeconomics'

Paul Krugman:

Views Differ on Shape of Macroeconomics: The doctrine of expansionary austerity ... was immensely popular among policymakers in 2010, as the great turn toward austerity began. But the statistical underpinnings of the doctrine fell apart under scrutiny... So at this point research economists overwhelmingly believe that austerity is contractionary (and that stimulus is expansionary). ...

Nonetheless, Simon Wren-Lewis points us to Robert Peston of the BBC declaring

I am simply pointing out that there is a debate here (though Krugman, Wren-Lewis and Portes are utterly persuaded they’ve won this match – and take the somewhat patronising view that voters who think differently are ignorant sheep led astray by a malign or blinkered media).

Wow. Yes, I suppose that “there is a debate” — there are debates about lots of things, from climate change to evolution to alien spaceships hidden in Area 51. But to suggest that this debate is at all symmetric is just wrong — and deeply misleading to one’s audience.

As for the claim that it’s somehow patronizing to suggest that voters are ill-informed when (a) macroeconomics is a technical subject, and (b) the media have indeed misreported the state of the professional debate — well, this is sort of an economic version of the line that one must not suggest that the Iraq war was launched on false pretenses, because this would be disrespectful to the troops. If you’re being accused of misleading reporting, it’s hardly a defense to say that the public believed your misinformation — more like a self-indictment. ...

Wednesday, June 03, 2015

'Coordination Equilibrium and Price Stickiness'

This is the introduction to a relatively new working paper by Cigdem Gizem Korpeoglu and Stephen Spear (sent in response to my comment that I've been disappointed with the development of new alternatives to the standard NK-DSGE models):

Coordination Equilibrium and Price Stickiness, by Cigdem Gizem Korpeoglu (University College London) and Stephen E. Spear (Carnegie Mellon): 1 Introduction Contemporary macroeconomic theory rests on the three pillars of imperfect competition, nominal price rigidity, and strategic complementarity. Of these three, nominal price rigidity (aka price stickiness) has been the most important. The stickiness of prices is a well-established empirical fact, with early observations about the phenomenon going back to Alfred Marshall. Because the friction of price stickiness cannot occur in markets with perfect competition, modern micro-founded models (New Keynesian or NK models, for short) have been forced to abandon the standard Arrow-Debreu paradigm of perfect competition in favor of models where agents have market power and set market prices for their own goods. Strategic complementarity enters the picture as a mechanism for explaining the kinds of coordination failures that lead to sustained slumps like the Great Depression or the aftermath of the 2008 financial crisis. Early work by Cooper and John laid out the importance of these three features for macroeconomics, and follow-on work by Ball and Romer showed that failure to coordinate on price adjustments could itself generate strategic complementarity, effectively unifying two of the three pillars.
Not surprisingly, the Ball and Romer work was based on earlier work by a number of authors (see Mankiw and Romer's New Keynesian Economics) which used the Dixit-Stiglitz model of monopolistic competition as the basis for price-setting behavior in a general equilibrium setting, combined with the idea of menu costs -- literally the cost of posting and communicating price changes -- and exogenously-specified adjustment time staggering to provide the friction(s) leading to nominal rigidity. While these models perform well in explaining aspects of the business cycle, they have only recently been subjected to what one would characterize as thorough empirical testing, because of the scarcity of good data on how prices actually change. This has changed in the past decade as new sources of data on price dynamics have become available, and as computational power capable of teasing out what might be called the "fine structure" of these dynamics has emerged. On a different dimension, the overall suitability of monopolistic competition as the appropriate form of market imperfection to use as the foundation of the new macro models has been largely unquestioned, though we believe this is largely due to the tractability of the Dixit-Stiglitz model relative to other models of imperfect competition generated by large fixed costs or increasing returns to scale not due to specialization.
In this paper, we examine both of these underlying assumptions in light of what the new empirics on pricing dynamics has found, and propose a different, and we believe, better microfoundation for New Keynesian macroeconomics based on the Shapley-Shubik market game.
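
[The menu-cost friction described above is easy to see in a toy pricing rule; this is our sketch, not the paper’s Shapley-Shubik model. A firm resets its price only when the loss from a stale price exceeds the fixed cost of changing it, so posted prices move in infrequent, discrete jumps.]

    # Menu-cost pricing: adjust only when the loss from a stale price beats the fee.
    import numpy as np

    rng = np.random.default_rng(0)
    T = 100
    p_star = np.cumsum(rng.normal(0.0, 0.02, T))  # log optimal price drifts with costs
    menu_cost, curvature = 0.01, 2.0              # fixed cost; loss ~ curvature * gap^2
    p, posted = 0.0, []
    for t in range(T):
        gap = p_star[t] - p
        if curvature * gap ** 2 > menu_cost:      # worth paying the menu cost?
            p = p_star[t]                         # reset to the current optimum
        posted.append(p)
    # posted is a step function: long flat stretches punctuated by discrete jumps.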

Krugman vs. DeLong

Krugman vs. DeLong has an outcome that follows DeLong's rule:

Krugman: The Inflationista Puzzle: Martin Feldstein has a new column on what he calls the “inflation puzzle” — the failure of inflation to soar despite the Fed’s large asset purchases, which led to a very large rise in the monetary base. As Tony Yates points out, however, there’s nothing puzzling at all about what happened; it’s exactly what you should expect when interest rates are near zero.
And this isn’t an ex-post rationale, it’s what many of us were saying from the beginning. Traditional IS-LM analysis said that the Fed’s policies would have little effect on inflation; so did the translation of that analysis into a stripped-down New Keynesian framework that I did back in 1998, starting the modern liquidity-trap literature. ...
DeLong: New Economic Thinking, Hicks-Hansen-Wicksell Macro, and Blocking the Back Propagation Induction-Unraveling from the Long Run Omega Point: ... Whatever may be going on in the short run must thus be transitory in duration, moderate in its effects, and limited in the distance it can push the economy away from its proper long run equilibrium. And it certainly cannot keep it there. Not for long.
This is the real critique of Paul Krugman’s “depression economics”. Paul can draw his Hicksian IS-LM diagrams of an economy stuck in a liquidity trap...
He can draw his Wicksellian I=S diagrams of how the zero lower bound forces the market interest rate above the natural interest rate at which planned investment balances savings that would be expected were the economy at full employment...
Paul can show, graphically, that conventional monetary policy is then completely ineffective–swapping two assets that are perfect substitutes for each other. Paul can show, graphically, that expansionary fiscal policy is then immensely powerful and has no downside: it does not generate higher interest rates; it does not crowd out productive private investment; and, because interest rates are zero, it entails no financing burden and thus no required increase in future tax wedges. But all this is constrained and limited by the inescapable and powerful logic of the induction-unraveling propagating itself back through the game tree from the Omega Point that is the long run equilibrium. In the IS-LM diagram, the fact that the long run is out there means that even the contemplation of permanent expansion of the monetary base is rapidly moving the IS curve up and to the right, and thus leading the economy to quickly exit the liquidity trap. In the Wicksellian I=S diagram, the fact that the long run is out there means that even the contemplation of permanent expansion of the monetary base is rapidly moving the I=S curve up so that the zero lower bound will soon no longer constrain the economy away from its full-employment equilibrium.
The “depression economics” equilibrium Paul plots on his graph is a matter for today–a month or two, or a quarter or two, or at most a year or two. ...
Krugman: Backward Induction and Brad DeLong (Wonkish): Brad DeLong is, unusually, unhappy with my analysis in a discussion of the inflationista puzzle — the mystery of why so many economists failed to grasp the implications of a liquidity trap, and still fail to grasp those implications despite 6 years of being wrong. Brad sorta-kinda defends the inflationistas on the basis of backward induction; I find myself somewhat baffled by that defense.

Actually, I find myself baffled both theoretically and empirically. ...

In the end, while the post-2008 slump has gone on much longer than even I expected (thanks in part to terrible fiscal policy), and the downward stickiness of wages and prices has been more marked than I imagined, overall the model those of us who paid attention to Japan deployed has done pretty well — and it’s kind of shocking how few of those who got everything wrong are willing to learn from their failure and our success.
DeLong: Paul Krugman Was Right. I, Ken Rogoff, Marty Feldstein, and Many, Many Others Were Wrong: The question is: Why were we wrong? We had, after all, read, learned, and taught the same Hicks-Hansen-Wicksell-Metzler-Tobin macro that was Paul Krugman’s foundation. ...

I want to highlight one of Brad's points. Theoretical models often act as if there is only one type of demand shock, and as if the short-run depends upon a single variable, e.g. the time period when inflation expectations are wrong. But the short-run depends upon the type of recession we experience, and the variable that signals the length of the recovery will not be the same in every case. A monetary-induced recession will have a much shorter short-run than a balance sheet recession induced by a financial collapse, and a recession caused by an oil price shock will recover differently from both. Early in the Great Recession, policymakers, analysts, and most economists did not fully recognize that this recession truly was different, and hence required a different policy approach from the recessions in recent memory. Krugman, due to his work on Japan, did see this early on, but it took time for the notion of a balance sheet recession to take hold, and we never fully adapted fiscal policy to deal with this fact (e.g. sufficient help with rebuilding household balance sheets). To me this is one of the big lessons of the Great Recession -- we must figure out the type of recession we are experiencing, realize that the "short-run" will depend critically on the type of shock causing the recession, and adapt our policies accordingly. If we can do that, then maybe the short-run won't be a decade long the next time we have a balance sheet recession. And there will be a next time.

Monday, June 01, 2015

'The Case of the Missing Minsky'

Paul Krugman says I'm not upbeat enough about the state of macroeconomics:

The Case of the Missing Minsky: Gavyn Davies has a good summary of the recent IMF conference on rethinking macro; Mark Thoma has further thoughts. Thoma in particular is disappointed that there hasn’t been more of a change, decrying

the arrogance that asserts that we have little to learn about theory or policy from the economists who wrote during and after the Great Depression.

Maybe surprisingly, I’m a bit more upbeat than either. Of course there are economists, and whole departments, that have learned nothing, and remain wholly dominated by mathiness. But it seems to me that economists have done OK on two of the big three questions raised by the economic crisis. What are these three questions? I’m glad you asked. ...[continue]...

Sunday, May 31, 2015

'Has the Rethinking of Macroeconomic Policy Been Successful?'

The beginning of a long discussion from Gavyn Davies:

Has the rethinking of macroeconomic policy been successful?: The great financial crash of 2008 was expected to lead to a fundamental re-thinking of macro-economics, perhaps leading to a profound shift in the mainstream approach to fiscal, monetary and international policy. That is what happened after the 1929 crash and the Great Depression, though it was not until 1936 that the outline of the new orthodoxy appeared in the shape of Keynes’ General Theory. It was another decade or more before a simplified version of Keynes was routinely taught in American university economics classes. The wheels of intellectual change, though profound in retrospect, can grind fairly slowly.
Seven years after the 2008 crash, there is relatively little sign of a major transformation in the mainstream macro-economic theory that is used, for example, by most central banks. The “DSGE” (mainly New Keynesian) framework remains the basic workhorse, even though it singularly failed to predict the crash. Economists have been busy adding a more realistic financial sector to the structure of the model [1], but labour and product markets, the heart of the productive economy, remain largely untouched.
What about macro-economic policy? Here major changes have already been implemented, notably in banking regulation, macro-prudential policy and most importantly the use of the central bank balance sheet as an independent instrument of monetary policy. In these areas, policy-makers have acted well in advance of macro-economic researchers, who have been struggling to catch up. ...

There has been more progress on the theoretical front than I expected, particularly in adding financial sector frictions to the NK-DSGE framework and in overcoming the restrictions imposed by the representative agent model. At the same time, there has been less progress than I expected in developing alternatives to the standard models. As far as I can tell, a serious challenge to the standard model has not yet appeared. My biggest disappointment is how much resistance there has been to the idea that we need to even try to find alternative modeling structures that might do better than those in use now, and the arrogance that asserts that we have little to learn about theory or policy from the economists who wrote during and after the Great Depression.

Sunday, May 17, 2015

'Blaming Keynes'

Simon Wren-Lewis:

Blaming Keynes: A few people have asked me to respond to this FT piece from Niall Ferguson. I was reluctant to, because it is really just a bit of triumphalist Tory tosh. That such things get published in the Financial Times is unfortunate but I’m afraid not surprising in this case. However I want to write later about something else that made reference to it, so saying a few things here first might be useful.
The most important point concerns style. This is not the kind of thing an academic should want to write. It makes no attempt to be true to evidence, and just cherry picks numbers to support its argument. I know a small number of academics think they can drop their normal standards when it comes to writing political propaganda, but I think they are wrong to do so. ...

'Ed Prescott is No Robert Solow, No Gary Becker'

Paul Romer continues his assault on "mathiness":

Ed Prescott is No Robert Solow, No Gary Becker: In his comment on my Mathiness paper, Noah Smith asks for more evidence that the theory in the McGrattan-Prescott paper that I cite is any worse than the theory I compare it to by Robert Solow and Gary Becker. I agree with Brad DeLong’s defense of the Solow model. I’ll elaborate, by using the familiar analogy that theory is to the world as a map is to terrain.

There is no such thing as the perfect map. This does not mean that the incoherent scribblings of McGrattan and Prescott are on a par with the coherent, low-resolution Solow map that is so simple that all economists have memorized it. Nor with the Becker map that has become part of the everyday mental model of people inside and outside of economics.

Noah also notes that I go into more detail about the problems in the Lucas and Moll (2014) paper. Just to be clear, this is not because it is worse than the papers by McGrattan and Prescott or Boldrin and Levine. Honestly, I’d be hard pressed to say which is the worst. They all display the sloppy mixture of words and symbols that I’m calling mathiness. Each is awful in its own special way.

What should worry economists is the pattern, not any one of these papers. And our response. Why do we seem resigned to tolerating papers like this? What cumulative harm are they doing?

The resignation is why I conjectured that we are stuck in a lemons equilibrium in the market for mathematical theory. Noah’s jaded question–Is the theory of McGrattan-Prescott really any worse than the theory of Solow and Becker?–may be indicative of what many economists feel after years of being bullied by bad theory. And as I note in the paper, this resignation may be why empirically minded economists like Piketty and Zucman stay as far away from theory as possible. ...

[He goes on to give more details using examples from the papers.]

Friday, May 15, 2015

'Mathiness in the Theory of Economic Growth'

Paul Romer:

My Paper “Mathiness in the Theory of Economic Growth”: I have a new paper in the Papers and Proceedings Volume of the AER that is out in print and on the AER website. A short version of the supporting appendix is available here. It should eventually be available on the AER website but has not been posted yet. A longer version with more details behind the calculations is available here.

The point of the paper is that if we want economics to be a science, we have to recognize that it is not ok for macroeconomists to hole up in separate camps, one that supports its version of the geocentric model of the solar system and another that supports the heliocentric model. As scientists, we have to hold ourselves to a standard that requires us to reach a consensus about which model is right, and then to move on to other questions.

The alternative to science is academic politics, where persistent disagreement is encouraged as a way to create distinctive sub-group identities.

The usual way to protect a scientific discussion from the factionalism of academic politics is to exclude people who opt out of the norms of science. The challenge lies in knowing how to identify them.

From my paper:

The style that I am calling mathiness lets academic politics masquerade as science. Like mathematical theory, mathiness uses a mixture of words and symbols, but instead of making tight links, it leaves ample room for slippage between statements in natural versus formal language and between statements with theoretical as opposed to empirical content.

Persistent disagreement is a sign that some of the participants in a discussion are not committed to the norms of science. Mathiness is a symptom of this deeper problem, but one that is particularly damaging because it can generate a broad backlash against the genuine mathematical theory that it mimics. If the participants in a discussion are committed to science, mathematical theory can encourage a unique clarity and precision in both reasoning and communication. It would be a serious setback for our discipline if economists lose their commitment to careful mathematical reasoning.

I focus on mathiness in growth models because growth is the field I know best, one that gave me a chance to observe closely the behavior I describe. ...

The goal in starting this discussion is to ensure that economics is a science that makes progress toward truth. ... Science is the most important human accomplishment. An investment in science can offer a higher social rate of return than any other a person can make. It would be tragic if economists did not stay current on the periodic maintenance needed to protect our shared norms of science from infection by the norms of politics.

[I cut quite a bit -- see the full post for more.]

Saturday, May 02, 2015

'Needed: New Economic Frameworks for a Disappointing New Normal'

Brad DeLong ends a post on the need for "New Economic Frameworks for a Disappointing New Normal" with:

... Our government, here in the U.S. at least, has been starved of proper funding for infrastructure of all kinds since the election of Ronald Reagan. Our confidence in our institutions’ ability to manage aggregate demand properly is in shreds–and for the good reason of demonstrated incompetence and large-scale failure. Our political system now has a bias toward austerity and idle potential workers rather than toward expansion and inflation. Our political system now has a bias away from desirable borrow-and-invest. And the equity return premium is back to immediate post-Great Depression levels–and we also have an enormous and costly hypertrophy of the financial sector that is, as best as we can tell, delivering no social value in exchange for its extra size.
We badly need a new framework for thinking about policy-relevant macroeconomics given that our new normal is as different from the late-1970s as that era’s normal was different from the 1920s, and as that era’s normal was different from the 1870s.
But I do not have one to offer.

Friday, April 24, 2015

'Unit Roots, Redux'

John Cochrane weighs in on the discussion of unit roots:

Unit roots, redux: Arnold Kling's askblog and Roger Farmer have a little exchange on GDP and unit roots. My two cents here.
I did a lot of work on this topic a long time ago, in “How Big is the Random Walk in GNP?” (the first one), “Permanent and Transitory Components of GNP and Stock Prices” (the last, and I think best, one), “Multivariate estimates” with Argia Sbordone, and “A critique of the application of unit root tests”, particularly appropriate to Roger's battery of tests.
The conclusions, which I still think hold up today:
Log GDP has both random walk and stationary components. Consumption is a pretty good indicator of the random walk component. This is also what the standard stochastic growth model predicts: a random walk technology shock induces a random walk component in output but there are transitory dynamics around that value.
A linear trend in GDP is only visible ex-post, like a "bull" or "bear" market.  It's not "wrong" to detrend GDP, but it is wrong to forecast that GDP will return to the linear trend or to take too seriously correlations of linearly detrended series, as Arnold mentions. Treating macro series as cointegrated with one common trend is a better idea.
Log stock prices have random walk and stationary components. Dividends are a pretty good indicator of the random walk component. (Most recently, here.) ...
Both Arnold and Roger claim that unemployment has a unit root. Guys, you must be kidding. ...

He goes on to explain.
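
[A minimal simulation, ours rather than Cochrane’s code, of the decomposition he describes: log GDP built as a random walk plus a stationary AR(1) component. A fitted linear trend tracks the series ex post, but forecasting a return to that trend would be a mistake, because the random-walk component never reverts.]

    # Random walk + stationary component, and the ex-post illusion of a linear trend.
    import numpy as np

    rng = np.random.default_rng(1)
    T = 400
    perm = np.cumsum(rng.normal(0.005, 0.01, T))    # permanent (random-walk) part
    trans = np.zeros(T)                             # transitory AR(1) part
    for t in range(1, T):
        trans[t] = 0.8 * trans[t - 1] + rng.normal(0, 0.01)
    log_gdp = perm + trans
    t_idx = np.arange(T)
    slope, intercept = np.polyfit(t_idx, log_gdp, 1)  # ex-post fitted linear trend
    detrended = log_gdp - (intercept + slope * t_idx)
    # detrended looks mean-reverting in-sample, but out of sample the permanent
    # component wanders off: there is no forecastable return to the old trend line.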