Category Archive for: Methodology

Tuesday, February 06, 2018

On the Uses (and Abuses) of Economath: The Malthusian Models

Douglas Campbell:

On the Uses (and Abuses) of Economath: The Malthusian Models: Many American undergraduates in Economics interested in doing a Ph.D. are surprised to learn that the first year of an Econ Ph.D. feels much more like entering a Ph.D. in solving mathematical models by hand than it does like learning economics. Typically, there is very little reading or writing involved, but loads and loads of fast algebra is required. Why is it like this? ...

Saturday, July 15, 2017

How to Think Like an Economist (If, That Is, You Wish to...)

Brad DeLong:

How to Think Like an Economist (If, That Is, You Wish to...): I have long had a "thinking like an economist" lecture in the can. But I very rarely give it. It seems to me that it is important stuff—that people really should know it before they begin studying economics, because it would make studying economics much easier. But it also seems to me—usually—that it is pointless to give it at the start of a course to newbies: they just won't understand it. And it also seems to me—usually—that it is also pointless to give it to students at the end of their college years: they either understand it already, or it is too late.
By continuity that would seem to imply that there is an optimal point in the college curriculum to teach this stuff. But is that true?
What do you think?
+ + + +
Every new subject requires new patterns of thought; every intellectual discipline calls for new ways of thinking about the world. After all, that is what makes it a discipline: a discipline that allows people to think about a subject in some new way. Economics is no exception.
In a way, learning an intellectual discipline like economics is similar to learning a new language or being initiated into a club. Economists’ way of thinking allows us to see the economy more sharply and clearly than we could in other ways. (Of course, it can also cause us to miss certain relationships that are hard to quantify or hard to think of as purchases and sales; that is why economics is not the only social science, and we need sociologists, political scientists, historians, psychologists, and anthropologists as well.) In this chapter we will survey the intellectual landmarks of economists’ system of thought, in order to help you orient yourself in the mental landscape of economics.
Economics: What Kind of Discipline Is It? ...

Saturday, July 08, 2017

What Economics Models Really Say

"Why is there such an enormous gulf between what economists know and what they say in public?":

What Economics Models Really Say: A Review of Economics Rules: The Rights and Wrongs of the Dismal Science by Dani Rodrik (Norton, 2015), by Peter Turchin, University of Connecticut and Seshat: Global History Databank. [This work is made available under the terms of the Creative Commons Attribution 4.0 license, http://creativecommons.org/licenses/by/4.0/ ]
The blurb on the jacket of Economics Rules says, “In this sharp, masterful book, Dani Rodrik, a leading critic from within, takes a close look at economics to examine when it falls short and when it works, to give a surprisingly upbeat account of the discipline.” I heartily agree with nearly all of this, with the exception of the “upbeat” part. As I will explain toward the end of this review, my view of economics, and, especially, of the role that economists play in public policy, is much more critical.
A central theme in the book is the role of mathematical models in economics. Formal models in economics and other social sciences are often disparaged. According to the critics (who include some economists, many other social scientists, and the overwhelming majority of historians), models oversimplify complex reality, employ unrealistic assumptions, and deny “agency” to human beings.
Rodrik rejects this critique. According to him, mathematical models— “simplifications designed to show how specific mechanisms work by isolating them from other, confounding effects”—are the true strength of economics. A simplified description of reality is not a shortcoming, it’s the essence of a good model.
My own training was in mathematical biology, and as a graduate student during the 1980s I saw the tail end of the “Math Wars” in ecology. By the 1990s the war was won, and any respectable department of ecology and evolution had to have on faculty at least one modeler. Today, the great majority of ecologists agree that a science cannot become a Science until and unless it develops a well-articulated body of mathematical theory.
In the social sciences, different disciplines made this transition at different times, with economics leading the pack and laggards, like history, undergoing this transition only now (hence cliodynamics—“history as science”; it’s worth noting that most American historians consider history not as a social science, but as one of the humanities).
I was, thus, a bit bemused to read Rodrik’s defense of mathematical models (haven’t economists resolved the Math Wars already?). But it’s an excellent defense—all aspiring cliodynamicists should read Economics Rules, if only for this reason.
The list of reasons why we need mathematical models in a scientific discipline is familiar to all who have extensive experience in modeling (and for those who don’t have such experience, I suggest you read Chapters 1 and 2 of Economics Rules). Models clarify the logic of hypotheses, ensure that predictions indeed follow from the premises, open our eyes to counterintuitive possibilities, suggest how predictions could be tested, and enable accumulation of knowledge. The advantage of clarity that mathematical models offer scientists is nicely illustrated in the following quote from Economics Rules: “We still have endless debates today about what Karl Marx, John Maynard Keynes, or Joseph Schumpeter really meant. … By contrast, no ink has ever been spilled over what Paul Samuelson, Joe Stiglitz, or Ken Arrow had in mind when they developed the theories that won them their Nobel.” The difference? The first three formulated their theories largely in verbal form, while the latter three developed mathematical models.
The value of the book, however, is in more than just weighing in on the usefulness of mathematical models. As Rodrik notes early in the book, “economics is by and large the only social science that remains almost entirely impenetrable to those who have not undertaken the requisite apprenticeship in graduate school.” And economics is “impenetrable” not because of mathematical models, at least not to someone trained in mathematical natural sciences (the math is universal), but because economists have developed an entirely distinct jargon that sets them apart from other disciplines and creates artificial barriers to understanding the many truly worthwhile insights from economics models.
Because I have not “undertaken the requisite apprenticeship”, I found very useful Rodrik’s explanations of the insights generated by such classic models in economics as the First Fundamental Theorem of Welfare Economics, the Principle of Comparative Advantage, and the General Theory of Second Best. Particularly illuminating was the discussion of what happens to the fundamental result of a model when we start systematically relaxing various assumptions on which it depends. This part of the book, together with the references that Rodrik provides, could serve as a basis for an excellent mini-course on what economic theory really tells us.
And a general take-home message that emerges from this discussion is that if we want to understand Big Questions—when do markets work or fail, what makes economies grow, and what are the effects of deficit spending—there is not one fundamental model, “the Model”. Instead, we need to study an array of models, each telling a partial story.
So far so good. But Rodrik, in my opinion, goes too far in denying the value of general theory. At one point he writes, “society does not have fundamental laws— at least, not quite in the same way that nature does.” And: “the same theory of evolution applies in both Northern and Southern Hemispheres,” but “economic models are different.”
Not really. Let’s take the theory of evolution. It’s not a single model. It’s a theoretical framework that includes hundreds, perhaps thousands of special case models, each telling only a partial story. To give an example, textbooks on evolutionary theory often start with a single-locus two-allele model (which gives us the famous Hardy-Weinberg Equilibrium). But you will need different models for haploid organisms (such as bacteria, who have a single unpaired chromosome), or for organisms reproducing asexually; and yet another set of models for phenotypic selection. Despite such diversity of modeling approaches, there is a theoretical unity in evolutionary biology. In particular, the conceptual framework of evolutionary theory provides a set of guidelines for the theoreticians on which model to use in which context.
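For readers who have not seen it, the single-locus two-allele result mentioned above can be written in one line (this is the standard textbook statement, not anything specific to Turchin's argument): if the two alleles have frequencies $p$ and $q = 1 - p$ and mating is random, genotype frequencies settle at

$$p^2 \;(AA), \qquad 2pq \;(Aa), \qquad q^2 \;(aa), \qquad p^2 + 2pq + q^2 = (p+q)^2 = 1,$$

and stay there in the absence of selection, mutation, migration, and drift. Even this simplest model is only one special case within a much larger theoretical framework.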
And I don’t see how the situation is different in economics (and, more generally, social sciences). Yes, there is a multiplicity of models in economics, but you can’t just select one randomly (or worse, “cherry pick” among the results to suit your ideological agenda). There are rules for choosing appropriate models, and Rodrik devotes Chapter 3 of his book to explaining general principles of model selection in economics. In other words, theoretical frameworks are not simply compendia of models, they also include model selection rules (and a few other things).
Rodrik, thus, sells short the potential for general theory in social sciences. Naturally, economics, in particular, does not have such an elaborate, well-articulated, and empirically validated theoretical framework as evolutionary biology (and evolutionary biology, in turn, lags behind many subdisciplines of physics). But who is to say that economics will not develop to the same level in the future? We’ll see if we live long enough.
Let’s now shift gears and talk about Chapter 5, “When Economists Go Wrong.” To make the following discussion concrete, I will focus on a particular theoretical result in economics, the Principle of Comparative Advantage, and what this principle implies for trade policy. In the popular press, of course, comparative advantage is always used as a justification for advocating free trade. Rodrik does an admirable job explaining why, under many conditions, free trade can lead to really negative consequences for economies and populations of countries that open themselves to international competition. For example, there is strategic behavior. A country may choose to protect its domestic industry with high tariffs and subsidize its exports in order to gain market share. Perhaps its leaders don’t understand the Principle of Comparative Advantage, not having the benefit of apprenticeship in economics. Or perhaps they care more about their country's long-term survival in an anarchic international environment than about making immediate profit.
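As a reminder of what the principle itself asserts, here is a minimal numerical sketch in the spirit of Ricardo's textbook example; the unit labor costs below are invented purely for illustration:

```python
# Hypothetical unit labor requirements (hours per unit of output).
# Home is absolutely worse at producing both goods, yet gains from trade exist
# because relative (opportunity) costs differ across the two countries.
hours = {
    "Home":    {"cloth": 100, "wine": 120},
    "Foreign": {"cloth": 90,  "wine": 80},
}

for country, h in hours.items():
    cost_of_cloth_in_wine = h["cloth"] / h["wine"]  # wine forgone per unit of cloth
    print(f"{country}: 1 unit of cloth costs {cost_of_cloth_in_wine:.2f} units of wine")

# Home: 1 unit of cloth costs 0.83 units of wine
# Foreign: 1 unit of cloth costs 1.12 units of wine
# Home has the comparative advantage in cloth (the lower opportunity cost) and
# Foreign in wine, so both gain by specializing and trading at any price of
# cloth between 0.83 and 1.12 units of wine.
```

Rodrik's point, echoed by Turchin below, is that this clean result holds only under the assumptions baked into the model; the rest of the chapter is about what happens when those assumptions fail.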
In one particularly revealing passage in the book, Rodrik writes,
consider how opening up trade—one of the key items of the Washington Consensus—was supposed to work. As barriers to imports were slashed, firms that were unable to compete internationally would shrink or close down, releasing their resources (workers, capital, managers) to be employed in other parts of the economy. More efficient, internationally competitive sectors, meanwhile, would expand, absorbing those resources and setting the stage for more rapid economic growth. In Latin American and African countries that adopted this strategy, the first part of this prediction largely materialized, but not the second. Manufacturing firms, previously protected by import barriers, took a big hit. But the expansion of new, export-oriented activities based on modern technologies lagged. Workers flooded less productive, informal service sectors such as petty trading instead. Overall productivity suffered. [italics are mine]
Washington Consensus outcomes in Latin America and Africa stand in sharp contrast with the experience of Asian countries. … Instead of liberalizing imports early on, South Korea, Taiwan, and later China all began their export push by directly subsidizing homegrown manufacturing. … All of them undertook industrial policies to nurture new manufacturing sectors and reduce their economies’ dependence on natural resources.
As Rodrik correctly stresses, these cases do not prove that standard economics is wrong. In short, “someone who advocates free trade because it will benefit everyone probably does not understand how comparative advantage really works.”
When it came to “the way markets really work—or fail to work—in low-income settings with few firms, high barriers to entry, poor information, and malfunctioning institutions, these alternative models proved indispensable”—by telling us why countries that followed the Washington Consensus failed, and those that threw it to the wind succeeded.
But then how does one explain that nearly all economists—96 percent—strongly agree with the following statement: “Free trade improves the productive efficiency and offers consumers better choices, and in the long run these gains are much larger than any effects on unemployment” (Politicians Should Listen to Economists on Free Trade, by Bryan Riley, The Heritage Foundation, Feb. 1, 2013; this was from a survey conducted by the University of Chicago’s Booth School of Business)?
Rodrik argues that “the problem has to do more with the way economists present themselves in public than with the substance of the discipline.” “In public, the tendency is to close ranks and support free markets and free trade.”
But why is there such an enormous gulf between what economists know and what they say in public? One possible explanation is that policies, such as free trade, while often harming broad swaths of populations, tend to benefit narrow segments of economic elites. Perhaps the critics from the left (and a few “heterodox economists”) are right when they charge that economists speak what the powers-that-be want us to hear.
Whatever the explanation, I cannot agree that Rodrik’s book gives us “a surprisingly upbeat account of the discipline.” Economics may be a vibrant discipline, but most of the richness of its insights is hidden in academic publications behind the shield of specialist jargon, impenetrable to those who have not taken the requisite apprenticeship. And by closing ranks and unconditionally supporting free markets and free trade, economists have failed us, the general public. This is why we need more books like Economics Rules—so that we can find out what economics models really tell us.

Thursday, May 18, 2017

Darwin Visits Wall Street

“biology is a closer fit to economics than physics”:

Darwin Visits Wall Street, by Peter Dizikes, MIT News: If you have money in the stock market, then you are probably anticipating a profit over the long term — a rational expectation given that stocks have historically performed well. But when stocks plunge, even for one day, you may also feel some fear and want to dump all those stress-creating equities.
There is a good reason for this: You’re human.
And that means, to generalize, that you have both a rational side and some normal human emotions. To Andrew Lo, the Charles E. and Susan T. Harris Professor and director of the Laboratory for Financial Engineering at the MIT Sloan School of Management, accepting this basic point means we should also rethink some common ideas about how markets work.
In economics and finance, after all, there is a long tradition of thinking about investors as profit-maximizing rational actors, while imagining that markets operate near a state of perfect efficiency. That sounds nice in theory. But evidence shows that this view is not sufficient for understanding the radical swings that market sentiment creates.  
“When you and I are making investment decisions independently, we’ll exhibit different behavior,” Lo says. Those varied decisions help keep markets stable, most of the time. “But when we all feel threatened at the same time, we’re likely to react in the same way. And if we all start selling stocks at once, we get a market crash and panic. Fear can overwhelm rationality.”
Now Lo has written a new book about the subject, “Adaptive Markets,” published this month by Princeton University Press. In the book, he draws on insights from evolutionary theory, psychology, neuroscience, and artificial intelligence to paint a new picture of investors. Instead of regarding investors simply as either rational or irrational, Lo explains how their behavior may be “maladaptive” — unsuited to the rapidly changing environments that shifting markets present.  
In so doing, Lo would like to resolve the divergence between the realities of human behavior and the long-standing “efficient markets hypothesis” (EMH) of finance with his own “adaptive markets hypothesis,” to account for the dynamics of markets — and to provide new regulatory mechanisms to better ward off damaging crashes.
“It takes a theory to beat a theory,” Lo quips, “and behavioralists haven’t yet put forward a theory of human behavior.”
Path-dependent
To get a grip on Lo’s thinking, briefly examine both sides of the EMH debate. On the one hand, markets do exhibit significant efficiencies. Do you own a mutual fund that tracks a major stock-market index? That’s because it is very hard for individual investors or fund managers to beat indexes over an extended period of time. On the other hand, based on what we know about market swings and investor behavior, it seems a stretch to think markets are always efficient.
“The EMH is a very powerful theory that has added a great deal of value to investors, portfolio managers, and regulators,” Lo says. “I don’t want to be viewed as criticizing it. What I’m hoping to do is to expand its reach, by explaining under which conditions it’s likely to work well, and under which other conditions we require a different approach.”
As Lo notes in the book, the EMH assumes that individuals always maximize their expected utility — they find the optimal way to spend and invest, all the time. Lo’s adaptive markets hypothesis relaxes this dictum on two counts. First, a successful investing adaptation doesn’t have to be the best of all possible adaptations — it just has to work fairly well at a given time.
And second, Lo’s adaptive markets hypothesis does not hold that people will constantly be finding the best possible investments. Instead, as he writes in the book, “consumer behavior is highly path-dependent,” based on what has worked well in the past.
Given those conditions, the market equivalent of natural selection weeds out poor investment strategies, Lo writes, and “ensures that consumer behavior is, while not necessarily optimal or ‘rational,’ good enough.” Not perfect, but decent.
In this light, consider fund managers who do beat the big stock indexes for a while. In many cases, their successes are followed by years of poor performance. Why? Because they did not keep adapting to a changing investing environment. This familiar dynamic, Lo contends, is one reason we should drop the physics-inspired notions of the market as an efficient mechanism, and think of it in evolutionary terms.
Or, as Lo writes in the book, “biology is a closer fit to economics than physics.” As the physicist Richard Feynman put it, “Imagine how much harder physics would be if electrons had feelings.”
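A toy simulation makes the selection logic of the last few paragraphs concrete. This is not Lo's model, just a sketch under invented numbers: each strategy's wealth compounds at its own return, so a strategy that stops fitting the environment loses share without anyone being "irrational."

```python
# Two hypothetical strategies in two regimes; all returns are made up for illustration.
returns = {
    "calm":     {"momentum": 0.08, "value": 0.04},
    "stressed": {"momentum": -0.10, "value": 0.02},
}

wealth = {"momentum": 1.0, "value": 1.0}
history = ["calm"] * 10 + ["stressed"] * 10   # the environment shifts halfway through

for regime in history:
    for strategy in wealth:
        wealth[strategy] *= 1 + returns[regime][strategy]

total = sum(wealth.values())
for strategy, w in wealth.items():
    print(f"{strategy}: wealth {w:.2f}, share of total {w / total:.0%}")
# momentum dominates during the calm decade, then its share collapses once the
# environment changes and it keeps running the old playbook: the fund-manager
# dynamic described above.
```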
Looking for policy impact
“Adaptive Markets” does not represent the first time Lo has put some of these ideas into print. It is the culmination of a long-term line of inquiry, and the most detailed, extended treatment he has given to the concept.
The book is written for a general audience but has received a wide hearing in academia. Nobuhiro Kiyotaki, an economist at Princeton University, calls “Adaptive Markets” a “wonderful book” that “presents many valuable findings” and “is itself a manifestation of the important finding that rational thinking and emotion go together.”
Lo says his hope for the book, however, is not just to change some minds among the public and other scholars, but to reach policymakers. Having served on multiple government advisory panels about regulation, Lo believes we need regulations that are more generally focused on limiting risk and large-scale crashes, rather than seeking to assess the legitimacy of umpteen new financial instruments.
The analogy Lo likes to make is that finance needs an equivalent of the National Transportation Safety Board, the federal agency that investigates the systemic causes of aviation accidents, among other things, and whose existence has helped engender a period of unprecedented air safety.
Even in the run-up to the 2008 financial-sector crisis, Lo contends, the notorious bond markets trading securities backed by subprime mortgages, and their derivatives, were not deeply “irrational.” After all, those markets had winners as well as losers; the problems included the way the markets were constructed and the opportunity for firms to wildly increase their risks while seeking big payoffs. 
“It’s not so much that market prices were wrong, it’s that the policies and incentives were flawed,” Lo contends.
That might generate some heated debate, but Lo says it is a discussion he welcomes.
“We aren’t really getting traction arguing either for or against efficient markets,” Lo says. “So maybe it’s time for a new perspective.”

Tuesday, April 25, 2017

What Can Be Done to Improve the Episteme of Economics?

Brad DeLong:

What Can Be Done to Improve the Episteme of Economics?: I think this is needed:
INET: Education Initiative: "We are thrilled that you are joining us at the Berkeley Spring 2017 Education Convening, Friday, April 28th 9am-5pm Blum Hall, B100 #5570, Berkeley, CA 94720-5570... https://www.ineteconomics.org/education/curricula-modules/education-initiative
...Sign up here: https://fs24.formsite.com/inet/form97/index.html or email [email protected]...
I strongly share INET's view that things have gone horribly wrong, and that it is important to listen, learn, and brainstorm about how to improve economics education.
Let me just note six straws in the wind:
The macro-modeling discussion is wrong: The brilliant Olivier Blanchard https://piie.com/blogs/realtime-economic-issues-watch/need-least-five-classes-macro-models: "The current core... RBC (real business cycle) structure [model] with one main distortion, nominal rigidities, seems too much at odds with reality.... Both the Euler equation for consumers and the pricing equation for price-setters seem to imply, in combination with rational expectations, much too forward-lookingness.... The core model must have nominal rigidities, bounded rationality and limited horizons, incomplete markets and the role of debt..."
The macro-finance discussion is wrong: The efficient market hypothesis (EMH) claimed that movements in stock indexes were driven either by (a) changing rational expectations of future cash flows or by (b) changing rational expectations of interest rates on investment-grade bonds, so that expected returns were either (a) unchanged or (b) moved roughly one-for-one with returns on investment grade bonds. That claim lies in total shreds. Movements in stock indexes have either no utility-theoretic rationale at all or must be ascribed to huge and rapid changes in the curvature of investors' utility functions. Yet Robert Lucas claims that the EMH is perfect, perfect he tells us http://www.economist.com/node/14165405: "Fama tested the predictions of the EMH.... These tests could have come out either way, but they came out very favourably.... A flood of criticism which has served mainly to confirm the accuracy of the hypothesis.... Exceptions and 'anomalies' [are]... for the purposes of macroeconomic analysis and forecasting... too small to matter..."
The challenge posed by the 2007-9 financial crisis is too-often ignored: Tom Sargent https://www.minneapolisfed.org/publications/the-region/interview-with-thomas-sargent: "I was at Princeton then.... There were interesting discussions of many aspects of the financial crisis. But the sense was surely not that modern macro needed to be reconstructed.... Seminar participants were in the business of using the tools of modern macro, especially rational expectations theorizing, to shed light on the financial crisis..."
What smart economists have to say about policy is too-often dismissed: Then-Treasury Secretary Tim Geithner, according to Zach Goldfarb https://www.washingtonpost.com/blogs/wonkblog/post/geithner-stimulus-is-sugar-for-the-economy/2011/05/19/AGz9JvLH_blog.html: "The economic team went round and round. Geithner would hold his views close, but occasionally he would get frustrated. Once, as [Christina] Romer pressed for more stimulus spending, Geithner snapped. Stimulus, he told Romer, was 'sugar', and its effect was fleeting. The administration, he urged, needed to focus on long-term economic growth, and the first step was reining in the debt.... In the end, Obama signed into law only a relatively modest $13 billion jobs program, much less than what was favored by Romer and many other economists in the administration..."
The competitive model has too great a hold: "Brad, you're the only person I've ever heard say that Card-Krueger changed their mind on how much market power there is in the labor market..."
The problem is of very long standing indeed: John Maynard Keynes (1926) https://www.panarchy.org/keynes/laissezfaire.1926.html: "Some of the most important work of Alfred Marshall, to take one instance, was directed to the elucidation of the leading cases in which private interest and social interest are not harmonious. Nevertheless, the guarded and undogmatic attitude of the best economists has not prevailed against the general opinion that an individualistic laissez-faire is both what they ought to teach and what in fact they do teach..."

Monday, April 10, 2017

Blanchard: On the Need for (At Least) Five Classes of Macro Models

Olivier Blanchard:

On the Need for (At Least) Five Classes of Macro Models: One of the best pieces of advice Rudi Dornbusch gave me was: Never talk about methodology. Just do it. Yet, I shall disobey and take the plunge.
The reason and the background for this blog is a project started by David Vines about DSGEs, how they performed in the crisis, and how they could be improved.[1] Needled by his opinions, I wrote a PIIE Policy Brief. Then, in answer to the comments to the brief, I wrote a PIIE RealTime blog. And yet a third, another blog, each time hopefully a little wiser. I thought I was done, but David organized a one-day conference on the topic, from which I learned a lot and which has led me to write my final (?) piece on the topic.
This piece has a simple theme: We need different types of macro models. One type is not better than the other. They are all needed, and indeed they should all interact. Such remarks would be trivial and superfluous if that proposition were widely accepted, and there were no wars of religion. But it is not, and there are.
Here is my attempt at typology, distinguishing between five types. (I limit myself to general equilibrium models. Much of macro must, however, be about building the individual pieces, constructing partial equilibrium models, and examining the corresponding empirical micro and macro evidence, pieces on which the general equilibrium models must then build.) In doing so, I shall, with apologies, repeat some of what was in the previous blogs. ...

Friday, February 17, 2017

NAIRU Bashing

Simon Wren-Lewis:

NAIRU bashing: The NAIRU is the level of unemployment at which inflation is stable. Ever since economists invented the concept people have poked fun at how difficult to measure and elusive the NAIRU appears to be, and these articles often end with the proclamation that it is time we ditched the concept. Even good journalists can do it. But few of these attempts to trash the NAIRU answer a very simple and obvious question - how else do we link the real economy to inflation? ...
The NAIRU is one of those economic concepts which is essential to understand the economy but is extremely difficult to measure. ...
While we should not be obsessed by the 1970s, we should not wipe it from our minds either. Then policy makers did in effect ditch the NAIRU, and we got uncomfortably high inflation. In 1980 in the US and UK policy changed and increased unemployment, and inflation fell. There is a relationship between inflation and unemployment, but it is just very difficult to pin down. For most macroeconomists, the concept of the NAIRU really just stands for that basic macroeconomic truth. ...
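One standard way to write down the link Wren-Lewis is describing (the textbook accelerationist Phillips curve, not a formulation specific to his post) is

$$\pi_t = \pi_{t-1} - \alpha\,(u_t - u^{*}) + \varepsilon_t, \qquad \alpha > 0,$$

where $u^{*}$ is the NAIRU. With unemployment below $u^{*}$ inflation ratchets upward period after period, above $u^{*}$ it drifts down, and only at $u_t = u^{*}$ is inflation stable, which is exactly the definitional claim at the start of the excerpt.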

Tuesday, February 07, 2017

The Great Recession: A Macroeconomic Earthquake

Larry Christiano on why the Great Recession happened, why it lasted so long, why it wasn't foreseen, and how it’s changing macroeconomic theory (the excerpt below is about the last of these, how it's changing theory):

The Great Recession: A Macroeconomic Earthquake, Federal Reserve Bank of Minneapolis: ...
Impact on macroeconomics
The Great Recession is having an enormous impact on macroeconomics as a discipline, in two ways. First, it is leading economists to reconsider two theories that had largely been discredited or neglected. Second, it has led the profession to find ways to incorporate the financial sector into macroeconomic theory.

Neglected paradigms
At its heart, the narrative described above characterizes the Great Recession as the response of the economy to a negative shock to the demand for goods all across the board. This is very much in the spirit of the traditional macroeconomic paradigm captured by the famous IS-LM (or Hicks-Hansen) model, which places demand shocks like this at the heart of its theory of business cycle fluctuations. Similarly, the paradox-of-thrift argument is also expressed naturally in the IS-LM model.
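For reference, the IS-LM apparatus invoked here reduces to two textbook relationships (a generic statement, not the particular model used in the essay):

$$\text{IS: } Y = C(Y - T) + I(r) + G, \qquad \text{LM: } \frac{M}{P} = L(Y, r).$$

An across-the-board fall in the demand for goods shifts the IS curve inward, lowering output and the interest rate at a given price level, which is why a broad negative demand shock sits so naturally in this framework.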

The IS-LM paradigm, together with the paradox of thrift and the notion that a decision by a group of people could give rise to a welfare-reducing drop in output, had been largely discredited among professional macroeconomists since the 1980s. But the Great Recession seems impossible to understand without invoking paradox-of-thrift logic and appealing to shocks in aggregate demand. As a consequence, the modern equivalent of the IS-LM model—the New Keynesian model—has returned to center stage. (To be fair, the return of the IS-LM model began in the late 1990s, but the Great Recession dramatically accelerated the process.)

The return of the dynamic version of the IS-LM model is revolutionary because that model is closely allied with the view that the economic system can sometimes become dysfunctional, necessitating some form of government intervention. This is a big shift from the dominant view in the macroeconomics profession in the wake of the costly high inflation of the 1970s. Because that inflation was viewed as a failure of policy, many economists in the 1980s were comfortable with models that imply markets work well by themselves and government intervention is typically unproductive.

Accounting for the financial sector
The Great Recession has had a second important effect on the practice of macroeconomics. Before the Great Recession, there was a consensus among professional macroeconomists that dysfunction in the financial sector could safely be ignored by macroeconomic theory. The idea was that what happens on Wall Street stays on Wall Street—that is, it has as little impact on the economy as what happens in Las Vegas casinos. This idea received support from the U.S. experiences in 1987 and the early 2000s, when the economy seemed unfazed by substantial stock market volatility. But the idea that financial markets could be ignored in macroeconomics died with the Great Recession.

Now macroeconomists are actively thinking about the financial system, how it interacts with the broader economy and how it should be regulated. This has necessitated the construction of new models that incorporate finance, and the models that are empirically successful have generally integrated financial factors into a version of the New Keynesian model, for the reasons discussed above. (See, for example, Christiano, Motto and Rostagno 2014.)

Economists have made much progress in this direction, too much to summarize in this brief essay. One particularly notable set of advances is seen in recent research by Mark Gertler, Nobuhiro Kiyotaki and Andrea Prestipino. (See Gertler and Kiyotaki 2015 and Gertler, Kiyotaki and Prestipino 2016.) In their models, banks finance long-term assets with short-term liabilities. This liquidity mismatch between assets and liabilities captures the essential reason that real world financial institutions are vulnerable to runs. As such, the model enables economists to think precisely about the narrative described above (and advocated by Bernanke 2010 and others) about what launched the Great Recession in 2007. Refining models of this kind is essential for understanding the root causes of severe economic downturns and for designing regulatory and other policies that can prevent a recurrence of disasters like the Great Recession.
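A toy calculation illustrates the liquidity-mismatch logic (a generic Diamond-Dybvig-style sketch with invented numbers, not the Gertler-Kiyotaki-Prestipino model itself):

```python
# Deposits are withdrawable on demand; long-term assets pay more at maturity
# than they fetch if liquidated early. Whether the bank survives therefore
# depends on how many depositors run. All numbers are hypothetical.
deposits = 100.0            # short-term liabilities
maturity_value = 110.0      # value of assets if held to maturity
fire_sale_value = 70.0      # value of assets if sold today

def survives(withdrawal_share):
    """Can early withdrawals be covered by liquidating at fire-sale prices?"""
    return withdrawal_share * deposits <= fire_sale_value

for share in (0.2, 0.5, 0.8):
    print(f"{share:.0%} withdraw early -> bank {'survives' if survives(share) else 'fails'}")
# 20% withdraw early -> bank survives
# 50% withdraw early -> bank survives
# 80% withdraw early -> bank fails
# If every depositor expects the others to run, running first is individually
# rational, even though all are better off waiting for the assets to pay 110.
```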

Monday, January 09, 2017

Narrative Economics and the Laffer Curve

Tim Taylor:

Narrative Economics and the Laffer Curve: Robert Shiller delivered the Presidential Address for the American Economic Association on the subject of "Narrative Economics" in Chicago on January 7, 2017. A preliminary version of the underlying paper, together with slides from the presentation, is available here.

Shiller's broad point was that the key distinguishing trait of human beings may be that we  organize what we know in the form of stories.  He argues:

"Some have suggested that it is stories that most distinguish us from animals, and even that our species be called Homo narrans (Fisher 1984) or Homo narrator (Gould 1994) or Homo narrativus (Ferrand and Weil 2001) depending on whose Latin we use.  Might this be a more accurate description than Homo sapiens, i.e., wise man? Or might we say "narrative is intelligence" (Lo, 2007), with all of its limitations? It is more flattering to think of ourselves as Homo sapiens, but not necessarily more accurate."

Shiller goes on to make a case that narratives play a role in economic activity: for example, in the way people acted during the steep recession of 1920-21 and the Great Depression, as well as in the Great Recession and the most recent election. To me, one of his themes is that economists should seek to bring the narratives that economic actors were telling themselves at these times into their actual analysis, by applying epidemiology models to examine the actual spread of narratives, rather than bewailing narratives as a sort of unfair complication for the purity of our economic models.

Near the start, Shiller offers the Laffer Curve as an example of a narrative that had some lasting force. For those not familiar with the story, here's how Shiller tells it (footnotes omitted):

Let us consider as an example the narrative epidemic associated with the Laffer curve, a diagram created by economist Arthur Laffer ... The story of the Laffer curve did not go viral in 1974, the reputed date when Laffer first introduced it. Its contagion is explained by a literary innovation that was first published in a 1978 article in National Affairs by Jude Wanniski, an editorial writer for the Wall Street Journal. Wanniski wrote the colorful story about Laffer sharing a steak dinner at the Two Continents [restaurant] in Washington D.C. in 1974 with top White House powers Dick Cheney [at the time, a Deputy Assistant to President Ford, later to be Vice President] and Donald Rumsfeld [at the time Chief of Staff to President Ford, later to be Secretary of Defense]. Laffer drew his curve on a napkin at the restaurant table. When news about the "curve drawn on a napkin" came out, with Wanniski's help, the story surprisingly went viral, so much that it is now commemorated. A napkin with the Laffer curve can be seen at the National Museum of American History ...

Why did this story go viral? Laffer himself said after the Wanniski story exploded that he himself could not remember the event, which had taken place four years earlier. But Wanniski was a journalist who sensed that he had the elements of a good story. The key idea as Wanniski presented it is, indeed, punchy: At a zero-percent tax rate, the government collects no revenue. At a 100% tax rate the government would also collect no revenue, because people will not work if all the income is taken. Between the two extremes, the curve, relating tax revenue to tax rate, must have an inverted U shape. ...

Here is a notion of economic efficiency easy enough for anyone to understand. Wanniski suggested, without any data, that we are on the inefficient side of the Laffer curve. Laffer's genius was in narratives, not data collection. The drawing of the Laffer curve seems to suggest that cutting tax rates would produce a huge windfall in national income. To most quantitatively-inclined people unfamiliar with economics, this explanation of economic inefficiency was a striking concept, contagious enough to go viral, even though economists protested that we are not actually on the inefficient side of the Laffer Curve (Mirowski 1982). It is apparently impossible to capture why it is doubtful that we are on the inefficient side of the Laffer curve in so punchy a manner that it has the ability to stifle the epidemic. Years later Laffer did refer broadly to the apparent effects of historic tax cuts (Laffer 2004); but in 1978 the narrative dominated. To tell the story really well one must set the scene at the fancy restaurant, with powerful Washington people and the napkin.
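The inverted-U logic in the quoted passage is easy to make concrete with a toy calculation; the linear shrinkage of the tax base assumed below is purely hypothetical, chosen only to reproduce the zero-revenue endpoints Wanniski described:

```python
def laffer_revenue(rate, max_base=100.0):
    """Revenue at a tax rate in [0, 1], with a base that vanishes at a 100% rate."""
    base = max_base * (1.0 - rate)   # hypothetical behavioral response
    return rate * base

for r in (0.0, 0.25, 0.5, 0.75, 1.0):
    print(f"rate {r:.2f} -> revenue {laffer_revenue(r):.1f}")
# rate 0.00 -> revenue 0.0
# rate 0.25 -> revenue 18.8
# rate 0.50 -> revenue 25.0   (the peak of this particular curve)
# rate 0.75 -> revenue 18.8
# rate 1.00 -> revenue 0.0
```

Nothing in the diagram itself says which side of the peak an actual economy is on; that is the empirical question Wanniski skipped and economists protested about.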

Here is an image of what must be one of history's best-known napkins, from the National Museum of American History, which reports that the exhibit was "made" on September 14, 1974, measures 38.1 cm x 38.1 cm x .3175 cm, and was a gift from Patricia Koyce Wanniski.

Did Laffer really pull out a pen and start writing on a cloth napkin at a fancy restaurant, so that Jude Wanniski could take the napkin away with him? The website of the Laffer Center at the Pacific Research Institute describes it this way:

"As to Wanniski’s recollection of the story, Dr. Laffer has said that he cannot remember the details, but he does recall that the restaurant where they ate used cloth napkins and his mother had taught him not to desecrate nice things. He notes, however, that it could well be true because he used the so-called Laffer Curve all the time in classroom lectures and to anyone else who would listen." 

In the mid-1980s, when I was working as an editorial writer for the San Jose Mercury News in California, I interviewed Laffer when he was running for a US Senate seat.  He was energy personified and talked a blue streak, and I can easily imagine him writing on cloth napkins in a restaurant. When remembering the event 40 years later in 2014, Dick Cheney said:

It was late afternoon, sort of the-end-of-the-day kind of thing. As I recall, it was a round table. I remember a white tablecloth and white linen napkins because that’s what [Laffer] drew the curve on. It was just one of those events that stuck in my mind, because it’s not every day you see somebody whip out a Sharpie and mark up the cloth napkin at the dinner table. I remember it well, because I can’t recall anybody else drawing on a cloth napkin.

The point of Shiller's talk is that while a homo sapiens discussion of the empirical evidence behind the Laffer curve can be interesting in its own way, understanding the political and cultural impulse behind tax-cutting from the late 1970s up to the present requires genuine intellectual openness to a homo narrativus explanation--that is, an understanding of what narratives have force at certain times, how such narratives come into being, why the narratives are powerful, and how the narratives affect various forms of economic behavior.

My own sense is that homo sapiens can be a slippery character in drawing conclusions. Homo sapiens likes to protest that all conclusions come from a dispassionate consideration of the evidence. But again and again, you will observe that when a certain homo sapiens agrees with the main thrust of a certain narrative, the supposedly dispassionate consideration of evidence involves compiling every factoid and theory in support, as well as denigrating those who believe otherwise as liars and fools; conversely, when a different homo sapiens disagrees with the main thrust of a certain narrative, the supposedly dispassionate consideration of the evidence involves compiling every factoid and theory in opposition, and again denigrating those who believe otherwise as liars and fools. Homo sapiens often brandishes facts and theories as a nearly transparent cover for the homo narrativus within.

Wednesday, December 14, 2016

Thomas Schelling, Methodological Subversive

Rajiv Sethi:

Thomas Schelling, Methodological Subversive: Thomas Schelling died at the age of 95 yesterday.

At a time when economic theory was becoming virtually synonymous with applied mathematics, he managed to generate deep insights into a broad range of phenomena using only close observation, precise reasoning, and simple models that were easily described but had complex and surprising properties.

This much, I think, is widely appreciated. But what also characterized his work was a lack of concern with professional methodological norms. This allowed him to generate new knowledge with great freedom, and to make innovations in method that may end up being even more significant than his specific insights into economic and social life. 

Consider, for instance, his famous "checkerboard" model of self-forming neighborhoods, first introduced in a memorandum in 1969, with versions published in a 1971 article and in his 1978 book Micromotives and Macrobehavior. This model is simple enough to be described verbally in a couple of paragraphs, but has properties that are extremely difficult to deduce analytically. It is also among the very earliest agent-based computational models, reveals some limitations of the equilibrium approach in economic theory, and continues to guide empirical research on residential segregation.

Here's the model. There is a set of individuals partitioned into two groups; let's call them pennies and dimes. Each individual occupies a square on a checkerboard, and has preferences over the group composition of its neighborhood. The neighborhood here is composed of the (at most) eight adjacent squares. Each person is content to be in a minority in their neighborhood, as long as minority status is not too extreme. Specifically, each wants strictly more than one-third of their neighbors to belong to their own group. 

Initially suppose that there are 60 individuals, arrayed in a perfectly integrated pattern on the board, with the four corners unoccupied. Then each individual in a central location has exactly half their neighbors belonging to their own group, and is therefore satisfied. Those on the edges are in a slightly different situation, but even here each individual has a neighborhood in which at least two-fifths of residents are of their own type. So they too are satisfied.

Now suppose that we remove twenty individuals at random, and replace five of these, placing them in unoccupied locations, also at random. This perturbation will leave some individuals dissatisfied. Now choose any one of these unhappy folks, and move them to a location at which they would be content. Notice that this affects two types of other individuals: those who were previously neighbors of the party that moved, and those who now become neighbors. Some will be unaffected by the move, others may become happy as a result, and still others may become unhappy. 

As long as there are any unhappy people on the board, repeat the process just described: pick one at random, and move them to a spot where they are content. What does the board look like when nobody wants to move?
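The dynamics just described are easy to simulate. The sketch below is a minimal reconstruction from the verbal description above (an 8x8 board, two groups, a strictly-more-than-one-third threshold), not Schelling's own code, and the helper names are invented for illustration:

```python
import random

SIZE = 8              # checkerboard dimensions
THRESHOLD = 1 / 3     # each agent wants strictly more than this share of like neighbors
# board: an 8x8 list of lists holding "P" (penny), "D" (dime), or None for an empty square.

def neighbors(board, r, c):
    """Types of the occupants of the (at most) eight adjacent squares."""
    out = []
    for dr in (-1, 0, 1):
        for dc in (-1, 0, 1):
            if (dr, dc) != (0, 0) and 0 <= r + dr < SIZE and 0 <= c + dc < SIZE:
                occupant = board[r + dr][c + dc]
                if occupant is not None:
                    out.append(occupant)
    return out

def content_at(board, r, c, kind):
    """Would an agent of the given kind be content on square (r, c)?"""
    nbrs = neighbors(board, r, c)
    return not nbrs or sum(n == kind for n in nbrs) / len(nbrs) > THRESHOLD

def run(board, max_moves=10_000, rng=random):
    """Repeatedly move a randomly chosen unhappy agent to a square where it is content."""
    for _ in range(max_moves):
        unhappy = [(r, c) for r in range(SIZE) for c in range(SIZE)
                   if board[r][c] is not None and not content_at(board, r, c, board[r][c])]
        if not unhappy:
            break                       # nobody wants to move
        r, c = rng.choice(unhappy)
        kind, board[r][c] = board[r][c], None
        options = [(i, j) for i in range(SIZE) for j in range(SIZE)
                   if board[i][j] is None and content_at(board, i, j, kind)]
        if options:
            i, j = rng.choice(options)
            board[i][j] = kind
        else:
            board[r][c] = kind          # no satisfying square available: stay put
    return board
```

Seeding the board with the perturbed integrated pattern described above and running this loop a few times is typically enough to reproduce the kind of segregated clusters Schelling reported.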

Schelling found that no matter how often this experiment was repeated, the result was a highly segregated residential pattern. Even though perfect integration is clearly a potential terminal state of the dynamic process just described, it appeared to be unreachable once the system had been perturbed. The assumed preferences are tolerant enough to be consistent with integration, but decentralized, uncoordinated choices by individuals appear to make integration fragile, and segregation extremely stable. Here's how Schelling summarized the insight:

People who have to choose between polarized extremes... will often choose in a way that reinforces the polarization. Doing so is no evidence that they prefer segregation, only that, if segregation exists and they have to choose between exclusive association, people elect like rather than unlike environments.

One can tune the parameters of the model: the population size and density, or the preferences over neighborhood composition, and see that this key insight is robust. And for reasons discussed in this essay, equilibrium reasoning alone cannot be used to uncover it. 

A very different kind of contribution, but also one with important methodological implications, may be found in Schelling's 1960 classic The Strategy of Conflict. Here he considers the adaptive value of pretending to be irrational, in order to make threats or promises credible (emphasis added):

How can one commit himself in advance to an act that he would in fact prefer not to carry out in the event, in order that his commitment may deter the other party? One can of course bluff, to persuade the other falsely that the costs or damages to the threatener would be minor or negative. More interesting, the one making the threat may pretend that he himself erroneously believes his own costs to be small, and therefore would mistakenly go ahead and fulfill the threat. Or perhaps he can pretend a revenge motivation so strong as to overcome the prospect of self-damage; but this option is probably most readily available to the truly revengeful.

Similarly, in bargaining situations, "the sophisticated negotiator may find it difficult to seem as obstinate as a truly obstinate man." And when faced with a threat, it may be profitable to be known to possess "genuine ignorance, obstinacy or simple disbelief, since it may be more convincing to the prospective threatener."

Starting with three classic papers in the same 1982 issue of the Journal of Economic Theory, a large literature in economics has dealt with the implications for rational behavior of interacting with parties who, with small likelihood, may not be rational. While this work has focused on characterizing rational responses to irrationality, Schelling's point speaks also to payoffs, and raises the possibility that departures from rationality may have adaptive value.

The methodological implications of this are profound, because the idea calls into question the normal justification for assuming that economic agents are in fact fully rational. Jack Hirshleifer explored the implications of this in a wonderful paper on the adaptive value of emotions, and Robert Frank wrote an entire book about the topic. But the idea is right there, hidden in plain sight, in Schelling's parenthetical comments.  

Finally, consider Schelling's burglar paradox, also described in The Strategy of Conflict:

If I go downstairs to investigate a noise at night, with a gun in my hand, and find myself face to face with a burglar who has a gun in his hand, there is a danger of an outcome that neither of us desires. Even if he prefers to just leave quietly, and I wish him to, there is danger that he may think I want to shoot, and shoot first. Worse, there is danger that he may think that I think he wants to shoot. Or he may think that I think he thinks I want to shoot. And so on. "Self-Defense" is ambiguous, when one is only trying to preclude being shot in self-defense.

Sandeep Baliga and Tomas Sjöström have shown exactly how such reciprocal fear can lead to a fatal unraveling, and explored the enormous consequences of allowing for pre-play communication in the form of cheap talk. And I have previously discussed the importance of this reasoning in accounting for variations in homicide rates across time and space, as well as the effects of Stand-your-Ground laws.

There are a handful of social scientists whose impact on my own work is so profound that I can't imagine what I'd be writing if I hadn't come across their work. Among them are Glenn Loury, Elinor Ostrom, and Thomas Schelling. I can think of at least five papers: on segregation, on variations in homicide across regions and communities, on reputation in bargaining, and on social norms, that flow directly from Schelling's thought. 

It may surprise some to know that Glenn Loury's Du Bois lectures are dedicated to Schelling, but it makes perfect sense to me. Here's how Glenn explains his choice in the preface:

Shortly after arriving at Harvard in 1982 as a newly appointed Professor of Economics and of Afro-American Studies, I began to despair of the possibility that I could successfully integrate my love of economic science with my passion for thinking broadly and writing usefully about the issue of race in contemporary America. How, I wondered, could one do rigorous theoretical work in economics while remaining relevant to an issue that seems so fraught with political, cultural and psychological dimensions? Tom Schelling not only convinced me that this was possible; he took me by the hand and showed the way. The intellectual style reflected in this book developed under his tutelage. My first insights into the problem of "racial classification" emerged in lecture halls at Harvard's Kennedy School of Government, where, for several years in the 1980s, Tom and I co-taught a course we called "Public Policies in Divided Societies." Tom Schelling's creative and playful mind, his incredible breadth of interests, and his unparalleled mastery of strategic analysis opened up a new world of intellectual possibilities for me. I will always be grateful to him.

As, indeed, will I.

Tuesday, December 13, 2016

Is Scarcity as Much About Psychology as it is Economics?

Dan Nixon at Bank Underground:

Mind over matter: is scarcity as much about psychology as it is economics?: “Unlimited wants, scarce resources”. This is the economic problem. But once basic needs are met, how much should scarcity – not having “enough” – be understood as a psychological problem? Is it possible to cultivate an “abundance mindset”? And what does all of this mean for how economics is taught?
The rise and rise of psychology in economics
Over recent decades there’s been a step change in the use of ideas from psychology in economics research.
The vast literature on behavioural economics, for example, has challenged the core assumptions of an entirely rational, self-interested account of human behaviour.  Much, too, has been written on the economics of happiness and how we might improve on GDP per capita as a measure of progress.  Even aside from research, the way we consume things (ie our economic activity) has become increasingly psychological over time:  as basic needs are met with greater ease, the argument goes, we consume “ideas” (such as information in blogs) more than “stuff”.
Far less has been written about the psychological aspect of scarcity. Yet this could have big implications, given the central role that scarcity plays in economic theory. ...

Wednesday, December 07, 2016

A Primer on Equilibrium

This is by David Glasner:

A Primer on Equilibrium: After my latest post about rational expectations, Henry from Australia, one of my most prolific commenters, has been engaging me in a conversation about what assumptions are made – or need to be made – for an economic model to have a solution and for that solution to be characterized as an equilibrium, and in particular, a general equilibrium. Equilibrium in economics is not always a clearly defined concept, and it can have a number of different meanings depending on the properties of a given model. But the usual understanding is that the agents in the model (as consumers or producers) are trying to do as well for themselves as they can, given the endowments of resources, skills and technology at their disposal and given their preferences. The conversation was triggered by my assertion that rational expectations must be “compatible with the equilibrium of the model in which those expectations are embedded.”
That was the key insight of John Muth in his paper introducing the rational-expectations assumption into economic modelling. So in any model in which the current and future actions of individuals depend on their expectations of the future, the model cannot arrive at an equilibrium unless those expectations are consistent with the equilibrium of the model. If the expectations of agents are incompatible or inconsistent with the equilibrium of the model, then, since the actions taken or plans made by agents are based on those expectations, the model cannot have an equilibrium solution. ...
That the correctness of expectations implies equilibrium is the consequence of assuming that agents are trying to optimize their decision-making process, given their available and expected opportunities. If all expected opportunities are correctly foreseen, then all decisions will have been the optimal decisions under the circumstances. But nothing has been said that requires all expectations to be correct, or even that it is possible for all expectations to be correct. If an equilibrium does not exist (and just because you can write down an economic model does not mean that a solution to the model exists), then the sweet spot where all expectations are consistent and compatible is just a blissful fantasy. So a logical precondition to showing that rational expectations are even possible is to prove that an equilibrium exists. There is nothing circular about the argument.
Now the key to proving the existence of a general equilibrium is to show that the general equilibrium model implies the existence of what mathematicians call a fixed point. ...
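The fixed-point idea can be stated compactly (this is the standard existence argument sketched in general terms, not a summary of Glasner's own derivation): write the economy's excess demands as a function $z(p)$ of the price vector $p$, normalize prices to lie in the simplex $\Delta$, and construct a continuous map $f:\Delta \to \Delta$ that raises the relative price of goods in excess demand. Brouwer's theorem then guarantees

$$\exists\, p^{*} \in \Delta:\; f(p^{*}) = p^{*},$$

and at such a fixed point no good remains in excess demand, so $p^{*}$ is a general-equilibrium price vector. Proving that such a $p^{*}$ exists is the "logical precondition" referred to just above.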

After a long discussion, he ends with:

The problem of price expectations in an intertemporal general-equilibrium system is central to the understanding of macroeconomics. Hayek, who was the father of intertemporal equilibrium theory, which he was the first to outline in a 1928 paper in German, and who explained the problem with unsurpassed clarity in his 1937 paper “Economics and Knowledge,” unfortunately did not seem to acknowledge its radical consequences for macroeconomic theory, and the potential ineffectiveness of self-equilibrating market forces. My quarrel with rational expectations as a strategy of macroeconomic analysis is its implicit assumption, lacking any analytical support, that prices and price expectations somehow always adjust to equilibrium values. In certain contexts, when there is no apparent basis to question whether a particular market is functioning efficiently, rational expectations may be a reasonable working assumption for modelling observed behavior. However, when there is reason to question whether a given market is operating efficiently or whether an entire economy is operating close to its potential, to insist on principle that the rational-expectations assumption must be made, to assume, in other words, that actual and expected prices adjust rapidly to their equilibrium values allowing an economy to operate at or near its optimal growth path, is simply, as I have often said, an exercise in circular reasoning and question begging.

Wednesday, November 09, 2016

The Limitations of Randomized Controlled Trials

Angus Deaton and Nancy Cartwright:

The limitations of randomized controlled trials: In recent years, the use of randomized controlled trials has spread from labor market and welfare program evaluation to other areas of economics, and to other social sciences, perhaps most prominently in development and health economics. This column argues that some of the popularity of such trials rests on misunderstandings about what they are capable of accomplishing, and cautions against simple extrapolations from trials to other contexts. ...

Thursday, November 03, 2016

Mainstream Economics

Simon Wren-Lewis:

Ann Pettifor on mainstream economics: Ann has an article that talks about the underlying factor behind the Brexit vote. Her thesis, that it represents the discontent of those left behind by globalization, has been put forward by others. Unlike Brad DeLong, I have few problems with seeing this as a contributing factor to Brexit, because it is backed up by evidence, but like Brad DeLong I doubt it generalizes to other countries. Unfortunately her piece is spoilt by a final section that is a tirade against mainstream economists which goes way over the top. ...

Most economists have certainly encouraged the idea that globalization would increase overall prosperity, and they have been proved right. It is also true that many of these economists did not admit or stress enough that there would be losers as a result of this process who needed compensating from the increase in aggregate prosperity. But once again I doubt very much that anything would have changed if they had. And if they didn’t think enough about it in the past, they are now: see Paul De Grauwe here for example.

There is a regrettable (but understandable) tendency by heterodox economists on the left to try and pretend that economics and neoliberalism are somehow inextricably entwined. The reality is that neoliberal advocates do use some economic ideas as justification, but they ignore others which go in the opposite direction. As I often point out, many more academic economists spend their time analyzing market imperfections than trying to show markets always work on their own. They get Nobel prizes for this work. I find attempts to suggest that economics somehow helped create austerity particularly annoying, as I (and many others) have spent many blog posts showing that economic theory and evidence demonstrate that austerity was a huge mistake.

Wednesday, October 26, 2016

Being Honest about Ideological Influence in Economics

Simon Wren-Lewis:

Being honest about ideological influence in economics: Noah Smith has an article that talks about Paul Romer’s recent critique of macroeconomics. ... He says the fundamental problem with macroeconomics is lack of data, which is why disputes seem to take so long to resolve. That is not in my view the whole story.
If we look at the rise of Real Business Cycle (RBC) research a few decades ago, that was only made possible because economists chose to ignore evidence about the nature of unemployment in recessions. There is overwhelming evidence that in a recession employment declines because workers are fired rather than choosing not to work, and that the resulting increase in unemployment is involuntary (those fired would have rather retained their job at their previous wage). Both facts are incompatible with the RBC model.
In the RBC model there is no problem with recessions, and no role for policy to attempt to prevent them or bring them to an end. The business cycle fluctuations in employment they generate are entirely voluntary. RBC researchers wanted to build models of business cycles that had nothing to do with sticky prices. Yet here again the evidence was quite clear...
Why would researchers try to build models of business cycles where these cycles required no policy intervention, and ignore key evidence in doing so? The obvious explanation is ideological. I cannot prove it was ideological, but it is difficult to understand why - in an area which as Noah says suffers from a lack of data - you would choose to develop theories that ignore some of the evidence you have. The fact that, as I argue here, this bias may have expressed itself in the insistence on following a particular methodology at the expense of others does not negate the importance of that bias. ...
I suspect there is a reluctance among the majority of economists to admit that some among them may not be following the scientific method but may instead be making choices on ideological grounds. This is the essence of Romer’s critique, first in his own area of growth economics and then for business cycle analysis. Denying or marginalizing the problem simply invites critics to apply to the whole profession a criticism that only applies to a minority.

Tuesday, October 18, 2016

Yellen Poses Important Post-Great Recession Macroeconomic Questions

Nick Bunker:

Yellen poses important post-Great Recession macroeconomic questions: Last week at a Federal Reserve Bank of Boston conference, Federal Reserve Chair Janet Yellen gave a speech on macroeconomics research in the wake of the Great Recession. She ... lists four areas for research, but let’s look more closely at the first two groups of questions that she elevates.
The first is the influence of aggregate demand on aggregate supply. As Yellen notes, the traditional way of thinking about this relationship would be that demand, a short-run phenomenon, has no significant effect on aggregate supply, which determines long-run economic growth. The potential growth rate of an economy is determined by aggregate supply...
Yellen points to research that increasingly finds so-called hysteresis effects in the macroeconomy. Hysteresis, a term borrowed from physics, is the idea that short-run shocks to the economy can alter its long-term trend. One example of hysteresis is workers who lose jobs in recessions and then aren’t drawn back into the labor market but rather are permanently locked out... Interesting new research argues that hysteresis may affect not just the labor supply but also the rate of productivity growth.
If hysteresis is prevalent in the economy, then U.S. policymakers need to rethink their fiscal and monetary policy priorities. The effects of hysteresis may mean that economic recoveries need to run longer and hotter than previously thought in order to get workers back into the labor market or allow other resources to get back into full use.
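[A stylized sketch of the hysteresis idea, with made-up numbers rather than anything from Yellen's speech: a one-time demand shock opens an output gap, and a hysteresis parameter lets part of that gap erode potential output permanently.]

    # Illustrative only: a temporary shock with and without hysteresis.
    horizon, rho = 20, 0.5                                      # assumed gap persistence
    shock = [-3.0 if t == 5 else 0.0 for t in range(horizon)]   # one-time demand shock (%)

    def simulate(theta):
        """theta is the hysteresis parameter: the share of the output gap absorbed
        into potential output each period (theta = 0 means no hysteresis)."""
        potential, gap = 100.0, 0.0
        for t in range(horizon):
            gap = rho * gap + shock[t]    # the gap itself dies out...
            potential += theta * gap      # ...but may leave a permanent scar
        return potential

    print(simulate(0.0))   # no hysteresis: potential output ends back at 100.0
    print(simulate(0.3))   # with hysteresis: potential output ends permanently lower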
The other set of open research questions that Yellen raises is the influence of “heterogeneity” on aggregate demand. In many models of the macroeconomy, households are characterized by a representative agent... In short, they are assumed to be homogeneous. As Yellen notes in her speech, overall home equity remained positive after the bursting of the housing bubble, so a representative agent would have maintained positive equity in their home.
Yet a wealth of research in the wake of the Great Recession finds that the millions of households whose mortgages were “underwater,” and who therefore didn’t have positive wealth, were a big reason for the severity of the downturn. Ignoring this heterogeneity in the housing market and its effects on economic inequality seems like something modern macroeconomics needs to resolve. Economists are increasingly moving in this direction, but even more movement would be very helpful.
Yellen raises other areas of inquiry in her speech, including better understanding how the financial system is linked to the real economy and how the dynamics of inflation are determined. ... As Paul Krugman has noted several times over the past several years, the Great Recession doesn’t seem to have provoked the same rethink of macroeconomics as the Great Depression, which ushered in Keynesianism, or the stagflation of the 1970s, which led to the ascendance of new classical economics. The U.S. economy is similarly dealing with a “new normal.” Macroeconomics needs to respond to this reality.

Tuesday, October 11, 2016

Ricardian Equivalence, Benchmark Models, and Academics Response to the Financial Crisis

Simon Wren-Lewis:

Ricardian Equivalence, benchmark models, and academics response to the financial crisis: In his further thoughts on DSGE models (or perhaps his response to those who took up his first thoughts), Olivier Blanchard says the following:

“For conditional forecasting, i.e. to look for example at the effects of changes in policy, more structural models are needed, but they must fit the data closely and do not need to be religious about micro foundations.”

He suggests that there is wide agreement about the above. I certainly agree, but I’m not sure most academic macroeconomists do. I think they might say that policy analysis done by academics should involve microfounded models. Microfounded models are, by definition, religious about microfoundations and do not fit the data closely. Academics are taught in grad school that all other models are flawed because of the Lucas critique, an argument which assumes that your microfounded model is correctly specified. ...

Let me be more specific. The core macromodel that many academics would write down involves two key behavioural relationships: a Phillips curve and an IS curve. The IS curve is purely forward looking: consumption depends on expected future consumption. It is derived from an infinitely lived representative consumer, and as a result Ricardian Equivalence holds in this benchmark model. [1]

Ricardian Equivalence means that a bond-financed tax cut (which will be followed by tax increases) has no impact on consumption or output. One stylised empirical fact that has been confirmed by study after study is that consumers do spend quite a large proportion of any tax cut. That they should do so is not some deep mystery: many consumers are credit constrained, something the benchmark model assumes away by positing an intertemporal consumer who is never credit constrained. In that particular sense academics’ core model does not fit Blanchard’s prescription that it should “fit the data closely”.
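[A two-period numerical sketch, with assumed parameter values and not taken from the post, of why Ricardian Equivalence shuts down the consumption response to a bond-financed tax cut, and why a credit-constrained, hand-to-mouth consumer spends it instead.]

    # Illustrative two-period example; the discount factor, interest rate, incomes
    # and taxes are all assumptions made for this sketch.
    beta, r = 0.96, 0.04
    y1, y2 = 100.0, 100.0             # pre-tax income in periods 1 and 2
    t1, t2 = 20.0, 20.0               # baseline taxes

    def unconstrained_c1(t1, t2):
        """Forward-looking consumer with log utility: period-1 consumption is a
        fixed share of lifetime after-tax wealth, so only the present value of
        taxes matters, not their timing."""
        wealth = (y1 - t1) + (y2 - t2) / (1 + r)
        return wealth / (1 + beta)

    def hand_to_mouth_c1(t1, t2):
        """Credit-constrained consumer: spends current after-tax income."""
        return y1 - t1

    cut = 5.0                                        # bond-financed tax cut today...
    t1_new, t2_new = t1 - cut, t2 + cut * (1 + r)    # ...repaid with interest tomorrow

    print(round(unconstrained_c1(t1, t2), 4), round(unconstrained_c1(t1_new, t2_new), 4))   # identical
    print(round(hand_to_mouth_c1(t1, t2), 4), round(hand_to_mouth_c1(t1_new, t2_new), 4))   # rises by the cut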

Does this core model influence the way some academics think about policy? I have written about how, before the financial crisis, mainstream macroeconomics neglected the importance that shifting credit conditions had for consumption, and I have speculated that this neglect owed something to the insistence on microfoundations. That links the methodology macroeconomists use, or more accurately their belief that other methodologies are unworthy, to policy failures (or at least inadequacy) associated with that crisis and its aftermath.

I wonder if the benchmark model also contributed to a resistance among many (not a majority, but a significant minority) to using fiscal stimulus when interest rates hit their lower bound. In the benchmark model increases in public spending still raise output, but some economists do worry about wasteful expenditures. For these economists tax cuts, particularly if aimed at those who are non-Ricardian, should be an attractive alternative means of stimulus, but if your benchmark model says they will have no effect, I wonder whether this (consciously or unconsciously) biases you against such measures.

In my view, the benchmark models that academic macroeconomists carry round in their heads should be exactly the kind Blanchard describes: aggregate equations which are consistent with the data, and which may or may not be consistent with current microfoundations. They are the ‘useful models’ that Blanchard talked about... These core models should be under constant challenge from partial equilibrium analysis, estimation in all its forms, and analysis using microfoundations. But when push comes to shove, policy analysis should be done with models that are the best we have at meeting all those challenges, and not models with consistent microfoundations.

Wednesday, September 21, 2016

Trouble with Macroeconomics, Update

Paul Romer:

Trouble with Macroeconomics, Update: My new working paper, The Trouble with Macroeconomics, has generated some interesting reactions. Here are a few responses...

Monday, September 19, 2016

How to Build a Better Macroeconomics

Narayana Kocherlakota:

How to Build a Better Macroeconomics: Methodology Specifics I want to follow up on my comments about Paul Romer’s interesting recent piece by being more precise about how I believe macroeconomic research could be improved.

Macro papers typically proceed as follows:

  1. Question stated.
  2. Some reduced form analysis to "motivate" the question/answer.
  3. Question inputted into model. Model is a close variant of prior models grounded in four or five 1980s frameworks. The variant is generally based on introspection combined with some calibration of relevant parameters.
  4. Answer reported.

The problem is that the prior models have a host of key behavioral assumptions that have little or no empirical grounding. In this pdf, I describe one such behavioral assumption in some detail: the response of current consumption to persistent interest rate changes.

But there are many other such assumptions embedded in our models. For example, most macroeconomists study questions that depend crucially on how agents form expectations about the future. However, relatively few papers use evidence of any kind to inform their modeling of expectations formation. (And no, it’s not enough to say that Tom Sargent studied the consequences of one particular kind of learning in the late 1980s!) The point is that if your paper poses a question that depends on how agents form expectations, you should provide evidence from experimental or micro-econometric sources to justify your approach to expectation formation in your particular context.

So, I suggest the following would be a better approach:

  1. Question
  2. Thorough theoretical analysis of key mechanisms/responses that are likely to inform answer to question (perhaps via "toy" models?)
  3. Find evidence for ranges of magnitudes of relevant mechanisms/responses.
  4. Build and evaluate a range of models informed by this evidence. (Identification limitations are likely to mean that, given available data, there is a range of models that will be relevant in addressing most questions.)
  5. Range of answers to (1), given (4).

Should all this be done in one paper? Probably not. I suspect that we need a more collaborative approach to our questions - a team works on (2), another team works on (3), a third team works on (4) and we arrive at (5). I could readily see each step being viewed as a valuable contribution to economic science.

In terms of (3) - evidence - our micro colleagues can be a great source on this dimension. In my view, the most useful labor supply paper for macroeconomists in the past thirty years is this one - and it’s not written by a macroeconomist.

(If people know of existing papers that follow this approach, feel free to email me a reference at [email protected].)

None of these ideas are original to me. They were actually exposited nearly forty years ago.

The central idea is that individual responses can be documented relatively cheaply, occasionally by direct experimentation, but more commonly by means of the vast number of well-documented instances of individual reactions to well-specified environmental changes made available "naturally" via censuses, panels, other surveys, and the (inappropriately maligned as "casual empiricism") method of keeping one's eyes open.

I’m not totally on board with the author in what he says here. I'm a lot less enthralled by the value of “casual empiricism” in a world in which most macroeconomists mainly spend their time with other economists, but otherwise agree wholeheartedly with these words. And I probably see more of a role for direct experimentation than the author does. But those are both quibbles.

And these words from the same article seem even more apt:

Researchers … will appreciate the extent to which … [this agenda] describes hopes for the future, not past accomplishments. These hopes might, without strain, be described as hopes for a kind of unification, not dissimilar in spirit from the hope for unification which informed the neoclassical synthesis. What I have tried to do above is to stress the empirical (as opposed to the aesthetic) character of these hopes, to try to understand how such quantitative evidence about behavior as we may reasonably expect to obtain in society as it now exists might conceivably be transformed into quantitative information about the behavior of imagined societies, different in important ways from any which have ever existed. This may seem an intimidatingly ambitious way to state the goal of an applied subfield of a marginally respectable science, but is there a less ambitious way of describing the goal of business cycle theory?

Somehow, macroeconomists have gotten derailed from this vision of a micro-founded unification and retreated into a hermetically sealed world, where past papers rely on prior papers' flawed foundations. We need to get back to the ambitious agenda that Robert Lucas put before us so many years ago.

(I admit that I'm cherry-picking like crazy from Lucas' 1980 classic JMCB piece. For example, one of Lucas' main points in that article was that he distrusted disequilibrium modeling approaches because they gave rise to too many free parameters. I don't find that argument all that relevant in 2016 - I think that we know more now than in 1980 about how micro-data can be fruitfully used to discipline our modeling of firm behavior. And I would suspect that Lucas would be less than fully supportive of what I write about expectations - but I still think I'm right!)

Thursday, September 15, 2016

We Need to Move on from Existing Theories of the Economy

Dave Elder-Vass at Understanding Society:

Guest post by Dave Elder-Vass: [Dave Elder-Vass accepted my invitation to write a response to my discussion of his recent book, Profit and Gift in the Digital Economy (link). Elder-Vass is Reader in sociology at Loughborough University and author as well of The Causal Power of Social Structures: Emergence, Structure and Agency and The Reality of Social Construction, discussed here and here. Dave has emerged as a leading voice in the philosophy of social science, especially in the context of continuing developments in the theory of critical realism. Thanks, Dave!]

We need to move on from existing theories of the economy

Let me begin by thanking Dan Little for his very perceptive review of my book Profit and Gift in the Digital Economy. As he rightly says, it’s more ambitious than the title might suggest, proposing that we should see our economy not simply as a capitalist market system but as a collection of “many distinct but interconnected practices”. Neither the traditional economist’s focus on firms in markets nor the Marxist political economist’s focus on exploitation of wage labour by capital is a viable way of understanding the real economy, and the book takes some steps towards an alternative view.

Both of those perspectives have come to narrow our view of the economy in multiple dimensions. Our very concept of the economy has been derived from the tradition that began as political economy with Ricardo and Smith then divided into the Marxist and neoclassical traditions (of course there are also others, but they are less influential). Although these conflict radically in some respects they also share some problematic assumptions, and in particular the assumption that the contemporary economy is essentially a capitalist market economy, characterised by the production of commodities for sale by businesses employing labour and capital. As Gibson-Graham argued brilliantly in their book The End Of Capitalism (As We Knew It): A Feminist Critique of Political Economy, ideas seep into the ways in which we frame the world, and when the dominant ideas and the main challengers agree on a particular framing of the world it is particularly difficult for us to think outside of the resulting box. In this case, the consequence is that even critics find it difficult to avoid thinking of the economy in market-saturated terms.

The most striking problem that results from this (and one that Gibson-Graham also identified) is that we come to think that only this form of economy is really viable in our present circumstances. Alternatives are pie in the sky, utopian fantasies, which could never work, and so we must be content with some version of capitalism – until we become so disillusioned that we call for its complete overthrow, and assume that some vague label for a better system can be made real and worthwhile by whoever leads the charge on the Bastille. But we need not go down either of these paths once we recognise that the dominant discourses are wrong about the economy we already have.

To see that, we need to start defining the economy in functional terms: economic practices are those that produce and transfer things that people need, whether or not they are bought and sold. As soon as we do that, it becomes apparent that we are surrounded by non-market economic practices already. The book highlights digital gifts – all those web pages that we load without payment, Wikipedia’s free encyclopaedia pages, and open source software, for example. But in some respects these pale into insignificance next to the household and family economy, in which we constantly produce things for each other and transfer them without payment. Charities, volunteering and in many jurisdictions the donation of blood and organs are other examples.

If we are already surrounded by such practices, and if they are proliferating in the most dynamic new areas of our economy, the idea that they are unworkably utopian becomes rather ridiculous. We can then start to ask questions about what forms of organising are more desirable ethically. Here the dominant traditions are equally warped. Each has a standard argument that is trotted out at every opportunity to answer ethical questions, but in reality both standard arguments operate as means of suppressing ethical discussions about economic questions. And both are derived from an extraordinarily narrow theory of how the economy works.

For the mainstream tradition, there is one central mechanism in the economy: price equilibration in the markets, a process in which prices rise and fall to bring demand and supply into balance. If we add on an enormous list of tenuous assumptions (which economists generally admit are unjustified, and then continue to use anyway), this leads to the theory of Pareto optimality of market outcomes: the argument that if we used some other system for allocating economic benefits some people would necessarily be worse off. This in turn becomes the central justification for leaving allocation to the market (and eliminating ‘interference’ with the market).

There are many reasons why this argument is flawed. Let me mention just one. If even one market is not perfectly competitive, but instead is dominated by a monopolist or partial monopolist, then even by the standards of economists a market system does not deliver Pareto optimality, and an alternative system might be more efficient. And in practice capitalists constantly strive to create monopolies, and frequently succeed! Even the Financial Times recognises this: in today’s issue (Sep 15 2016) Philip Stevens argues, “Once in a while capitalism has to be rescued from the depredations of, well, capitalists. Unconstrained, enterprise curdles into monopoly, innovation into rent-seeking. Today’s swashbuckling “disrupters” set up tomorrow’s cosy cartels. Capitalism works when someone enforces competition; and successful capitalists do not much like competition”.
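[A back-of-the-envelope check of the monopoly point, with made-up numbers rather than anything from the book: with linear demand and constant marginal cost, the monopolist's output restriction leaves total surplus below the competitive level, so the outcome is not Pareto-efficient.]

    # Illustrative only: linear inverse demand P = a - b*Q, constant marginal cost c.
    a, b, c = 100.0, 1.0, 20.0

    q_comp = (a - c) / b              # competition: price = marginal cost
    q_mono = (a - c) / (2 * b)        # monopoly: marginal revenue = marginal cost

    def total_surplus(q):
        """Consumer surplus plus producer surplus at quantity q."""
        price = a - b * q
        consumer_surplus = 0.5 * (a - price) * q
        producer_surplus = (price - c) * q
        return consumer_surplus + producer_surplus

    print(total_surplus(q_comp))      # 3200.0
    print(total_surplus(q_mono))      # 2400.0, i.e. a deadweight loss of 800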

So the argument for Pareto optimality of real market systems is patently false, but it continues to be trotted out constantly. It is presented as if it provides an ethical justification for the market economy, but its real function is to suppress discussion of economic ethics: if the market is inherently good for everyone then, it seems, we don’t need to worry about the ethics of who gets what any more.

The Marxist tradition likewise sees one central mechanism in the economy: the extraction of surplus from wage labour by capitalists. Their analysis of this mechanism depends on the labour theory of value, which is no more tenable than mainstream theories of Pareto optimality (for reasons I discuss in the book). Marxists consistently argue as if any such extraction is ethically reprehensible. Marx himself never provides an ethical justification for such a view. On the contrary, he claims that this is a scientific argument and disowns any ethical intent. Yet it functions in just the same way as the argument for Pareto optimality: instead of encouraging ethical debate about who should get what in the economy, Marxists reduce economic ethics to the single question of the need to prevent exploitation (narrowly conceived) of productive workers.

We need to sweep away both of these apologetics, and recognise that questions of who gets what are ethical issues that are fundamental to justice, legitimacy, and political progress in contemporary societies. And that they are questions that don’t have easy ‘one argument fits all’ answers. To make progress on them we will have to make arguments about what people need and deserve that recognise the complexity of their social situations. But it doesn’t take a great deal of ethical sophistication to recognise that the 1% have too much when many in the lower deciles are seriously impoverished, and that the forms of impoverishment extend well beyond underpaying for productive labour.

I’m afraid that I have written much more than I intended to, and still said very little about the steps I’ve taken in the book towards a more open and plausible way of theorising how the economy works. I hope that I’ve at least added some more depth to the reasons Dan picked out for attempting that task.

Monday, September 05, 2016

Capitalism as a Heterogeneous Set of Practices

Sociologists on economics. This is by Daniel Little:

Capitalism as a heterogeneous set of practices: A key part of understanding society is making sense of the "economy" in which we live. But what is an economy? Existing economic theories attempt to answer this question with simple unified theories. The economy is a profit-driven market system of firms, workers, and consumers. The economy is a property system dependent upon the expropriation of surplus labor. The economy is a system of expropriation more or less designed to create great inequalities of income, wealth, and well-being. The economy is a system of exploitation and domination.

In Profit and Gift in the Digital Economy Dave Elder-Vass argues that these simple theories, largely the product of the nineteenth century, are flawed in several fundamental ways. First, they are all simple and unitary in a heterogeneous world. Economic transactions take a very wide variety of forms in the modern world. But more fundamentally, these existing theories fail completely to provide a theoretical vocabulary for describing what are now enormously important parts of our economic lives. One well-known blind spot is the domestic economy -- work and consumption within the household. But even more striking is the inadequacy of existing economic theories to make sense of the new digital world -- Google, Apple, Wikipedia, blogging, or YouTube. Elder-Vass's current book offers a new way of thinking about our economic lives and institutions. And he believes that this new way lays a basis for more productive thinking about a more humane future for all of us than is offered by either neoliberalism or Marxism.

What E-V offers is the idea of economic life as a jumble of "appropriative" practices -- practices that get organized and deployed in different combinations, and that have better and worse implications for human well-being.

From this perspective it becomes possible to see our economy as a complex ecosystem of competing and interacting economic forms, each with their own strengths and weaknesses, and to develop a progressive politics that seeks to reshape that ecosystem rather than pursuing the imaginary perfection of one single universal economic form. (5)
The argument here is that we can understand the economy better by seeing it as a diverse collection of economic forms, each of which can be characterised as a particular complex of appropriative practices -- social practices that influence the allocation of benefits from the process of production. (9)
Economies are not monoliths but diverse mixtures of varying economic forms. To understand and evaluate economic phenomena, then, we need to be able to describe and analyse these varying forms in conjunction with each other. (96)

Capitalism is not a single, unitary "mode of production," but rather a concatenation of multiple forms and practices. E-V believes that the positions offered here align well with the theories of critical realism that he has helped to elaborate in earlier books (19-20) (link, link). We can be realist in our investigations of the causal properties of the economic practices he identifies.

This way of thinking about economic life is very consistent with several streams of thought in Understanding Society -- the idea of social heterogeneity (link), the idea of assemblage (link), and a background mistrust of comprehensive social theories (link). (Here is an earlier post on "Capitalism 2.0" that is also relevant to the perspective and issues Elder-Vass brings forward; link.)

The central new element in contemporary economic life that needs treatment by an adequate political economy is the role that large digital enterprises play in the contemporary world. These enterprises deal in intangible products; they often involve a vast proportion of algorithmic transformation rather than human labor; and to a degree unprecedented in economic history, they depend on "gift" transactions at every level. Internet companies like Google give free search and maps, and bloggers and videographers give free content. And yet these gifts have none of the attributes of traditional gift communities -- there is no community, no explicit reciprocity, and little face-to-face interaction. E-V goes into substantial detail on several of these new types of enterprises, and does the work of identifying the "economic practices" upon which they depend.

In particular, E-V considers whether the gift relation familiar from anthropologists like Marcel Mauss and economic sociologists like Karl Polanyi can shed useful light on the digital economy. But the lack of reciprocity and face-to-face community leads him to conclude that the theory is unpersuasive as a way of understanding the digital economy (86).

It is noteworthy that E-V's description of appropriative practices is primarily allocative; it pays little attention to the organization of production. It is about "who receives the benefits" (10) but not so much about "how activity and labor are coordinated, managed, and deployed to produce the stuff". Marx gained the greatest insights in Capital, not from the simple mathematics of the labor theory of value, but from his investigations of the conditions of work and the schemes of management to which labor was subject in the nineteenth-century factory. The ideas of alienation, domination, and exploitation are very easy to understand in that context. But it would seem that there are similar questions to ask about the digital economy shops of today. The New York Times' reportage of working conditions within the Amazon organization seems to reveal a very similar logic (link). And how about the high-tech sweat shops described in a 2009 Bloomberg investigation (link)?

Elder-Vass believes that a better understanding of our existing economic practices can give rise to a more effective set of strategies for creating a better future. E-V's vision for creating a better future depends on a selective pruning of the more destructive practices and cultivation of the more positive practices. He is appreciative of the "real utopias" project (36) (link) and also of the World Social Forum.

This means growing some progressive alternatives but also cutting back some regressive ones. It entails being open to a wide range of alternatives, including the possibility that there might be some valuable continuing role for some forms of capitalism in a more adequate mixed economy of practices. (15)

Or in other words, E-V advocates for innovative social change -- recognizing the potential in new forms and cultivating existing forms of economic activity. Marxism has been the impetus of much thinking about progressive change in the past century; but E-V argues that this perspective too is limited:

Marxism itself has become an obstacle to thinking creatively about the economy, not least because it is complicit in the discourse of the monolithic capitalist market economy that we must now move beyond.... Marx's labour theory of value ... tends to support the obsessive identification of capitalism with wage labour. As a consequence Marxists have failed to recognise that capitalism has developed new forms of making profit that do not fit with the classic Marxist model, including many that have emerged and prospered in the new digital economy. (45)

This is not a wholesale rejection of Marx's thought; but it is a well-justified critique of the lingering dogmatism of this tradition. Though E-V does not make reference to current British politics in the book, these comments seem very appropriate in appraisal of the approach to change championed by Labour leader Jeremy Corbyn.

E-V shows a remarkable range of expertise in this work. His command of recent Marxian thinking about contemporary capitalism is deep. But he has also gone deeply into the actual practices of the digital economy -- the ways Google makes profits, the incentives and regulations that sustain wikipedia, the handful of distinctive business practices that have made Apple one of the world's largest companies. The book is a work of theory and a work of empirical investigation as well.

Profit and Gift in the Digital Economy is a book with a big and important idea -- bigger really than the title implies. The book demands a substantial shift in the way that economists think about the institutions and practices through which the global economy works. More fundamentally, it asks that we reconsider the idea of "economy" altogether, and abandon the notion that there is a single unitary economic practice or institution that defines modern capitalism -- whether market, wage labor, or trading system. Instead, we should focus on the many distinct but interconnected practices that have been invented and stitched together in the many parts of society to solve particular problems of production, consumption, and appropriation, and that as an aggregate make up "the economy". The economy is an assemblage, not a designed system, and reforming this agglomeration requires shifting the "ecosystem" of practices in a direction more favorable to human flourishing.

Sunday, September 04, 2016

Telling Macro Stories with Micro

Claudia Sahm:

Telling macro stories with micro: Don't let the equations, data, or jargon fool you, economists are avid storytellers. Our "stories" may not fit neatly in the seven universal plots but after awhile it's easy to spot some patterns. A good story paper in economics, according to David Romer, has three characteristics: a viewpoint, a lever, and a result.
Most blog or media coverage of an economics paper focuses on the result. Makes sense given the audience but buyer beware. Economists dissecting a paper spend more time on the lever, the how-did-they-get-the-result part. And coming up with new levers is a big chunk of research. The viewpoint--the underlying assumptions, the what's-central-to-the-story--tends to get short shrift. Of course, the viewpoint matters (often that's what defines a story as economics), but it usually holds across many papers. Best to focus on the new stuff.
Except when the viewpoint comes under scrutiny, then the stories can really change. ...

How much does micro matter for macro?

One long-standing viewpoint in economics is that changes in the macro-economy can largely be understood by studying changes in macro aggregates. Ironically, this viewpoint even survived macro's push to micro foundations with a "representative agent" stepping in as the missing link between aggregate data and micro theory. As a macro forecaster, I understand the value of the aggregates-only simplification. As an applied micro researcher, I am pretty sure it fails us from time to time. Thankfully, an ever-growing body of research and commentary is helping to identify times when differences at the micro level are relevant for macro outcomes. This is not new--issues of aggregation in macro go waaay back--but our levers, with rich, timely micro data and high-powered computation, are improving rapidly.

I focus in this post on differences in household behavior, particularly related to consumer spending, since that's the area I know best. And I want to discuss results from an ambitious new paper: "Macroeconomics and Household Heterogeneity" by Krueger, Mitman, and Perri. tldr: I am skeptical of their results, above all, the empirics, but I really like what they are trying to do, to shift the macro viewpoint. More on this paper below, but also want to set it in the context of macro storytelling. ...

There's quite a bit more.

Monday, August 29, 2016

Complexity and Economic Policy

Alan Kirman:

Complexity and Economic Policy, OECD Insights: ...Economic theory has... developed increasingly sophisticated models to justify the contention that individuals left to their own devices will self organise into a socially desirable state.  However, in so doing, it has led us to a view of the economic system that is at odds with what has been happening in many other disciplines.
Although in fields such as statistical physics, ecology and social psychology it is now widely accepted that systems of interacting individuals will not have the sort of behaviour that corresponds to that of one average or typical  particle or individual, this has not had much effect on economics. Whilst those disciplines moved on to study the emergence of non-linear dynamics as a result of the complex interaction between individuals, economists relentlessly insisted on basing their analysis on that of rational optimising individuals behaving as if they were acting in isolation. ...
Yet this paradigm is neither validated by empirical evidence nor does it have sound theoretical foundations. It has become an assumption. ...
As soon as one considers the economy as a complex adaptive system in which the aggregate behaviour emerges from the interaction between its components, no simple relation between the individual participant and the aggregate can be established. Because of all the interactions and the complicated feedbacks between the actions of the individuals and the behaviour of the system there will inevitably be “unforeseen consequences” of the actions taken by individuals, firms and governments. Not only the individuals themselves but the network that links them changes over time. The evolution of such systems is intrinsically difficult to predict, and for policymakers this means that assertions such as “this measure will cause that outcome” have to be replaced with “a number of outcomes are possible and our best estimates of the probabilities of those outcomes at the current point are…”. ...
...in trying to stabilise such systems it is an error to focus on one variable either to control the system or to inform us about its evolution. Single variables such as the interest rate do not permit sufficient flexibility for policy actions and single performance measures such as the unemployment rate or GDP convey too little information about the state of the economy.

Tuesday, August 16, 2016

General Equilibrium Theory: Sound and Fury, Signifying Nothing?

A relatively long article by Raphaële Chappe at INET:

General Equilibrium Theory: Sound and Fury, Signifying Nothing?: Does general equilibrium theory sufficiently enhance our understanding of the economic process to make the entire exercise worthwhile, if we consider that other forms of thinking may have been ‘crowded out’ as a result of its being the ‘dominant discourse’? What, in the end, have we really learned from it? ...

Monday, August 15, 2016

What's Useful about DSGE Models?

George Evans responds to the recent discussion on the usefulness of DSGE models:

Here is what I like and have found most useful about Dynamic Stochastic General Equilibrium (DSGE) models, also known as New Keynesian (NK) models. The original NK models were low dimensional – the simplest version reduces to a 3-equation model, while DSGE models are now typically much more elaborate. What I find attractive about these models can be stated in terms of the basic NK/DSGE model.
First, because it is a carefully developed, micro-founded model incorporating price frictions, the NK model makes it possible to incorporate in a disciplined way the various additional sectors, distortions, adjustment costs, and parametric detail found in many NK/DSGE models. Theoretically this is much more attractive than starting with a reduced form IS-LM model and adding features in an ad hoc way. (At the same time I still find ad hoc models useful, especially for teaching and informal policy analysis, and the IS-LM model is part of the macroeconomics canon).
Second, and this is particularly important for my own research, the NK model makes explicit and gives a central role to expectations about future economic variables. The standard linearized three-equation NK model in output, inflation and interest rates has current output and inflation depending in a specified way on expected future output and inflation. The dependence of output on expected future output and future inflation comes through the household dynamic optimization condition, and the dependence of inflation on expected future inflation arises from the firm’s optimal pricing equation. The NK model thus places expectations of future economic variables front and center, and does so in a disciplined way.
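[For readers who have not seen it, one standard textbook statement of the linearized three-equation model is sketched below; notation and exact form vary across papers, so this is an illustration rather than necessarily the version Evans has in mind. Here x_t is the output gap, pi_t inflation, i_t the nominal interest rate, and r_t^n the natural real rate.]

    \begin{aligned}
    x_t   &= E_t x_{t+1} - \sigma^{-1}\left(i_t - E_t \pi_{t+1} - r_t^{n}\right)
          && \text{(IS curve, from household optimization)} \\
    \pi_t &= \beta\, E_t \pi_{t+1} + \kappa\, x_t
          && \text{(NK Phillips curve, from firms' pricing)} \\
    i_t   &= \phi_\pi\, \pi_t + \phi_x\, x_t
          && \text{(interest-rate rule)}
    \end{aligned}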
Third, while the NK model is typically solved under rational expectations (RE), it can also be viewed as providing the temporary equilibrium framework for studying the system under relaxations of the RE hypothesis. I particularly favor replacing RE with boundedly rational adaptive learning and decision-making (AL). Incorporating AL is especially fruitful in cases where there are multiple RE solutions, and AL brings out many Keynesian features of the NK model that extend IS-LM. In general I have found micro-founded macro models of all types to be ideal for incorporating bounded rationality, which is most naturally formulated at the agent level.
Fourth, while the profession as a whole seemed to many of us slow to appreciate the implications of the NK model for policy during and following the financial crisis, this was not because the NK model was intrinsically defective (the neglect of financial frictions by most, though not all, DSGE modelers was also a deficiency in most textbook IS-LM models). This was really, I think, because many macroeconomists using NK models in 2007-8 did not fully appreciate the Keynesian mechanisms present in the model.
However, many of us were alert to the NK model fiscal policy implications during the crisis. For example, in Evans, Guse and Honkapohja (“Liquidity traps, learning and stagnation,” 2008, European Economic Review), using an NK model with multiple RE solutions because of the liquidity trap, we showed, using the AL approach to expectations, that when there is a very large negative expectations shock, fiscal as well as monetary stimulus may be needed, and indeed a temporary fiscal stimulus that is large enough and early enough may be critical for avoiding a severe recession or depression. Of course such an argument could have been made using extensions of the ad hoc IS-LM model, but my point is that this policy implication was ready to be found in the NK model, and the key results center on the primacy of expectations.
Finally, it should go without saying that NK/DSGE modeling should not be the one and only style. Most graduate-level core macro courses teach a wide range of macro models, and I see a diversity of innovations at the research frontier that will continue to keep macroeconomics vibrant and relevant.

Sunday, June 19, 2016

If the Modifications Needed to Accommodate New Observations Become Too Baroque ...

The New Keynesian model is fairly pliable, and adding bells and whistles can help it to explain most of the data we see, at least after the fact. Does that mean we should be more confident in its ability to "embody any useful principle," or less?:

... A famous example of different pictures of reality is the model introduced around AD 150 by Ptolemy (ca. 85—ca. 165) to describe the motion of the celestial bodies. ... In Ptolemy’s model the earth stood still at the center and the planets and the stars moved around it in complicated orbits involving epicycles, like wheels on wheels. ...

It was not until 1543 that an alternative model was put forward by Copernicus... Copernicus, like Aristarchus some seventeen centuries earlier, described a world in which the sun was at rest and the planets revolved around it in circular orbits. ...

So which is real, the Ptolemaic or Copernican system? Although it is not uncommon for people to say that Copernicus proved Ptolemy wrong, that is not true..., our observations of the heavens can be explained by assuming either the earth or the sun to be at rest. Despite its role in philosophical debates over the nature of our universe, the real advantage of the Copernican system is simply that the equations of motion are much simpler in the frame of reference in which the sun is at rest.

... Elegance ... is not something easily measured, but it is highly prized among scientists because laws of nature are meant to economically compress a number of particular cases into one simple formula. Elegance refers to the form of a theory, but it is closely related to a lack of adjustable elements, since a theory jammed with fudge factors is not very elegant. To paraphrase Einstein, a theory should be as simple as possible, but not simpler. Ptolemy added epicycles to the circular orbits of the heavenly bodies in order that his model might accurately describe their motion. The model could have been made more accurate by adding epicycles to the epicycles, or even epicycles to those. Though added complexity could make the model more accurate, scientists view a model that is contorted to match a specific set of observations as unsatisfying, more of a catalog of data than a theory likely to embody any useful principle. ...
[S]cientists are always impressed when new and stunning predictions prove correct. On the other hand, when a model is found lacking, a common reaction is to say the experiment was wrong. If that doesn’t prove to be the case, people still often don’t abandon the model but instead attempt to save it through modifications. Although physicists are indeed tenacious in their attempts to rescue theories they admire, the tendency to modify a theory fades to the degree that the alterations become artificial or cumbersome, and therefore “inelegant.” If the modifications needed to accommodate new observations become too baroque, it signals the need for a new model. ...
[Hawking, Stephen; Mlodinow, Leonard (2010-09-07). The Grand Design. Random House, Inc.. Kindle Edition.]

Wednesday, January 13, 2016

'Is Mainstream Academic Macroeconomics Eclectic?'

Simon Wren-Lewis:

Is mainstream academic macroeconomics eclectic?: For economists, and those interested in macroeconomics as a discipline
Eric Lonergan has a short little post that is well worth reading..., it makes an important point in a clear and simple way that cuts through a lot of the nonsense written on macroeconomics nowadays. The big models/schools of thought are not right or wrong, they are just more or less applicable to different situations. You need New Keynesian models in recessions, but Real Business Cycle models may describe some inflation free booms. You need Minsky in a financial crisis, and in order to prevent the next one. As Dani Rodrik says, there are many models, and the key questions are about their applicability.
If we take that as given, the question I want to ask is whether current mainstream academic macroeconomics is also eclectic. ... My answer is yes and no.
Let’s take the five ‘schools’ that Eric talks about. ... Indeed the variety of models that academic macro currently uses is far wider than this.
Does this mean academic macroeconomics is fragmented into lots of cliques, some big and some small? Not really... This is because these models (unlike those of 40+ years ago) use a common language. ...
It means that the range of assumptions that models (DSGE models if you like) can make is huge. There is nothing formally that says every model must contain perfectly competitive labour markets where the simple marginal product theory of distribution holds, or even where there is no involuntary unemployment, as some heterodox economists sometimes assert. Most of the time individuals in these models are optimising, but I know of papers in the top journals that incorporate some non-optimising agents into DSGE models. So there is no reason in principle why behavioural economics could not be incorporated. If too many academic models do appear otherwise, I think this reflects the sociology of macroeconomics and the history of macroeconomic thought more than anything (see below).
It also means that the range of issues that models (DSGE models) can address is also huge. ...
The common theme of the work I have talked about so far is that it is microfounded. Models are built up from individual behaviour.
You may have noted that I have so far missed out one of Eric’s schools: Marxian theory. What Eric wants to point out here is clear in his first sentence. “Although economists are notorious for modelling individuals as self-interested, most macroeconomists ignore the likelihood that groups also act in their self-interest.” Here I think we do have to say that mainstream macro is not eclectic. Microfoundations is all about grounding macro behaviour in the aggregate of individual behaviour.
I have many posts where I argue that this non-eclecticism in terms of excluding non-microfounded work is deeply problematic. Not so much for an inability to handle Marxian theory (I plead agnosticism on that), but in excluding the investigation of other parts of the real macroeconomic world.  ...
The confusion goes right back, as I will argue in a forthcoming paper, to the New Classical Counter Revolution of the 1970s and 1980s. That revolution, like most revolutions, was not eclectic! It was primarily a revolution about methodology, about arguing that all models should be microfounded, and in terms of mainstream macro it was completely successful. It also tried to link this to a revolution about policy, about overthrowing Keynesian economics, and this ultimately failed. But perhaps as a result, methodology and policy get confused. Mainstream academic macro is very eclectic in the range of policy questions it can address, and conclusions it can arrive at, but in terms of methodology it is quite the opposite.

Friday, January 08, 2016

Economics Rules: The Rights and Wrongs of the Dismal Science (Video)

Sunday, January 03, 2016

'Musings on Whether We Consciously Know More or Less than What Is in Our Models…'

Brad DeLong:

Musings on Whether We Consciously Know More or Less than What Is in Our Models…: Larry Summers presents as an example of his contention that we know more than is in our models–that our models are more a filing system, and more a way of efficiently conveying part of what we know, than they are an idea-generating mechanism–Paul Krugman’s Mundell-Fleming lecture, and its contention that floating exchange-rate countries that can borrow in their own currency should not fear capital flight in a liquidity trap. He points to Olivier Blanchard et al.’s empirical finding that capital outflows do indeed appear to be not expansionary but contractionary ...

[There's quite a bit more in Brad's post.]

Wednesday, December 30, 2015

'The Neo-Fisherian View and the Macro Learning Approach'

I asked my colleagues George Evans and Bruce McGough if they would like to respond to a recent post by Simon Wren-Lewis on Woodford’s "reflective equilibrium" approach to learning:

The neo-Fisherian view and the macro learning approach
George W. Evans and Bruce McGough
Economics Department, University of Oregon
December 30, 2015

Cochrane (2015) argues that low interest rates are deflationary — a view that is sometimes called neo-Fisherian. In this paper John Cochrane argues that raising the interest rate and pegging it at a higher level will raise the inflation rate in accordance with the Fisher equation, and works through the details of this in a New Keynesian model.

Garcia-Schmidt and Woodford (2015) argue that the neo-Fisherian claim is incorrect and that low interest rates are both expansionary and inflationary. In making this argument Mariana Garcia-Schmidt and Michael Woodford use an approach that has a lot of common ground with the macro learning literature, which focuses on how economic agents might come to form expectations, and in particular whether coordination on a particular rational expectations equilibrium (REE) is plausible. This literature examines the stability of an REE under learning and has found that interest-rate pegs of the type discussed by Cochrane lead to REE that are not stable under learning. Garcia-Schmidt and Woodford (2015) obtain an analogous instability result using a new bounded-rationality approach that provides specific predictions for monetary policy. There are novel methodological and policy results in the Garcia-Schmidt and Woodford (2015) paper. However, we will here focus on the common ground with other papers in the learning literature that also argue against the neo-Fisherian claim.

The macro learning literature posits that agents start with boundedly rational expectations e.g. based on possibly non-RE forecasting rules. These expectations are incorporated into a “temporary equilibrium” (TE) environment that yields the model’s endogenous outcomes. The TE environment has two essential components: a decision-theoretic framework which specifies the decisions made by agents (households, firms etc.) given their states (values of exogenous and pre-determined endogenous state variables) and expectations;1 and a market-clearing framework that coordinates the agents’ decisions and determines the values of the model’s endogenous variables. It is useful to observe that, taken together, the two components of the TE environment yield the “TE-map” that takes expectations and (aggregate and idiosyncratic) states to outcomes.

The adaptive learning framework, which is the most popular formulation of learning in macro, proceeds recursively. Agents revise their forecast rules in light of the data realized in the previous period, e.g. by updating their forecast rules econometrically. The exogenous shocks are then realized, expectations are formed, and a new temporary equilibrium results. The equilibrium path under learning is defined recursively. One can then study whether the economy under adaptive learning converges over time to the REE of interest.2

The essential point of the learning literature is that an REE, to be credible, needs an explanation for how economic agents come to coordinate on it. This point is acute in models in which there are multiple RE solutions, as can arise in a wide range of dynamic macro models. This has been an issue in particular in the New Keynesian model, but it also arises, for example, in overlapping generations models and in RBC models with distortions. The macro learning literature provides a theory for how agents might learn over time to forecast rationally, i.e. to come to have RE (rational expectations). The adaptive learning approach found that agents will over time come to have rational expectations (RE) by updating their econometric forecasting models provided the REE satisfies “expectational stability” (E-stability) conditions. If these conditions are not satisfied then convergence to the REE will not occur and hence it is implausible that agents would be able to coordinate on the REE. E-stability then also acts as a selection device in cases in which there are multiple REE.
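[For reference, the E-stability condition mentioned here can be stated compactly; this is the standard formulation from the adaptive learning literature (Evans and Honkapohja 2001), not a quotation from the post. Write the agents' perceived law of motion with parameter vector phi, and let T(phi) be the actual law of motion implied by the temporary equilibrium map. An REE is a fixed point phi* = T(phi*), and it is E-stable if phi* is locally asymptotically stable under the notional-time differential equation]

    \frac{d\phi}{d\tau} = T(\phi) - \phi ,

[that is, roughly, if the eigenvalues of DT(phi*) all have real parts less than one.]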

The adaptive learning approach has the attractive feature that the degree of rationality of the agents is natural: though agents are boundedly rational, they are still fairly sophisticated, estimating and updating their forecasting models using statistical learning schemes. For a wide range of models this gives plausible results. For example, in the basic Muth cobweb model, the REE is learnable if supply and demand have their usual slopes; however, the REE, though still unique, is not learnable if the demand curve is upward sloping and steeper than the supply curve. In an overlapping generations model, Lucas (1986) used an adaptive learning scheme to show that though the overlapping generations model of money has multiple REE, learning dynamics converge to the monetary steady state, not to the autarky solution. Early analytical adaptive learning results were obtained in Bray and Savin (1986) and the formal framework was greatly extended in Marcet and Sargent (1989). The book by Evans and Honkapohja (2001) develops the E-stability principle and includes many applications. Many more applications of adaptive learning have been published over the last fifteen years.
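[A small simulation of the cobweb example, offered as a sketch rather than a reproduction of any of the cited papers. The reduced form is p_t = mu + alpha * E_{t-1} p_t + noise; agents forecast with the sample mean of past prices; and the REE p* = mu / (1 - alpha) is learnable when alpha < 1. Normal demand and supply slopes give alpha < 0, while an upward-sloping demand curve that is steeper than the supply curve gives alpha > 1. All numbers are assumptions.]

    # Decreasing-gain (sample-mean) learning in the Muth cobweb model; the
    # parameter values are illustrative assumptions.
    import random

    def learn(alpha, mu=2.0, periods=5000, seed=0):
        random.seed(seed)
        expectation = 0.0                                # initial non-rational forecast
        for t in range(1, periods + 1):
            price = mu + alpha * expectation + random.gauss(0.0, 0.1)
            expectation += (price - expectation) / t     # recursive sample mean
        return expectation

    print(learn(alpha=-0.5))   # usual slopes: converges to the REE, 2 / 1.5 = 1.33
    print(learn(alpha=1.5))    # steep upward-sloping demand: drifts away from the
                               # REE (which would be 2 / (1 - 1.5) = -4) rather than converging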

There are other approaches to learning in macro that have a related theoretical motivation, e.g. the “eductive” approach of Guesnerie asks whether mental reasoning by hyper-rational agents, with common knowledge of the structure and of the rationality of other agents, will lead to coordination on an REE. A fair amount is known about the connections between the stability conditions of the alternative adaptive and eductive learning approaches.3 The Garcia-Schmidt and Woodford (2015) “reflective equilibrium” concept provides a new approach that draws on both the adaptive and eductive strands as well as on the “calculation equilibrium” learning model of Evans and Ramey (1992, 1995, 1998). These connections are outlined in Section 2 of Garcia-Schmidt and Woodford (2015).4

The key insight of these various learning approaches is that one cannot simply take RE (which in the nonstochastic case reduces to PF, i.e. perfect foresight) as given. An REE is an equilibrium that requires an explanation of how it can be attained. The various learning approaches rely on a temporary equilibrium framework, outlined above, which goes back to Hicks (1946). A big advantage of the TE framework, when developed at the agent level and aggregated, is that, in conjunction with the learning model, it provides an explicit causal story for how the economy evolves over time.

The lack of a TE or learning framework in Cochrane (2011, 2015) is a critical omission. Cochrane (2009) criticized the Taylor principle in NK models as requiring implausible assumptions about what the Fed would do to enforce its desired equilibrium path; however, this view simply reflects the lack of a learning perspective. McCallum (2009) argued that for a monetary rule satisfying the Taylor principle the usual RE solution used by NK modelers is stable under adaptive learning, while the non-fundamental bubble solution is not. Cochrane (2009, 2011) claimed that these results hinged on the observability of shocks. In our paper “Observability and Equilibrium Selection,” Evans and McGough (2015b), we develop the theory of adaptive learning when fundamental shocks are unobservable, and then, as a central application, we consider the flexible-price NK model used by Cochrane and McCallum in their debate. We carefully develop this application using an agent-level temporary equilibrium approach and close the model under adaptive learning. We find that if the Taylor principle is satisfied, then the usual solution is robustly stable under learning, while the non-fundamental price-level bubble solution is not. Adaptive learning thus operates as a selection criterion, and it singles out the usual RE solution adopted by proponents of the NK model. Furthermore, when monetary policy does not obey the Taylor principle, neither of the solutions is robustly stable under learning; an interest-rate peg is an extreme form of such a policy, and the adaptive learning perspective cautions that it will lead to instability. We discuss this further below.

The agent-level/adaptive learning approach used in Evans and McGough (2015b) allows us to address several points raised by Cochrane. He is concerned that there is no causal mechanism that pins down prices. The TE map provides this, in the usual way, through market clearing given expectations of future variables. Cochrane also states that the lack of a mechanism means that the NK paradigm requires policymakers to be interpreted as threatening to “blow up” the economy if the standard solution is not selected by agents.5 This is not the case. As we say in our paper (pp. 24-25), “inflation is determined in temporary equilibrium, based on expectations that are revised over time in response to observed data. Threats by the Fed are neither made nor needed ... [agents simply] make forecasts the same way that time-series econometricians typically forecast: by estimating least-squares projections of the variables being forecasted on the relevant observables.”

Let us now return to the issue of interest rate pegs and the impact of changing the level of an interest rate peg. The central adaptive learning result is that interest rate pegs give REE that are unstable under learning. This result was first given in Howitt (1992). A complementary result was given in Evans and Honkapohja (2003) for time-varying interest rate pegs designed to optimally respond to fundamental shocks. As discussed above, Evans and McGough (2015b) show that the instability result also obtains when the fundamental shocks are not observable and the Taylor principle is not satisfied. The economic intuition in the NK model is very strong and is essentially as follows. Suppose we are at an REE (or PFE) at a fixed interest rate and with expected inflation at the level dictated by the Fisher equation. Suppose that there is a small increase in expected inflation. With a fixed nominal interest rate this leads to a lower real interest rate, which increases aggregate demand and output. This in turn leads to higher inflation, which under adaptive learning leads to higher expected inflation, destabilizing the system. (The details of the evolution of expectations and the model dynamics depend, of course, on the precise decision rules and econometric forecasting model used by agents). In an analogous way, expected inflation slightly lower than the REE/PFE level leads to cumulatively lower levels of inflation, output and expected inflation.
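The following Python sketch illustrates this intuition in a stylized linearized NK model with steady-state (Euler-equation-style) expectations and constant-gain learning. The specification and parameter values are illustrative assumptions of mine, not the setup of Evans and McGough (2015b).

    import numpy as np

    beta, kappa, sigma, gain = 0.99, 0.10, 1.0, 0.05   # illustrative parameters
    i_peg = 0.02                        # pegged nominal rate (real rate normalized to zero),
                                        # so the PFE/REE has pi = i_peg
    pi_e = i_peg + 0.001                # expected inflation starts slightly above its REE value
    x_e = (1 - beta) * i_peg / kappa    # output-gap expectation starts at its REE value

    path = []
    for t in range(200):
        # Temporary equilibrium given expectations (the TE map)
        x = x_e - (1.0 / sigma) * (i_peg - pi_e)   # IS relation with a fixed nominal rate
        pi = beta * pi_e + kappa * x               # New Keynesian Phillips curve
        # Adaptive learning: revise expectations toward realized outcomes
        pi_e += gain * (pi - pi_e)
        x_e += gain * (x - x_e)
        path.append(pi)

    print(path[0], path[-1])   # inflation drifts steadily away from the peg: instability

Starting expected inflation slightly below the peg instead produces a cumulative downward drift in inflation and output, mirroring the deflationary case described above.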

Returning to the NK model, additional insight is obtained by considering a nonlinear NK model with a global Taylor rule that leads to two steady states. This model was studied by Benhabib, Schmitt-Grohe, and Uribe in a series of papers, e.g. Benhabib, Schmitt-Grohe, and Uribe (2001). They show that with an interest-rate rule satisfying the Taylor principle at the target inflation rate, the zero lower bound (ZLB) on interest rates implies the existence of an unintended PFE low-inflation or deflation steady state (and indeed a continuum of PFE paths to it) at which the Taylor principle does not hold; a special case is a local interest rate peg at the ZLB. From a PF/RE viewpoint these are all valid solutions. From the adaptive learning perspective, however, they differ in terms of stability. Evans, Guse, and Honkapohja (2008) and Benhabib, Evans, and Honkapohja (2014) show that the targeted steady state is locally stable under learning with a large basin of attraction, while the unintended low-inflation/deflation steady state is not locally stable under learning: small deviations from it lead either back to the targeted steady state or into a deflation trap, in which inflation and output fall over time. From a learning viewpoint this deflation trap should be a major concern for policy.6,7
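A stylized piecewise-linear version of the two-steady-state logic (a simplification for illustration, not the exact specification in Benhabib, Schmitt-Grohe, and Uribe (2001)) runs as follows. Suppose the interest-rate rule is

$i(\pi) = \max\{0,\ r + \pi^* + \phi_\pi(\pi - \pi^*)\}$, with $\phi_\pi > 1$,

and that steady states must also satisfy the Fisher relation $i = r + \pi$. There are then two intersections: the intended steady state $\pi = \pi^*$, where the Taylor principle holds, and an unintended ZLB steady state at $\pi = -r$ (mild deflation at the rate of the steady-state real interest rate), where the rule is flat and the Taylor principle cannot hold. As noted above, learning dynamics are locally stable at the first and unstable at the second.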

Finally, let us return to Cochrane (2015). Cochrane points out that at the ZLB peg there has been low but relatively steady (or gently declining) inflation in the US, rather than a serious deflationary spiral. This point echoes Jim Bullard’s concern in Bullard (2010) about the adaptive learning instability result: we effectively have an interest rate peg at the ZLB but we seem to have a fairly stable inflation rate, so does this indicate that the learning literature may here be on the wrong track?

This issue is addressed by Evans, Honkapohja, and Mitra (2015) (EHM2015). They first point out that from a policy viewpoint the major concern at the ZLB has not been low inflation or deflation per se, but rather its association with low levels of aggregate output, high unemployment and more general stagnation. However, the deflation steady state at the ZLB in the NK model has virtually the same level of aggregate output as the targeted steady state. The PFE at the ZLB interest rate peg is not a low-output equilibrium, and if we were in that equilibrium there would not be the concern that policymakers have shown. (Temporary discount rate or credit market shocks can of course lead to recession at the ZLB, but their low-output effects vanish as soon as the shocks do.)

In EHM2015 steady mild deflation is consistent with low output and stagnation at the ZLB.8 They note that many commentators have remarked that the behavior of the NK Phillips relation differs from standard theory at very low output levels. EHM2015 therefore impose lower bounds on inflation and consumption, which can become relevant when agents become sufficiently pessimistic. If the inflation lower bound is below the inflation rate at the unintended low steady state, a third “stagnation” steady state is created at the ZLB. The stagnation steady state, like the targeted steady state, is locally stable under learning, and it arises under learning if output and inflation expectations are too pessimistic. A large temporary fiscal stimulus can dislodge the economy from the stagnation trap, and a smaller stimulus can be sufficient if applied earlier. Raising interest rates does not help in the stagnation state, and at an early stage it can push the economy into the stagnation trap.

In summary, the learning approach argues forcefully against the neo-Fisherian view.

Footnotes

1With infinitely-lived agents there are several natural implementations of optimizing decision rules, including short-horizon Euler-equation or shadow-price learning approaches (see, e.g., Evans and Honkapohja (2006) and Evans and McGough (2015a)) and the anticipated utility or infinite-horizon approaches of Preston (2005) and Eusepi and Preston (2010).

2An additional advantage of using learning is that learning dynamics give expanded scope for fitting the data as well as explaining experimental findings.

3The TE map is the basis for the map at the core of any specified learning scheme, which in turn determines the associated stability conditions.

4There are also connections to both the infinite-horizon learning approach to anticipated policy developed in Evans, Honkapohja, and Mitra (2009) and the eductive stability framework in Evans, Guesnerie, and McGough (2015).

5This point is repeated in Section 6.4 of Cochrane (2015): “The main point: such models presume that the Fed induces instability in an otherwise stable economy, a non-credible off-equilibrium threat to hyperinflate the economy for all but one chosen equilibrium.”

6And the risk of sinking into deflation clearly has been a major concern for policymakers in the US, during and following both the 2001 recession and the 2007-2009 recession. It has remained a concern in Europe and Japan, as it was in Japan during the 1990s.

7Experimental work with stylized NK economies has found that entering deflation traps is a real possibility. See Hommes and Salle (2015).

8See also Evans (2013) for a partial and less general version of this argument.

References

Benhabib, J., G. W. Evans, and S. Honkapohja (2014): “Liquidity Traps and Expectation Dynamics: Fiscal Stimulus or Fiscal Austerity?,” Journal of Economic Dynamics and Control, 45, 220—238.

Benhabib, J., S. Schmitt-Grohe, and M. Uribe (2001): “The Perils of Taylor Rules,” Journal of Economic Theory, 96, 40—69.

Bray, M., and N. Savin (1986): “Rational Expectations Equilibria, Learning, and Model Specification,” Econometrica, 54, 1129—1160.

Bullard, J. (2010): “Seven Faces of The Peril,” Federal Reserve Bank of St. Louis Review, 92, 339—352.

Cochrane, J. H. (2009): “Can Learnability Save New Keynesian Models?,” Journal of Monetary Economics, 56, 1109—1113.

_______ (2015): “Do Higher Interest Rates Raise or Lower Inflation?,” Working paper, University of Chicago Booth School of Business.

Dixon, H., and N. Rankin (eds.) (1995): The New Macroeconomics: Imperfect Markets and Policy Effectiveness. Cambridge University Press, Cambridge UK.

Eusepi, S., and B. Preston (2010): “Central Bank Communication and Expectations Stabilization,” American Economic Journal: Macroeconomics, 2, 235—271.

Evans, G.W. (2013): “The Stagnation Regime of the New Keynesian Model and Recent US Policy,” in Sargent and Vilmunen (2013), chap. 4.

Evans, G. W., R. Guesnerie, and B. McGough (2015): “Eductive Stability in Real Business Cycle Models,” mimeo.

Evans, G. W., E. Guse, and S. Honkapohja (2008): “Liquidity Traps, Learning and Stagnation,” European Economic Review, 52, 1438—1463.

Evans, G. W., and S. Honkapohja (2001): Learning and Expectations in Macroeconomics. Princeton University Press, Princeton, New Jersey.

_______ (2003): “Expectations and the Stability Problem for Optimal Monetary Policies,” Review of Economic Studies, 70, 807—824.

_______ (2006): “Monetary Policy, Expectations and Commitment,” Scandinavian Journal of Economics, 108, 15—38.

Evans, G. W., S. Honkapohja, and K. Mitra (2009): “Anticipated Fiscal Policy and Learning,” Journal of Monetary Economics, 56, 930—953.

_______ (2015): “Expectations, Stagnation and Fiscal Policy,” Working paper, University of Oregon.

Evans, G. W., and B. McGough (2015a): “Learning to Optimize,” mimeo, University of Oregon.

_______ (2015b): “Observability and Equilibrium Selection,” mimeo, University of Oregon.

Evans, G. W., and G. Ramey (1992): “Expectation Calculation and Macroeconomic Dynamics,” American Economic Review, 82, 207—224.

_______ (1995): “Expectation Calculation, Hyperinflation and Currency Collapse,” in Dixon and Rankin (1995), chap. 15, pp. 307—336.

_______ (1998): “Calculation, Adaptation and Rational Expectations,” Macroeconomic Dynamics, 2, 156—182.

Garcia-Schmidt, M., and M. Woodford (2015): “Are Low Interest Rates Deflationary? A Paradox of Perfect Foresight Analysis,” Working paper, Columbia University.

Hicks, J. R. (1946): Value and Capital, Second edition. Oxford University Press, Oxford UK.

Hommes, C. H., D. Massaro, and I. Salle (2015): “Monetary and Fiscal Policy Design at the Zero Lower Bound: Evidence from the Lab,” mimeo, CeNDEF, University of Amsterdam.

Howitt, P. (1992): “Interest Rate Control and Nonconvergence to Rational Expectations,” Journal of Political Economy, 100, 776—800.

Lucas, Jr., R. E. (1986): “Adaptive Behavior and Economic Theory,” Journal of Business, Supplement, 59, S401—S426.

Marcet, A., and T. J. Sargent (1989): “Convergence of Least-Squares Learning Mechanisms in Self-Referential Linear Stochastic Models,” Journal of Economic Theory, 48, 337—368.

McCallum, B. T. (2009): “Inflation Determination with Taylor Rules: Is New-Keynesian Analysis Critically Flawed?,” Journal of Monetary Economics, 56, 1101—1108.

Preston, B. (2005): “Learning about Monetary Policy Rules when Long-Horizon Expectations Matter,” International Journal of Central Banking, 1, 81—126.

Sargent, T. J., and J. Vilmunen (eds.) (2013): Macroeconomics at the Service of Public Policy. Oxford University Press.

Monday, December 28, 2015

Striving for Balance in Economics: Towards a Theory of the Social Determination of Behavior

Karla Hoff and Joe Stiglitz:

Striving for Balance in Economics: Towards a Theory of the Social Determination of Behavior, by Karla Hoff and Joseph E. Stiglitz, NBER Working Paper No. 21823, issued in December 2015: Abstract: This paper is an attempt to broaden the standard economic discourse by importing insights into human behavior not just from psychology, but also from sociology and anthropology. Whereas the concept of the decision-maker is the rational actor in standard economics and, in early work in behavioral economics, the quasi-rational actor influenced by the context of the moment of decision-making, in some recent work in behavioral economics the decision-maker could be called the enculturated actor. This actor's preferences and cognition are subject to two deep social influences: (a) the social contexts to which he has become exposed and, especially, accustomed; and (b) the cultural mental models—including categories, identities, narratives, and worldviews—that he uses to process information. We trace how these factors shape individual behavior through the endogenous determination of both preferences and the lenses through which individuals see the world—their perception, categorization, and interpretation of situations. We offer a tentative taxonomy of the social determinants of behavior and describe results of controlled and natural experiments that only a broader view of the social determinants of behavior can plausibly explain. The perspective suggests new tools to promote well-being and economic development. [Open Link]

'ANT-style critique of ABM'

Daniel Little:

A short recent article in the Journal of Artificial Societies and Social Simulation by Venturini, Jensen, and Latour lays out a critique of the explanatory strategy associated with agent-based modeling of complex social phenomena (link). (Thanks to Mark Carrigan for the reference via Twitter; @mark_carrigan.) Tommaso Venturini is an expert on digital media networks at Sciences Po (link), Pablo Jensen is a physicist who works on social simulations, and Bruno Latour is -- Bruno Latour. Readers who recall recent posts here on the strengths and weaknesses of ABM models as a basis for explaining social conflict will find the article interesting (link). VJ&L argue that agent-based models -- really, all simulations that proceed from the micro to the macro -- are both flawed and unnecessary. They are flawed because they unavoidably resort to assumptions about agents and their environments that reduce the complexity of social interaction to an unacceptable denominator; and they are unnecessary because it is now possible to trace directly the kinds of processes of social interaction that simulations are designed to model. The "big data" available concerning individual-to-individual interactions permits direct observation of most large social processes, or so they appear to hold.

Here are the key criticisms of ABM methodology that the authors advance:

  • Most of them, however, partake of the same conceptual approach in which individuals are taken as discrete and interchangeable 'social atoms' (Buchanan 2007) out of which social structures emerge as macroscopic characteristics (viscosity, solidity...) emerge from atomic interactions in statistical physics (Bandini et al. 2009). (1.2)
  • most simulations work only at the price of simplifying the properties of micro-agents, the rules of interaction and the nature of macro-structures so that they conveniently fit each other. (1.4)
  • micro-macro models assume by construction that agents at the local level are incapable to understand and control the phenomena at the global level. (1.5)

And here is their key claim:

  • Empirical studies show that, contrarily to what most social simulations assume, collective action does not originate at the micro level of individual atoms and does not end up in a macro level of stable structures. Instead, actions distribute in intricate and heterogeneous networks than fold and deploy creating differences but not discontinuities. (1.11) 

This final statement could serve as a high-level paraphrase of actor-network theory, as presented by Latour in Reassembling the Social: An Introduction to Actor-Network-Theory. (Here is a brief description of actor-network theory and its minimalist social ontology; link.)

These criticisms parallel some of my own misgivings about simulation models, though I am somewhat more sympathetic to their use than VJ&L. Here are some of the concerns raised in earlier posts about the validity of various ABM approaches to social conflict (link, link):

  • Simulations often produce results that appear to be artifacts rather than genuine social tendencies.
  • Simulations leave out important features of the social world that are prima facie important to outcomes: for example, quality of leadership, quality and intensity of organization, content of appeals, differential pathways of appeals, and variety of political psychologies across agents.
  • The influence of organizations is particularly important and non-local.
  • Simulations need to incorporate actors at a range of levels, from individual to club to organization.

And here is the conclusion I drew in that post:

  • But it is very important to recognize the limitations of these models as predictors of outcomes in specific periods and locations of unrest. These simulation models probably don't shed much light on particular episodes of contention in Egypt or Tunisia during the Arab Spring. The "qualitative" theories of contention that have been developed probably shed more light on the dynamics of contention than the simulations do at this point in their development.

But the confidence expressed by VJ&L in the new observability of social processes through digital tracing seems excessive to me. They offer a few good examples that support their case -- opinion change, for example (1.9). Here they argue that it is possible to map or track opinion change directly through digital footprints of interaction (Twitter, Facebook, blogging), and this is superior to abstract modeling of opinion change through social networks. No doubt we can learn something important about the dynamics of opinion change through this means.

But this is a very special case. Can we similarly "map" the spread of new political ideas and slogans during the Arab Spring? No, because the vast majority of those present in Tahrir Square were not tweeting and texting their experiences. Can we map the spread of anti-Muslim attitudes in Gujarat in 2002 leading to massive killings of Muslims in a short period of time? No, for the same reason: activists and nationalist gangs did not do us the historical courtesy of posting their thought processes in their Twitter feeds either. Can we study the institutional realities of the fiscal system of the Indonesian state through its digital traces? No. Can we study the prevalence and causes of official corruption in China through digital traces? Again, no.

In other words, there is a huge methodological problem with the idea of digital traceability, deriving from the fact that most social activity leaves no digital traces. There are problem areas where the traces are more accessible and more indicative of the underlying social processes; but this is a far cry from the utopia of total social legibility that appears to underlie the viewpoint expressed here.

So I'm not persuaded that the tools of digital tracing provide the full alternative to social simulation that these authors assert. And this implies that social simulation tools remain an important component of the social scientist's toolbox.

Wednesday, December 16, 2015

'The Methodology of Empirical Macroeconomics'

Brad DeLong:

Must-Read: Kevin Hoover: The Methodology of Empirical Macroeconomics: The combination of representative-agent modeling and utility-based “microfoundations” was always a game of intellectual Three-Card Monte. Why do you ask? Why don’t we fund sociologists to investigate for what reasons–other than being almost guaranteed to produce conclusions ideologically-pleasing to some–it has flourished for a generation in spite of having no empirical support and no theoretical coherence?
Kevin Hoover: The Methodology of Empirical Macroeconomics: “Given what we know about representative-agent models…
…there is not the slightest reason for us to think that the conditions under which they should work are fulfilled. The claim that representative-agent models provide microfoundations succeeds only when we steadfastly avoid the fact that representative-agent models are just as aggregative as old-fashioned Keynesian macroeconometric models. They do not solve the problem of aggregation; rather they assume that it can be ignored. ...

Saturday, November 28, 2015

'Demand, Supply, and Macroeconomic Models'

Paul Krugman on macroeconomic models:

Demand, Supply, and Macroeconomic Models: I’m supposed to do a presentation next week about “shifts in economic models,” which has me trying to systematize my thought about what the crisis and aftermath have and haven’t changed my understanding of macroeconomics. And it seems to me that there is an important theme here: it’s the supply side, stupid. ...

Friday, November 20, 2015

'Some Big Changes in Macroeconomic Thinking from Lawrence Summers'

Adam Posen:

Some Big Changes in Macroeconomic Thinking from Lawrence Summers: ...At a truly fascinating and intense conference on the global productivity slowdown we hosted earlier this week, Lawrence Summers put forward some newly and forcefully formulated challenges to the macroeconomic status quo in his keynote speech. [pdf] ...
The first point Summers raised ... pointed out that a major global trend over the last few decades has been the substantial disemployment—or withdrawal from the workforce—of relatively unskilled workers. ... In other words, it is a real puzzle to observe simultaneously multi-year trends of rising non-employment of low-skilled workers and declining measured productivity growth. ...
Another related major challenge to standard macroeconomics Summers put forward ... came in response to a question about whether he exaggerated the displacement of workers by technology. ... Summers bravely noted that if we suppose the “simple” non-economists who thought technology could destroy jobs without creating replacements in fact were right after all, then the world in some aspects would look a lot like it actually does today...
The third challenge ... Summers raised is perhaps the most profound... In a working paper the Institute just released, Olivier Blanchard, Eugenio Cerutti, and Summers examine essentially all of the recessions in the OECD economies since the 1960s, and find strong evidence that in most cases the level of GDP is lower five to ten years afterward than any prerecession forecast or trend would have predicted. In other words, to quote Summers’ speech..., “the classic model of cyclical fluctuations, that assume that they take place around the given trend is not the right model to begin the study of the business cycle. And [therefore]…the preoccupation of macroeconomics should be on lower frequency fluctuations that have consequences over long periods of time [that is, recessions and their aftermath].”
I have a lot of sympathy for this view. ... The very language we use to speak of business cycles, of trend growth rates, of recoveries to those perhaps non-stationary trends, and so on—which reflects the underlying mental framework of most macroeconomists—would have to be rethought.
Productivity-based growth requires disruption in economic thinking just as it does in the real world.

The  full text explains these points in more detail (I left out one point on the measurement of productivity).

Friday, November 13, 2015

'When Economics Works and When it Doesn’t'

Part of an interview of Dani Rodrik:

Q. You give a couple of examples in the book of the way theoretical errors can lead to policy errors. The first example you give concerns the “efficient markets hypothesis”. What role did an overestimation of the scope and explanatory power of that hypothesis play in the run-up to the global financial crisis of 2007-08?
A. If we take as our central model one under which the efficient markets hypothesis is correct—and that’s a model where there are a number of critical assumptions: one is rationality (we rule out behavioural aspects like bandwagons, excessive optimism and so on); second, we rule out externalities and agency problems—there’s a natural tendency in the policy world to liberalise as many markets as possible and to make regulation as light as possible. In the run-up to the financial crisis, if you’d looked at the steady increase in house prices or the growth of the shadow banking system from the perspective of the efficient markets hypothesis, they wouldn’t have bothered you at all. You’d tell a story about how wonderful financial liberalisation and innovation are—so many people, who didn’t have access before to mortgages, were now able to afford houses; here was a supreme example of free markets providing social benefits.
But if you took the same [set of] facts, and applied the kind of models that people who had been looking at sovereign debt crises in emerging markets had been developing—boom and bust cycles, behavioural biases, agency problems, externalities, too-big-to-fail problems—if you applied those tools to the same facts, you’d get a very different kind of story. I wish we’d put greater weight on stories of the second kind rather than the first. We’d have been better off if we’d done so.

Tuesday, November 03, 2015

Summers: Advanced Economies are So Sick

Larry Summers:

Advanced economies are so sick we need a new way to think about them: ...Hysteresis Effects: Blanchard, Cerutti, and I look at a sample of over 100 recessions from industrial countries over the last 50 years and examine their impact on long run output levels in an effort to understand what Blanchard and I had earlier called hysteresis effects. We find that in the vast majority of cases output never returns to previous trends. Indeed there appear to be more cases where recessions reduce the subsequent growth of output than where output returns to trend. In other words “super hysteresis,” to use Larry Ball’s term, is more frequent than “no hysteresis.” ...
In subsequent work Antonio Fatas and I have looked at the impact of fiscal policy surprises on long run output and long run output forecasts using a methodology pioneered by Blanchard and Leigh. ... We find that fiscal policy changes have large continuing effects on levels of output suggesting the importance of hysteresis. ...
Towards a New Macroeconomics: My separate comments in the volume develop an idea I have pushed with little success for a long time. Standard new Keynesian macroeconomics essentially abstracts away from most of what is important in macroeconomics. To an even greater extent this is true of the DSGE (dynamic stochastic general equilibrium) models that are the workhorse of central bank staffs and much practically oriented academic work.
Why? New Keynesian models imply that stabilization policies cannot affect the average level of output over time and that the only effect policy can have is on the amplitude of economic fluctuations, not on the level of output. This assumption is problematic...
As macroeconomics was transformed in response to the Depression of the 1930s and the inflation of the 1970s, another 40 years later it should again be transformed in response to stagnation in the industrial world. Maybe we can call it the Keynesian New Economics.

Friday, October 30, 2015

'The Missing Lowflation Revolution'

Antonio Fatás:

The missing lowflation revolution: It will soon be eight years since the US Federal Reserve decided to bring its interest rate down to 0%. Other central banks have spent a similar number of years (or much longer in the case of Japan) stuck at the zero lower bound. In these eight years central banks have used all their available tools to increase inflation closer to their target and boost growth, with limited success. GDP growth has been weak or anemic, and there is very little hope that economies will ever go back to their pre-crisis trends.
Some of these trends have challenged the traditional view of academic economists and policy makers about how an economy works. ...
My own sense is that the view among academics and policy makers is not changing fast enough and some are just assuming that this would be a one-time event that will not be repeated in the future (even if we are still not out of the current event!).
The comparison with the 70s, when stagflation produced a large change in the way academics and policy makers thought about their models and about the framework for monetary policy, is striking. During those years a high inflation and low growth environment created a revolution among academics (moving away from the simple Phillips Curve) and policy makers (switching to anti-inflationary and independent central banks). How many more years of zero interest rates will it take to witness a similar change in our economic analysis?

Wednesday, October 14, 2015

'In Search of the Science in Economics'

In case this is something you want to discuss (if not, that's okay too -- I got tired of this debate long, long ago):

In Search of the Science in Economics, by Noah Smith: ...I’d like to discuss the idea that economics is only a social science, and should discard its mathematical pretensions and return to a more literary approach. 
First, let’s talk about the idea that when you put the word “social” in front of “science,” everything changes. The idea here is that you can’t discover hard-and-fast principles that govern human behavior, or the actions of societies, the way physicists derive laws of motion for particles or biologists identify the actions of the body’s various systems. You hear people say this all the time.
But is it true? As far as I can tell, it’s just an assertion, with little to back it up. No one has discovered a law of the universe that you can’t discover patterns in human societies. Sure, it’s going to be hard -- a human being is vastly more complicated than an electron. But there is no obvious reason why the task is hopeless. 
To the contrary, there have already been a great many successes. ...
What about math? .... I do think economists would often benefit from closer observation of the real world. ... But that doesn’t mean math needs to go. Math allows quantitative measurement and prediction, which literary treatises do not. ...
So yes, social science can be science. There will always be a place in the world for people who walk around penning long, literary tomes full of vague ideas about how humans and societies function. But thanks to quantitative social science, we now have additional tools at our disposal. Those tools have already improved our world, and to throw them away would be a big mistake.

This is from a post of mine in August, 2009 on the use of mathematics in economics:

Lucas roundtable: Ask the right questions, by Mark Thoma: In his essay, Robert Lucas defends macroeconomics against the charge that it is "valueless, even harmful", and that the tools economists use are "spectacularly useless".

I agree that the analytical tools economists use are not the problem. We cannot fully understand how the economy works without employing models of some sort, and we cannot build coherent models without using analytic tools such as mathematics. Some of these tools are very complex, but there is nothing wrong with sophistication so long as sophistication itself does not become the main goal, and sophistication is not used as a barrier to entry into the theorist's club rather than an analytical device to understand the world.

But all the tools in the world are useless if we lack the imagination needed to build the right models. Models are built to answer specific questions. When a theorist builds a model, it is an attempt to highlight the features of the world the theorist believes are the most important for the question at hand. For example, a map is a model of the real world, and sometimes I want a road map to help me find my way to my destination, but other times I might need a map showing crop production, or a map showing underground pipes and electrical lines. It all depends on the question I want to answer. If we try to make one map that answers every possible question we could ever ask of maps, it would be so cluttered with detail it would be useless. So we necessarily abstract from real world detail in order to highlight the essential elements needed to answer the question we have posed. The same is true for macroeconomic models.

But we have to ask the right questions before we can build the right models.

The problem wasn't the tools that macroeconomists use, it was the questions that we asked. The major debates in macroeconomics had nothing to do with the possibility of bubbles causing a financial system meltdown. That's not to say that there weren't models here and there that touched upon these questions, but the main focus of macroeconomic research was elsewhere. ...

The interesting question to me, then, is why we failed to ask the right questions. ... Was it lack of imagination, was it the sociology within the profession, the concentration of power over what research gets highlighted, the inadequacy of the tools we brought to the problem, the fact that nobody will ever be able to predict these types of events, or something else?

It wasn't the tools, and it wasn't lack of imagination. As Brad DeLong points out, the voices were there—he points to Michael Mussa for one—but those voices were not heard. Nobody listened even though some people did see it coming. So I am more inclined to cite the sociology within the profession or the concentration of power as the main factors that caused us to dismiss these voices. ...

I don't know for sure the extent to which the ability of a small number of people in the field to control the academic discourse led to a concentration of power that stood in the way of alternative lines of investigation, or the extent to which the ideology that market prices always tend to move toward their long-run equilibrium values and that markets will self-insure caused us to ignore voices that foresaw the developing bubble and coming crisis. But something caused most of us to ask the wrong questions, and to dismiss the people who got it right, and I think one of our first orders of business is to understand how and why that happened.

Monday, October 05, 2015

'Is Economics Research Replicable? Usually Not.'

Andrew Chang and Phillip Li:

Is Economics Research Replicable? Sixty Published Papers from Thirteen Journals Say “Usually Not”, by Andrew C. Chang and Phillip Li, Finance and Economics Discussion Series 2015-083. Washington: Board of Governors of the Federal Reserve System: Abstract We attempt to replicate 67 papers published in 13 well-regarded economics journals using author-provided replication files that include both data and code. Some journals in our sample require data and code replication files, and other journals do not require such files. Aside from 6 papers that use confidential data, we obtain data and code replication files for 29 of 35 papers (83%) that are required to provide such files as a condition of publication, compared to 11 of 26 papers (42%) that are not required to provide data and code replication files. We successfully replicate the key qualitative result of 22 of 67 papers (33%) without contacting the authors. Excluding the 6 papers that use confidential data and the 2 papers that use software we do not possess, we replicate 29 of 59 papers (49%) with assistance from the authors. Because we are able to replicate less than half of the papers in our sample even with help from the authors, we assert that economics research is usually not replicable. We conclude with recommendations on improving replication of economics research.

Saturday, September 26, 2015

''A Few Less Obvious Answers'' on What is Wrong with Macroeconomics

From an interview with Olivier Blanchard:

...IMF Survey: In pushing the envelope, you also hosted three major Rethinking Macroeconomics conferences. What were the key insights and what are the key concerns on the macroeconomic front? 
Blanchard: Let me start with the obvious answer: That mainstream macroeconomics had taken the financial system for granted. The typical macro treatment of finance was a set of arbitrage equations, under the assumption that we did not need to look at who was doing what on Wall Street. That turned out to be badly wrong.
But let me give you a few less obvious answers:
The financial crisis raises a potentially existential crisis for macroeconomics. Practical macro is based on the assumption that there are fairly stable aggregate relations, so we do not need to keep track of each individual, firm, or financial institution—that we do not need to understand the details of the micro plumbing. We have learned that the plumbing, especially the financial plumbing, matters: the same aggregates can hide serious macro problems. How do we do macro then?
As a result of the crisis, a hundred intellectual flowers are blooming. Some are very old flowers: Hyman Minsky’s financial instability hypothesis. Kaldorian models of growth and inequality. Some propositions that would have been considered anathema in the past are being proposed by "serious" economists: For example, monetary financing of the fiscal deficit. Some fundamental assumptions are being challenged, for example the clean separation between cycles and trends: Hysteresis is making a comeback. Some of the econometric tools, based on a vision of the world as being stationary around a trend, are being challenged. This is all for the best.
Finally, there is a clear swing of the pendulum away from markets towards government intervention, be it macro prudential tools, capital controls, etc. Most macroeconomists are now solidly in a second best world. But this shift is happening with a twist—that is, with much skepticism about the efficiency of government intervention. ...

Sunday, September 13, 2015

'Botox for Development'

Paul Romer:

Botox for Development: In a talk at the World Bank that I gave last week, I repeated a riff that I’ve used before. Suppose your internist told you:

“The x-ray shows a mass that is probably cancer, but we don’t have any good randomized clinical trials showing that your surgeon’s recommendation, operating to remove it, actually causes the remission that tends to follow. However, we do have an extremely clever clinical trial showing conclusively that Botox will make you look younger. So my recommendation is that you wait for some better studies before doing anything about the tumor but that I give you some Botox injections.”

If it were me, I’d get a new internist.

To be sure, researchers would always prefer data from randomized treatments... Unfortunately, randomization is not free. It is available at low or moderate cost for some treatments and at a prohibitively high cost for other potentially important treatments. ...

 I work on high expected-return policies that can be implemented, with no concern about whether I will be able to publish the results from this work in the standard economics journals....

I have the good fortune of knowing that I can be a successful academic even if the journals will not publish results from the work I do. I realize that many other economists do not have this freedom. I understand that they have to respond to the incentives they face, and that the publication process biases their work in the direction of policies that are more like Botox than surgery.

But we can all work to change the existing equilibrium. It is good that economists pay careful attention to identification and causality. This inclination will be even more important as new sources of “big” non-experimental data become available. But it is not the only good thing we can do. We have to weigh the tradeoffs we face between getting precise answers about such policies as setting up women’s self-help groups (the example that Lant Pritchett uses as his illustration of what I am calling Botox for economic development) versus such other policies as facilitating urbanization or migration that offer returns that are uncertain but have an expected value that is larger by many orders of magnitude.

If economists can’t understand the tradeoff between risk and expected return, who can?

Saturday, September 05, 2015

'Range of Reactions to Realism about the Social World'

Daniel Little:

Range of reactions to realism about the social world: My recent post on realism in the social realm generated quite a bit of commentary, which I'd like to address here.

Brad DeLong offered an incredulous response -- he seems to think that any form of scientific realism is ridiculous (link). He refers to the predictive success of Ptolemy's epicycles, and then says, "But just because your theory is good does not mean that the entities in your theory are "really there", whatever that might mean...." I responded on Twitter: "Delong doesn't like scientific realism -- really? Electrons, photons, curvature of space - all convenient fictions?" The position of instrumentalism is intellectually untenable, in my opinion -- the idea that scientific theories are just convenient computational devices for summarizing a range of observations. It is hard to see why we would have confidence in any complex technology depending on electricity, light, gravity, the properties of metals and semiconductors, if we didn't think that our scientific theories of these things were approximately true of real things in the world. So general rejection of scientific realism seems irrational to me. But the whole point of the post was that this reasoning doesn't extend over to the social sciences very easily; if we are to be realists about social entities, it needs to be on a different basis than the overall success of theories like Keynesianism, Marxism, or Parsonian sociology. They just aren't that successful!

There were quite a few comments (71) when Mark Thoma reposted this piece on economistsview. A number of the commentators were particularly interested in the question of the realism of economic knowledge. Daniel Hausman addresses the question of realism in economics in his article on the philosophy of economics in the Stanford Encyclopedia of Philosophy (link):

Economic methodologists have paid little attention to debates within philosophy of science between realists and anti-realists (van Fraassen 1980, Boyd 1984), because economic theories rarely postulate the existence of unobservable entities or properties, apart from variants of “everyday unobservables,” such as beliefs and desires. Methodologists have, on the other hand, vigorously debated the goals of economics, but those who argue that the ultimate goals are predictive (such as Milton Friedman) do so because of their interest in policy, not because they seek to avoid or resolve epistemological and semantic puzzles concerning references to unobservables.

Examples of economic concepts that commentators seemed to think could be interpreted realistically include concepts such as "economic disparity".  But this isn't a particularly arcane or unobservable theoretical concept. There is a lot of back-and-forth on the meaning of investment in Keynes's theory -- is it a well-defined concept? Is it a concept that can be understood realistically? The question of whether economics consists of a body of theory that might be interpreted realistically is a complicated one. Many technical economic concepts seem not to be referential; instead, they seem to be abstract concepts summarizing the results of large numbers of interactions by economic agents.

The most famous discussion of realism in economics is that offered by Milton Friedman in relation to the idea of economic rationality (Essays in Positive Economics); he doubts that economists need to assume that real economic actors actually make their choices on the basis of economic rationality. Rather, according to Friedman this is just a simplifying assumption that allows us to summarize a vast range of behavior. This is a hard position to accept, though; if agents are not making calculating choices about costs and benefits, then why should we expect a market to work in the ways our theories say it should? (Here is a good critique by Bruce Caldwell of Friedman's instrumentalism; link.)

And what about the concept of a market itself? Can we understand this concept realistically? Do markets really exist? Maybe the most we can say is something like this: there are many social settings where stuff is produced and exchanged. When exchange is solely or primarily governed by the individual self-interest of the buyers and sellers, we can say that a market exists. But we must also be careful to add that there are many different institutional and social settings where this condition is satisfied, so there is great variation across the particular "market settings" of different societies and communities. As a result, we need to be careful not to reify the concept of a market across all settings.

Michiel van Ingen made a different sort of point about my observations about social realism in his comment offered on Facebook. He thinks I am too easy on the natural sciences.

This piece strikes me as problematic. First, because physics is by no means as successful at prediction as it seems to suggest. A lot of physics is explanatorily quite powerful, but - like any other scientific discipline - can only predict in systemically closed systems. Contrasting physics with sociology and political science because the latter 'do not consist of unified deductive systems whose empirical success depends upon a derivation of distant observational consequences' is therefore unnecessarily dualistic. In addition, I'm not sure why the 'inference to the best explanation' element should be tied to predictive success as closely as it is in this piece. Inference to the best explanation is, by its very definition, perfectly applicable to EXPLANATION. And this applies across the sciences, whether 'natural' or 'social', though of course there is a significant difference between those sciences in which experimentation is plausible and helpful, and those in which it is not. This is not, by the way, the same as saying that natural sciences are experimental and social ones aren't. There are plenty of natural sciences which are largely non-experimental as well. And lest we forget, the hypothetico-deductive form of explanation DOES NOT WORK IN THE NATURAL SCIENCES EITHER!

This critique comes from the general idea that the natural sciences need a bit of debunking, in that various areas of natural science fail to live up to the positivist ideal of a precise predictive system of laws. That is fair enough; there are areas of imprecision and uncertainty in the natural sciences. But, as I responded to Delong above, the fact remains that we have a very good understanding of much of the physical realities and mechanisms that generate the phenomena we live with. Here is the response I offered Michiel:

Thank you, Michiel, for responding so thoughtfully. Your comments and qualifications about the natural sciences are correct, of course, in a number of ways. But really, I think we post-positivists need to recognize that the core areas of fundamental and classical physics, electromagnetic theory, gravitation theory, and chemistry including molecular biology, are remarkably successful in unifying, predicting, and explaining the phenomena within these domains. They are successful because extensive and mathematicized theories have been developed and extended, empirically tested, refined, and deployed to help account for new phenomena. And these theories, as big chunks, make assertions about the way nature works. This is where realism comes in: the chunks of theories about the nature of the atom, electromagnetic forces, gravitation, etc., can be understood to be approximately true of nature because otherwise we would have no way to account for the remarkable ability of these theories to handle new phenomena.

So I haven't been persuaded to change my mind about social realism as a result of these various comments. The grounds for realism about social processes, structures, and powers are different for many social sciences than for many natural sciences. We can probe quite a bit of the social world through mid-level and piecemeal research methods -- which means that we can learn quite a bit about the nature of the social world through these methods. Here is the key finding:

So it seems that we can justify being realists about class, field, habitus, market, coalition, ideology, organization, value system, ethnic identity, institution, and charisma, without relying at all on the hypothetico-deductive model of scientific knowledge upon which the "inference to the best explanation" argument depends. We can look at sociology and political science as loose ensembles of empirically informed theories and models of meso-level social processes and mechanisms, each of which is to a large degree independently verifiable. And this implies that social realism should be focused on mid-level social mechanisms and processes that can be identified in the domains of social phenomena that we have studied rather than sweeping concepts of social structures and entities.

(Sometimes social media debates give the impression of a nineteenth-century parliamentary shouting match -- which is why the Daumier drawing came to mind!)

Thursday, August 27, 2015

'The Day Macroeconomics Changed'

Simon Wren-Lewis:

The day macroeconomics changed: It is of course ludicrous, but who cares. The day of the Boston Fed conference in 1978 is fast taking on a symbolic significance. It is the day that Lucas and Sargent changed how macroeconomics was done. Or, if you are Paul Romer, it is the day that the old guard spurned the ideas of the newcomers, and ensured we had a New Classical revolution in macro rather than a New Classical evolution. Or if you are Ray Fair..., who was at the conference, it is the day that macroeconomics started to go wrong.
Ray Fair is a bit of a hero of mine. ...
I agree with Ray Fair that what he calls Cowles Commission (CC) type models, and I call Structural Econometric Model (SEM) type models, together with the single equation econometric estimation that lies behind them, still have a lot to offer, and that academic macro should not have turned its back on them. Having spent the last fifteen years working with DSGE models, I am more positive about their role than Fair is. Unlike Fair, I want “more bells and whistles on DSGE models”. I also disagree about rational expectations...
Three years ago, when Andy Haldane suggested that DSGE models were partly to blame for the financial crisis, I wrote a post that was critical of Haldane. What I thought then, and continue to believe, is that the Bank had the information and resources to know what was happening to bank leverage, and it should not be using DSGE models as an excuse for not being more public about their concerns at the time.
However, if we broaden this out from the Bank to the wider academic community, I think he has a legitimate point. ...
What about the claim that only internally consistent DSGE models can give reliable policy advice? For another project, I have been rereading an AEJ Macro paper written in 2008 by Chari et al, where they argue that New Keynesian models are not yet useful for policy analysis because they are not properly microfounded. They write “One tradition, which we prefer, is to keep the model very simple, keep the number of parameters small and well-motivated by micro facts, and put up with the reality that such a model neither can nor should fit most aspects of the data. Such a model can still be very useful in clarifying how to think about policy.” That is where you end up if you take a purist view about internal consistency, the Lucas critique and all that. It in essence amounts to the following approach: if I cannot understand something, it is best to assume it does not exist.

Wednesday, August 26, 2015

Ray Fair: The Future of Macro

Ray Fair:

The Future of Macro: There is an interesting set of recent blogs--- Paul Romer 1, Paul Romer 2, Brad DeLong, Paul Krugman, Simon Wren-Lewis, and Robert Waldmann---on the history of macro beginning with the 1978 Boston Fed conference, with Lucas and Sargent versus Solow. As Romer notes, I was at this conference and presented a 97-equation model. This model was in the Cowles Commission (CC) tradition, which, as the blogs note, quickly went out of fashion after 1978. (In the blogs, models in the CC tradition are generally called simulation models or structural econometric models or old fashioned models. Below I will call them CC models.)
I will not weigh in on who was responsible for what. Instead, I want to focus on what future direction macro research might take. There is unhappiness in the blogs, to varying degrees, with all three types of models: DSGE, VAR, CC. Also, Wren-Lewis points out that while other areas of economics have become more empirical over time, macroeconomics has become less. The aim is for internal theoretical consistency rather than the ability to track the data.
I am one of the few academics who has continued to work with CC models. They were rejected for basically three reasons: they do not assume rational expectations (RE), they are not identified, and the theory behind them is ad hoc. This sounds serious, but I think it is in fact not. ...

He goes on to explain why. He concludes with:

... What does this imply about the best course for future research? I don't get a sense from the blog discussions that either the DSGE methodology or the VAR methodology is the way to go. Of course, no one seems to like the CC methodology either, but, as I argue above, I think it has been dismissed too easily. I have three recent methodological papers arguing for its use: Has Macro Progressed?, Reflections on Macroeconometric Modeling, and Information Limits of Aggregate Data. I also show in Household Wealth and Macroeconomic Activity: 2008--2013 that CC models can be used to examine a number of important questions about the 2008--2009 recession, questions that are hard to answer using DSGE or VAR models.
So my suggestion for future macro research is not more bells and whistles on DSGE models, but work specifying and estimating stochastic equations in the CC tradition. Alternative theories can be tested and hopefully progress can be made on building models that explain the data well. We have much more data now and better techniques than we did in 1978, and we should be able to make progress and bring macroeconomics back to its empirical roots.
For those who want more detail, I have gathered all of my research in macro in one place: Macroeconometric Modeling, November 11, 2013.
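To give a concrete sense of what Fair means by specifying and estimating stochastic equations in the CC tradition, here is a minimal sketch in Python. It is not Fair's model, and the data are simulated stand-ins; the point is only the workflow: write down a single behavioral equation suggested by theory (here a toy aggregate consumption function with habit persistence), estimate its parameters, and use the standard errors to test it against alternative specifications.

```python
# A minimal sketch of one CC-style stochastic equation (illustrative only;
# the equation, parameter values, and simulated data are assumptions, not
# Fair's actual specification).
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
T = 200

# Simulated stand-in series for disposable income (y).
y = 100.0 + np.cumsum(rng.normal(0.5, 1.0, T))

# Generate consumption (c) from a "true" structural equation:
#   c_t = 5 + 0.6*c_{t-1} + 0.3*y_t + e_t
c = np.empty(T)
c[0] = 0.9 * y[0]
for t in range(1, T):
    c[t] = 5.0 + 0.6 * c[t - 1] + 0.3 * y[t] + rng.normal(0.0, 1.0)

# Specify and estimate the stochastic equation
#   c_t = a + b*c_{t-1} + g*y_t + e_t
X = sm.add_constant(np.column_stack([c[:-1], y[1:]]))
fit = sm.OLS(c[1:], X).fit()

print(fit.params)  # estimates of (a, b, g)
print(fit.bse)     # standard errors, the basis for testing alternatives
```

In a full CC model this step is repeated for each behavioral equation (the model Fair presented in 1978 had 97 equations), the equations are typically estimated with methods such as two-stage least squares where right-hand-side variables are endogenous, and the estimated system is then solved and simulated to answer policy questions.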

Sunday, August 23, 2015

'Young Economists Feel They Have to be Very Cautious'

From an interview of Paul Romer in the WSJ:

...Q: What kind of feedback have you received from colleagues in the profession?

A: I tried these ideas on a few people, and the reaction I basically got was “don’t make waves.” As people have had time to react, I’ve been hearing a bit more from people who appreciate me bringing these issues to the forefront. The most interesting feedback is from young economists who say that they feel that they have to be very cautious, and they don’t want to get somebody cross at them. There’s a concern by young economists that if they deviate from what’s acceptable, they’ll get in trouble. That also seemed to me to be a sign of something that is really wrong. Young people are the ones who often come in and say, “You all have been thinking about this the wrong way, here’s a better way to think about it.”

Q: Are there any areas where research or refinements in methodology have brought us closer to understanding the economy?

A: There was an interesting [2013] Nobel prize in [economics], where they gave the prize to people who generally came to very different conclusions about how financial markets work. Gene Fama ... got it for the efficient markets hypothesis. Robert Shiller ... for this view that these markets are not efficient...

It was striking because usually when you give a prize, it’s because in the sciences, you’ve converged to a consensus. ...

Friday, August 21, 2015

'Scientists Do Not Demonize Dissenters. Nor Do They Worship Heroes.'

Paul Romer's latest entry on "mathiness" in economics ends with:

Reactions to Solow’s Choice: ...Politics maps directly onto our innate moral machinery. Faced with any disagreement, our moral systems respond by classifying people into our in-group and the out-group. They encourage us to be loyal to members of the in-group and hostile to members of the out-group. The leaders of an in-group demand deference and respect. In selecting leaders, we prize unwavering conviction.
Science can’t function with the personalization of disagreement that these reactions encourage. The question of whether Joan Robinson is someone who is admired and respected as a scientist has to be separated from the question about whether she was right that economists could reason about rates of return in a model that does not have an explicit time dimension.
The only in-group versus out-group distinction that matters in science is the one that distinguishes people who can live by the norms of science from those who cannot. Feynman integrity is the marker of an insider.
In this group, it is flexibility that commands respect, not unwavering conviction. Clearly articulated disagreement is encouraged. Anyone’s claim is subject to challenge. Someone who is right about A can be wrong about B.
Scientists do not demonize dissenters. Nor do they worship heroes.

[The reference to Joan Robinson is clarified in the full text.]

Monday, August 17, 2015

Stiglitz: Towards a General Theory of Deep Downturns

This is the abstract, introduction, and final section of a recent paper by Joe Stiglitz on theoretical models of deep depressions (as he notes, it's "an extension of the Presidential Address to the International Economic Association"):

Towards a General Theory of Deep Downturns, by Joseph E. Stiglitz, NBER Working Paper No. 21444, August 2015: Abstract This paper, an extension of the Presidential Address to the International Economic Association, evaluates alternative strands of macro-economics in terms of the three basic questions posed by deep downturns: What is the source of large perturbations? How can we explain the magnitude of volatility? How do we explain persistence? The paper argues that while real business cycles and New Keynesian theories with nominal rigidities may help explain certain historical episodes, alternative strands of New Keynesian economics focusing on financial market imperfections, credit, and real rigidities provide a more convincing interpretation of deep downturns, such as the Great Depression and the Great Recession, giving a more plausible explanation of the origins of downturns, their depth and duration. Since excessive credit expansions have preceded many deep downturns, particularly important is an understanding of finance, the credit creation process and banking, which in a modern economy are markedly different from the way envisioned in more traditional models.
Introduction The world has been plagued by episodic deep downturns. The crisis that began in 2008 in the United States was the most recent, the deepest and longest in three quarters of a century. It came in spite of alleged “better” knowledge of how our economic system works, and belief among many that we had put economic fluctuations behind us. Our economic leaders touted the achievement of the Great Moderation.[2] As it turned out, belief in those models actually contributed to the crisis. It was the assumption that markets were efficient and self-regulating and that economic actors had the ability and incentives to manage their own risks that had led to the belief that self-regulation was all that was required to ensure that the financial system worked well, and that there was no need to worry about a bubble. The idea that the economy could, through diversification, effectively eliminate risk contributed to complacency — even after it was evident that there had been a bubble. Indeed, even after the bubble broke, Bernanke could boast that the risks were contained.[3] These beliefs were supported by (pre-crisis) DSGE models — models which may have done well in more normal times, but had little to say about crises. Of course, almost any “decent” model would do reasonably well in normal times. And it mattered little if, in normal times, one model did a slightly better job in predicting next quarter’s growth. What matters is predicting — and preventing — crises, episodes in which there is an enormous loss in well-being. These models did not see the crisis coming, and they had given confidence to our policy makers that, so long as inflation was contained — and monetary authorities boasted that they had done this — the economy would perform well. At best, they can be thought of as (borrowing the term from Guzman (2014)) “models of the Great Moderation,” predicting “well” so long as nothing unusual happens. More generally, the DSGE models have done a poor job explaining the actual frequency of crises.[4]
Of course, deep downturns have marked capitalist economies since the beginning. It took enormous hubris to believe that the economic forces which had given rise to crises in the past were either not present, or had been tamed, through sound monetary and fiscal policy.[5] It took even greater hubris given that in many countries conservatives had succeeded in dismantling the regulatory regimes and automatic stabilizers that had helped prevent crises since the Great Depression. It is noteworthy that my teacher, Charles Kindleberger, in his great study of the booms and panics that afflicted market economies over the past several hundred years, had noted similar hubris exhibited in earlier crises (Kindleberger, 1978).
Those who attempted to defend the failed economic models and the policies which were derived from them suggested that no model could (or should) predict well a “once in a hundred year flood.” But it was not just a hundred-year flood — crises have become common. It was not just something that had happened to the economy. The crisis was man-made — created by the economic system. Clearly, something is wrong with the models.
Studying crises is important, not just to prevent these calamities and to understand how to respond to them — though I do believe that the same inadequate models that failed to predict the crisis also failed in providing adequate responses. (Although those in the US Administration boast about having prevented another Great Depression, I believe the downturn was certainly far longer, and probably far deeper, than it needed to be.) I also believe understanding the dynamics of crises can provide us with insight into the behavior of our economic system in less extreme times.
This lecture consists of three parts. In the first, I will outline the three basic questions posed by deep downturns. In the second, I will sketch the three alternative approaches that have competed with each other over the past three decades, suggesting that one is a far better basis for future research than the other two. The final section will center on one aspect of that third approach that I believe is crucial — credit. I focus on the capitalist economy as a credit economy, and how viewing it in this way changes our understanding of the financial system and monetary policy. ...

He concludes with:

IV. The crisis in economics The 2008 crisis was not only a crisis in the economy, but it was also a crisis for economics — or at least that should have been the case. As we have noted, the standard models didn’t do very well. The criticism is not just that the models did not anticipate or predict the crisis (even shortly before it occurred); they did not contemplate the possibility of a crisis, or at least a crisis of this sort. Because markets were supposed to be efficient, there weren’t supposed to be bubbles. The shocks to the economy were supposed to be exogenous: this one was created by the market itself. Thus, the standard model said the crisis couldn’t or wouldn’t happen; and the standard model had no insights into what generated it.
Not surprisingly, as we again have noted, the standard models provided inadequate guidance on how to respond. Even after the bubble broke, it was argued that diversification of risk meant that the macroeconomic consequences would be limited. The standard theory also has had little to say about why the downturn has been so prolonged: Years after the onset of the crisis, large parts of the world are operating well below their potential. In some countries, and in some dimensions, the downturn is as bad as or worse than the Great Depression. Moreover, there is a risk of significant hysteresis effects from protracted unemployment, especially of youth.
The Real Business Cycle and New Keynesian Theories got off to a bad start. They originated out of work undertaken in the 1970s attempting to reconcile the two seemingly distant branches of economics: macroeconomics, centering on explaining the major market failure of unemployment, and microeconomics, the centerpiece of which was the Fundamental Theorems of Welfare Economics, demonstrating the efficiency of markets.[66] Real Business Cycle Theory (and its predecessor, New Classical Economics) took one route: using the assumptions of standard micro-economics to construct an analysis of the aggregative behavior of the economy. In doing so, they left Hamlet out of the play: almost by assumption, unemployment and other market failures didn’t exist. The timing of their work couldn’t have been worse: for it was just around the same time that economists developed alternative micro-theories, based on asymmetric information, game theory, and behavioral economics, which provided better explanations of a wide range of micro-behavior than did the traditional theory on which the “new macroeconomics” was being constructed. At the same time, Sonnenschein (1972) and Mantel (1974) showed that the standard theory provided essentially no structure for macroeconomics — essentially any demand or supply function could have been generated by a set of diverse rational consumers. It was the unrealistic assumption of the representative agent that gave theoretical structure to the macroeconomic models that were being developed. (As we noted, New Keynesian DSGE models were but a simple variant of these Real Business Cycles, assuming nominal wage and price rigidities — with explanations, we have suggested, that were hardly persuasive.)
There are alternative models to both Real Business Cycles and the New Keynesian DSGE models that provide better insights into the functioning of the macroeconomy, and are more consistent with micro-behavior, with new developments in micro-economics, and with what has happened in this and other deep downturns. While these new models differ from the older ones in a multitude of ways, at the center of these models is a wide variety of financial market imperfections and a deep analysis of the process of credit creation. These models provide alternative (and I believe better) insights into what kinds of macroeconomic policies would restore the economy to prosperity and maintain macro-stability.
This lecture has attempted to sketch some elements of these alternative approaches. There is a rich research agenda ahead.
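As an aside for readers who want the formal content behind Stiglitz's remark that "essentially any demand or supply function could have been generated by a set of diverse rational consumers," the result he is pointing to is usually stated roughly as follows (an informal sketch of the Sonnenschein-Mantel-Debreu theorem, not a quotation from the paper):

```latex
% Informal statement of the Sonnenschein-Mantel-Debreu result.
\noindent\textbf{Sonnenschein--Mantel--Debreu (informal).}
Let $z(p)$ be any candidate aggregate excess demand function, defined for
price vectors $p$ bounded away from the boundary of the price simplex.
If $z$ is continuous, homogeneous of degree zero,
\[
  z(\lambda p) = z(p) \qquad \text{for all } \lambda > 0,
\]
and satisfies Walras' law,
\[
  p \cdot z(p) = 0,
\]
then there exists an exchange economy with $n$ rational, utility-maximizing
consumers ($n$ = the number of goods) whose aggregate excess demand
coincides with $z$ on that price domain.
```

In other words, individual rationality places almost no restrictions on aggregate excess demand, which is the sense in which, as Stiglitz says, it was the representative-agent assumption rather than micro theory itself that gave theoretical structure to the macroeconomic models being developed.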

Tuesday, August 11, 2015

Macroeconomics: The Roads Not Yet Taken

My editor suggested that I might want to write about an article in New Scientist, After the crash, can biologists fix economics?, so I did:

Macroeconomics: The Roads Not Yet Taken: Anyone who is even vaguely familiar with economics knows that modern macroeconomic models did not fare well before and during the Great Recession. For example, when the recession hit, many of us reached into the policy response toolkit provided by modern macro models and came up mostly empty.
The problem was that modern models were built to explain a period of mild economic fluctuations known as the Great Moderation, and while the models provided very good policy advice in that setting they had little to offer in response to major economic downturns. That changed to some extent as the recession dragged on and modern models were quickly amended to incorporate important missing elements, but even then the policy advice was far from satisfactory and mostly echoed what we already knew from the “old-fashioned” Keynesian model. (The Keynesian model was built to answer the important policy questions that come with major economic downturns, so it is not surprising that amended modern models reached many of the same conclusions.)
How can we fix modern models? ...

The Macroeconomic Divide

Paul Krugman:

Trash Talk and the Macroeconomic Divide: ... In Lucas and Sargent, much is made of stagflation; the coexistence of inflation and high unemployment is their main, indeed pretty much only, piece of evidence that all of Keynesian economics is useless. That was wrong, but never mind; how did they respond in the face of strong evidence that their own approach didn’t work?
Such evidence wasn’t long in coming. In the early 1980s the Federal Reserve sharply tightened monetary policy; it did so openly, with much public discussion, and anyone who opened a newspaper should have been aware of what was happening. The clear implication of Lucas-type models was that such an announced, well-understood monetary change should have had no real effect, being reflected only in the price level.
In fact, however, there was a very severe recession — and a dramatic recovery once the Fed, again quite openly, shifted toward monetary expansion.
These events definitely showed that Lucas-type models were wrong, and also that anticipated monetary shocks have real effects. But there was no reconsideration on the part of the freshwater economists; my guess is that they were in part trapped by their earlier trash-talking. Instead, they plunged into real business cycle theory (which had no explanation for the obvious real effects of Fed policy) and shut themselves off from outside ideas. ...
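For readers who want the mechanics behind Krugman's claim, the reasoning can be compressed into a single equation. A stylized Lucas-type ("surprise") aggregate supply relation, sketched here in textbook form rather than taken from any particular paper, has output deviate from its natural rate only when the price level differs from what agents expected:

```latex
% Stylized Lucas-type aggregate supply relation (textbook form).
% y_t: (log) output, y^{*}: natural rate, p_t: (log) price level,
% E_{t-1}[p_t]: the price level agents expected as of period t-1.
\[
  y_t = y^{*} + \beta \left( p_t - E_{t-1}[p_t] \right), \qquad \beta > 0 .
\]
```

Under rational expectations, a monetary tightening that is announced and widely understood is already incorporated into the expected price level, so the surprise term is roughly zero and output should not move; the policy change should show up in prices alone. The severe early-1980s recession that Krugman describes is exactly what that prediction failed to deliver.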