Category Archive for: Methodology

Tuesday, August 19, 2014

The Agent-Based Method

Rajiv Sethi:

The Agent-Based Method: It's nice to see some attention being paid to agent-based computational models on economics blogs, but Chris House has managed to misrepresent the methodology so completely that his post is likely to do more harm than good. 

In comparing the agent-based method to the more standard dynamic stochastic general equilibrium (DSGE) approach, House begins as follows:

Probably the most important distinguishing feature is that, in an ABM, the interactions are governed by rules of behavior that the modeler simply encodes directly into the system for the individuals who populate the environment.

So far so good, although I would not have used the qualifier "simply", since encoded rules can be highly complex. For instance, an ABM that seeks to describe the trading process in an asset market may have multiple participant types (liquidity, information, and high-frequency traders for instance) and some of these may be using extremely sophisticated strategies.

How does this approach compare with DSGE models? House argues that the key difference lies in assumptions about rationality and self-interest:

People who write down DSGE models don’t do that. Instead, they make assumptions on what people want. They also place assumptions on the constraints people face. Based on the combination of goals and constraints, the behavior is derived. The reason that economists set up their theories this way – by making assumptions about goals and then drawing conclusions about behavior – is that they are following in the central tradition of all of economics, namely that allocations and decisions and choices are guided by self-interest. This goes all the way back to Adam Smith and it’s the organizing philosophy of all economics. Decisions and actions in such an environment are all made with an eye towards achieving some goal or some objective. For consumers this is typically utility maximization – a purely subjective assessment of well-being.  For firms, the objective is typically profit maximization. This is exactly where rationality enters into economics. Rationality means that the “agents” that inhabit an economic system make choices based on their own preferences.

This, to say the least, is grossly misleading. The rules encoded in an ABM could easily specify what individuals want and then proceed from there. For instance, we could start from the premise that our high-frequency traders want to maximize profits. They can only do this by submitting orders of various types, the consequences of which will depend on the orders placed by others. Each agent can have a highly sophisticated strategy that maps historical data, including the current order book, into new orders. The strategy can be sensitive to beliefs about the stream of income that will be derived from ownership of the asset over a given horizon, and may also be sensitive to beliefs about the strategies in use by others. Agents can be as sophisticated and forward-looking in their pursuit of self-interest in an ABM as you care to make them; they can even be set up to make choices based on solutions to dynamic programming problems, provided that these are based on private beliefs about the future that change endogenously over time. 

What you cannot have in an ABM is the assumption that, from the outset, individual plans are mutually consistent. That is, you cannot simply assume that the economy is tracing out an equilibrium path. The agent-based approach is at heart a model of disequilibrium dynamics, in which the mutual consistency of plans, if it arises at all, has to do so endogenously through a clearly specified adjustment process. This is the key difference between the ABM and DSGE approaches, and it's right there in the acronym of the latter.

A typical (though not universal) feature of agent-based models is an evolutionary process that allows successful strategies to proliferate over time at the expense of less successful ones. Since success itself is frequency-dependent---the payoffs to a strategy depend on the prevailing distribution of strategies in the population---we have strong feedback between behavior and environment. Returning to the example of trading, an arbitrage-based strategy may be highly profitable when rare but much less so when prevalent. This rich feedback between environment and behavior, with the distribution of strategies determining the environment faced by each, and the payoffs to each strategy determining changes in their composition, is a fundamental feature of agent-based models. In failing to understand this, House makes claims that are close to being the opposite of the truth: 

Ironically, eliminating rational behavior also eliminates an important source of feedback – namely the feedback from the environment to behavior.  This type of two-way feedback is prevalent in economics and it’s why equilibria of economic models are often the solutions to fixed-point mappings. Agents make choices based on the features of the economy.  The features of the economy in turn depend on the choices of the agents. This gives us a circularity which needs to be resolved in standard models. This circularity is cut in the ABMs however since the choice functions do not depend on the environment. This is somewhat ironic since many of the critics of economics stress such feedback loops as important mechanisms.

It is absolutely true that dynamics in agent-based models do not require the computation of fixed points, but this is a strength rather than a weakness, and has nothing to do with the absence of feedback effects. These effects arise dynamically in calendar time, not through some mystical process by which coordination is instantaneously achieved and continuously maintained. 
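The frequency-dependent feedback described above can be sketched in a few lines. The payoff function, step size, and adjustment rule below are illustrative assumptions of mine, not taken from any particular model: an "arbitrage" strategy is profitable when rare and unprofitable when prevalent, and successful strategies proliferate.

```python
def arbitrage_payoff(share):
    """Hypothetical payoff: positive below 50% prevalence, negative above."""
    return 1.0 - 2.0 * share

def replicator_step(share, dt=0.1):
    """Successful strategies proliferate at the expense of less successful ones;
    the alternative ("passive") strategy's payoff is normalized to zero."""
    advantage = arbitrage_payoff(share) - 0.0
    share += dt * share * (1.0 - share) * advantage
    return min(max(share, 0.0), 1.0)

share = 0.05  # arbitrageurs start out rare...
for _ in range(200):
    share = replicator_step(share)

# ...and the population settles where the payoff advantage vanishes (50% here):
# mutual consistency, when it arises at all, arises through the adjustment
# process itself, in calendar time, with no fixed point computed in advance.
print(round(share, 3))
```

Nothing here requires that plans be mutually consistent at the outset; the composition of strategies and the environment they face co-evolve until (and only if) they settle down.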

It's worth thinking about how the learning literature in macroeconomics, dating back to Marcet and Sargent and substantially advanced by Evans and Honkapohja, fits into this schema. Such learning models drop the assumption that beliefs continuously satisfy mutual consistency, and therefore take a small step towards the ABM approach. But it really is a small step, since a great deal of coordination continues to be assumed. For instance, in the canonical learning model, there is a parameter about which learning occurs, and the system is self-referential in that beliefs about the parameter determine its realized value. This allows for the possibility that individuals may hold incorrect beliefs, but limits quite severely---and more importantly, exogenously---the structure of such errors. This is done for understandable reasons of tractability, and allows for analytical solutions and convergence results to be obtained. But there is way too much coordination in beliefs across individuals assumed for this to be considered part of the ABM family.

The title of House's post asks (in response to an earlier piece by Mark Buchanan) whether agent-based models really are the future of the discipline. I have argued previously that they are enormously promising, but face one major methodological obstacle that needs to be overcome. This is the problem of quality control: unlike papers in empirical fields (where causal identification is paramount) or in theory (where robustness is key), there is no set of criteria, widely agreed upon, that can allow a referee to determine whether a given set of simulation results provides a deep and generalizable insight into the workings of the economy. One of the most celebrated agent-based models in economics---the Schelling segregation model---is also among the very earliest. Effective and acclaimed recent exemplars are in short supply, though there is certainly research effort at the highest levels pointed in this direction. The claim that such models can displace the equilibrium approach entirely is much too grandiose, but they should be able to find ample space alongside more orthodox approaches in time. 
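Since the Schelling segregation model comes up so often as the canonical ABM, a minimal sketch may be useful. The grid size, vacancy rate, tolerance threshold, and movement rule below are illustrative choices of mine, not Schelling's original specification:

```python
import random

random.seed(0)
SIZE, THRESHOLD = 20, 0.3  # 20x20 torus; agents want >= 30% same-type neighbors

# 0 = empty cell, 1 and 2 = the two agent types (roughly a third of cells empty)
grid = [[random.choice([0, 1, 2]) for _ in range(SIZE)] for _ in range(SIZE)]

def neighbor_mix(r, c, me):
    """Count occupied neighbors of cell (r, c) and how many share type `me`."""
    same = total = 0
    for dr in (-1, 0, 1):
        for dc in (-1, 0, 1):
            if dr == dc == 0:
                continue
            nb = grid[(r + dr) % SIZE][(c + dc) % SIZE]
            if nb:
                total += 1
                same += nb == me
    return same, total

def content(r, c, me):
    """An agent of type `me` accepts (r, c) if enough neighbors share its type."""
    same, total = neighbor_mix(r, c, me)
    return total == 0 or same / total >= THRESHOLD

def step():
    """Move each discontented agent to a random empty cell it would accept."""
    movers = [(r, c) for r in range(SIZE) for c in range(SIZE)
              if grid[r][c] and not content(r, c, grid[r][c])]
    empties = [(r, c) for r in range(SIZE) for c in range(SIZE) if not grid[r][c]]
    moves = 0
    for r, c in movers:
        me = grid[r][c]
        acceptable = [e for e in empties if content(e[0], e[1], me)]
        if not acceptable:
            continue
        er, ec = random.choice(acceptable)
        empties.remove((er, ec))
        empties.append((r, c))
        grid[er][ec], grid[r][c] = me, 0
        moves += 1
    return moves

before = sum(not content(r, c, grid[r][c]) for r in range(SIZE)
             for c in range(SIZE) if grid[r][c])
for _ in range(50):
    if step() == 0:
        break
after = sum(not content(r, c, grid[r][c]) for r in range(SIZE)
            for c in range(SIZE) if grid[r][c])
print(before, after)
```

The famous result is that mild individual preferences (here, a mere 30% tolerance threshold) generate sharp aggregate segregation through decentralized adjustment, with no agent intending that outcome.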

---

The example of interacting trading strategies in this post wasn't pulled out of thin air; market ecology has been a recurrent theme on this blog. In ongoing work with Yeon-Koo Che and Jinwoo Kim, I am exploring the interaction of trading strategies in asset markets, with the goal of addressing some questions about the impact on volatility and welfare of high-frequency trading. We have found the agent-based approach very useful in thinking about these questions, and I'll present some preliminary results at a session on the methodology at the Rethinking Economics conference in New York next month. The event is free and open to the public, but seating is limited and registration is required. 

Wednesday, August 13, 2014

'Unemployment Fluctuations are Mainly Driven by Aggregate Demand Shocks'

Do the facts have a Keynesian bias?:

Using product- and labour-market tightness to understand unemployment, by Pascal Michaillat and Emmanuel Saez, VoxEU: For the five years from December 2008 to November 2013, the US unemployment rate remained above 7%, peaking at 10% in October 2009. This period of high unemployment is not well understood. Macroeconomists have proposed a number of explanations for the extent and persistence of unemployment during the period, including:

  • High mismatch caused by major shocks to the financial and housing sectors,
  • Low search effort from unemployed workers triggered by long extensions of unemployment insurance benefits, and
  • Low aggregate demand caused by a sudden need to repay debts or by pessimism.

No consensus has been reached, however.

In our opinion this lack of consensus is due to a gap in macroeconomic theory: we do not have a model that is rich enough to account for the many factors driving unemployment – including aggregate demand – and simple enough to lend itself to pencil-and-paper analysis. ...

In Michaillat and Saez (2014), we develop a new model of unemployment fluctuations to inspect the mechanisms behind unemployment fluctuations. The model can be seen as an equilibrium version of the Barro-Grossman model. It retains the architecture of the Barro-Grossman model but replaces the disequilibrium framework on the product and labour markets with an equilibrium matching framework. ...

Through the lens of our simple model, the empirical evidence suggests that price and real wage are somewhat rigid, and that unemployment fluctuations are mainly driven by aggregate demand shocks.

Tuesday, August 12, 2014

Why Do Macroeconomists Disagree?

I have a new column:

Why Do Macroeconomists Disagree?, by Mark Thoma, The Fiscal Times: On August 9, 2007, the French bank BNP Paribas halted redemptions from three investment funds active in US mortgage markets due to severe liquidity problems, an event that many mark as the beginning of the financial crisis. Now, just over seven years later, economists still can’t agree on what caused the crisis, why it was so severe, and why the recovery has been so slow. We can’t even agree on the extent to which modern macroeconomic models failed, or if they failed at all.
The lack of a consensus within the profession on the economics of the Great Recession, one of the most significant economic events in recent memory, provides a window into the state of macroeconomics as a science. ...

Monday, August 11, 2014

'Inflation in the Great Recession and New Keynesian Models'

From the NY Fed's Liberty Street Economics:

Inflation in the Great Recession and New Keynesian Models, by Marco Del Negro, Marc Giannoni, Raiden Hasegawa, and Frank Schorfheide: Since the financial crisis of 2007-08 and the Great Recession, many commentators have been baffled by the “missing deflation” in the face of a large and persistent amount of slack in the economy. Some prominent academics have argued that existing models cannot properly account for the evolution of inflation during and following the crisis. For example, in his American Economic Association presidential address, Robert E. Hall called for a fundamental reconsideration of Phillips curve models and their modern incarnation—so-called dynamic stochastic general equilibrium (DSGE) models—in which inflation depends on a measure of slack in economic activity. The argument is that such theories should have predicted more and more disinflation as long as the unemployment rate remained above a natural rate of, say, 6 percent. Since inflation declined somewhat in 2009, and then remained positive, Hall concludes that such theories based on a concept of slack must be wrong.        
In an NBER working paper and a New York Fed staff report (forthcoming in the American Economic Journal: Macroeconomics), we use a standard New Keynesian DSGE model with financial frictions to explain the behavior of output and inflation since the crisis. This model was estimated using data up to 2008. We find that following the increase in financial stress in 2008, the model successfully predicts not only the sharp contraction in economic activity, but also the modest decline in inflation. ...

Wednesday, July 23, 2014

'Wall Street Skips Economics Class'

The discussion continues:

Wall Street Skips Economics Class, by Noah Smith: If you care at all about what academic macroeconomists are cooking up (or if you do any macro investing), you might want to check out the latest economics blog discussion about the big change that happened in the late '70s and early '80s. Here’s a post by the University of Chicago economist John Cochrane, and here’s one by Oxford’s Simon Wren-Lewis that includes links to most of the other contributions.
In case you don’t know the background, here’s the short version...

Saturday, July 19, 2014

'Is Choosing to Believe in Economic Models a Rational Expected-Utility Decision Theory Thing?'

Brad DeLong:

Is Choosing to Believe in Economic Models a Rational Expected-Utility Decision Theory Thing?: I have always understood expected-utility decision theory to be normative, not positive: it is how people ought to behave if they want to achieve their goals in risky environments, not how people do behave. One of the chief purposes of teaching expected-utility decision theory is in fact to make people aware that they really should be risk neutral over small gambles where they do know the probabilities--that they will be happier and achieve more of their goals in the long run if they in fact do so. ...[continue]...

Here's the bottom line:

(6) Given that people aren't rational Bayesian expected utility-theory decision makers, what do economists think that they are doing modeling markets as if they are populated by agents who are? Here there are, I think, three answers:

  • Most economists are clueless, and have not thought about these issues at all.

  • Some economists think that we have developed cognitive institutions and routines in organizations that make organizations expected-utility-theory decision makers even though the individuals within them are not. (Yeah, right: I find this very amusing too.)

  • Some economists admit that the failure of individuals to follow expected-utility decision theory and our inability to build institutions that properly compensate for our cognitive biases (cough, actively-managed mutual funds, anyone?) are among the major sources of market failure in the world today--for one thing, they blow the efficient market hypothesis in finance sky-high.

The fact that so few economists are in the third camp--and that any economists are in the second camp--makes me agree 100% with Andrew Gelman's strictures on economics as akin to Ptolemaic astronomy, in which the fundamentals of the model are "not [first-order] approximations to something real, they’re just fictions..."

Monday, July 14, 2014

Is There a Phillips Curve? If So, Which One?

One place that Paul Krugman and Chris House disagree is on the Phillips curve. Krugman (responding to a post by House) says:

New Keynesians do stuff like one-period-ahead price setting or Calvo pricing, in which prices are revised randomly. Practicing Keynesians have tended to rely on “accelerationist” Phillips curves in which unemployment determined the rate of change rather than the level of inflation.
So what has happened since 2008 is that both of these approaches have been found wanting: inflation has dropped, but stayed positive despite high unemployment. What the data actually look like is an old-fashioned non-expectations Phillips curve. And there are a couple of popular stories about why: downward wage rigidity even in the long run, anchored expectations.

House responds:

What the data actually look like is an old-fashioned non-expectations Phillips curve. 
OK, here is where we disagree. Certainly this is not true for the data overall. It seems like Paul is thinking that the system governing the relationship between inflation and output changes between something with essentially a vertical slope (a “Classical Phillips curve”) and a nearly flat slope (a “Keynesian Phillips Curve”). I doubt that this will fit the data particularly well and it would still seem to open the door to a large role for “supply shocks” – shocks that neither Paul nor I think play a big role in business cycles.

Simon Wren-Lewis also has something to say about this in his post from earlier today, Has the Great Recession killed the traditional Phillips Curve?:

Before the New Classical revolution there was the Friedman/Phelps Phillips Curve (FPPC), which said that current inflation depended on some measure of the output/unemployment gap and the expected value of current inflation (with a unit coefficient). Expectations of inflation were modelled as some function of past inflation (e.g. adaptive expectations) - at its simplest just one lag in inflation. Therefore in practice inflation depended on lagged inflation and the output gap.
After the New Classical revolution came the New Keynesian Phillips Curve (NKPC), which had current inflation depending on some measure of the output/unemployment gap and the expected value of inflation in the next period. If this was combined with adaptive expectations, it would amount to much the same thing as the FPPC, but instead it was normally combined with rational expectations, where agents made their best guess at what inflation would be next period using all relevant information. This would include past inflation, but it would include other things as well, like prospects for output and any official inflation target.
Which better describes the data? ...
[W]e can see why some ... studies (like this for the US) can claim that recent inflation experience is consistent with the NKPC. It seems much more difficult to square this experience with the traditional adaptive expectations Phillips curve. As I suggested at the beginning, this is really a test of whether rational expectations is a better description of reality than adaptive expectations. But I know the conclusion I draw from the data will upset some people, so I look forward to a more sophisticated empirical analysis showing why I’m wrong.

I don't have much to add, except to say that this is an empirical question that will be difficult to resolve (because there are so many ways to estimate a Phillips curve, and different specifications give different answers: which measure of prices to use, which measure of aggregate activity to use, what time period to use and how to handle structural and policy breaks within it, how to extract natural rates from the data, how to handle non-stationarities, whether to exclude the long-term unemployed, as recent research suggests, if aggregate activity is measured with the unemployment rate, how many lags to include, and so on).
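For reference, the two curves Wren-Lewis describes can be written out explicitly. The notation (output gap, slope parameters, discount factor) is mine, not his:

```latex
% Friedman/Phelps Phillips curve (FPPC): inflation depends on the gap and
% expected *current* inflation with a unit coefficient; under one-lag
% adaptive expectations this collapses to lagged inflation plus the gap.
\pi_t = \mathbb{E}\,\pi_t + \alpha \,(y_t - y_t^n), \qquad
\mathbb{E}\,\pi_t = \pi_{t-1}
\;\Longrightarrow\; \pi_t = \pi_{t-1} + \alpha \,(y_t - y_t^n)

% New Keynesian Phillips curve (NKPC): inflation depends on the gap and the
% rationally expected value of *next* period's inflation.
\pi_t = \beta\, \mathbb{E}_t[\pi_{t+1}] + \kappa \,(y_t - y_t^n)
```

The empirical horse race between the two thus turns on whether $\mathbb{E}_t[\pi_{t+1}]$ behaves like lagged inflation (adaptive expectations) or like a forward-looking forecast incorporating targets and output prospects (rational expectations).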

Sunday, July 13, 2014

New Classical Economics as Modeling Strategy

Judy Klein emails a response to a recent post of mine based upon Simon Wren-Lewis's post “Rereading Lucas and Sargent 1979”:

Lucas and Sargent’s “After Keynesian Macroeconomics” was presented at the 1978 Boston Federal Reserve Conference on “After the Phillips Curve: Persistence of High Inflation and High Unemployment.” Although the title of the conference dealt with stagflation, the rational expectations theorists saw themselves countering one technical revolution with another.

The Keynesian Revolution was, in the form in which it succeeded in the United States, a revolution in method. This was not Keynes’s intent, nor is it the view of all of his most eminent followers. Yet if one does not view the revolution in this way, it is impossible to account for some of its most important features: the evolution of macroeconomics into a quantitative, scientific discipline, the development of explicit statistical descriptions of economic behavior, the increasing reliance of government officials on technical economic expertise, and the introduction of the use of mathematical control theory to manage an economy. [Lucas and Sargent, 1979, pg. 50]

The Lucas papers at the Economists' Papers Project at Duke University reveal the preliminary planning for the 1978 presentation. Lucas and Sargent decided that it would be a “rhetorical piece… to convince others that the old-fashioned macro game is up…in a way which makes it clear that the difficulties are fatal”; its theme would be the “death of macroeconomics” and the desirability of replacing it with an “Aggregative Economics” whose foundation was “equilibrium theory.” (Lucas letter to Sargent, February 9, 1978). Their 1978 presentation was replete, as their discussant Bob Solow pointed out, with the planned rhetorical barbs against Keynesian economics of “wildly incorrect," "fundamentally flawed," "wreckage," "failure," "fatal," "of no value," "dire implications," "failure on a grand scale," "spectacular recent failure," "no hope." The empirical backdrop to Lucas and Sargent’s death decree on Keynesian economics was evident in the subtitle of the conference: “Persistence of High Inflation and High Unemployment.”

Although they seized the opportunity to comment on policy failure and the high misery-index economy, Lucas and Sargent shifted the macroeconomic court of judgment from the economy to microeconomics. They fought a technical battle over the types of restrictions used by modelers to identify their structural models. Identification-rendering restrictions were essential to making both the Keynesian and rational expectations models “work” in policy applications, but Lucas and Sargent defined the ultimate terms of success not with regard to a model’s capacity for empirical explanation or achievement of a desirable policy outcome, but rather with regard to the model’s capacity to incorporate optimization and equilibrium – to aggregate consistently rational individuals and cleared markets.

In the macroeconomic history written by the victors, the Keynesian revolution and the rational expectations revolution were both technical revolutions, and one could delineate the sides of the battle line in the second revolution by the nature of the restricting assumptions that enabled the model identification that licensed policy prescription. The rational expectations revolution, however, was also a revolution in the prime referential framework for judging macroeconomic model fitness for going forth and multiplying; the consistency of the assumptions – the equation restrictions – with optimizing microeconomics and mathematical statistical theory, rather than end uses of explaining the economy and empirical statistics, constituted the new paramount selection criteria.

Some of the new classical macroeconomists have been explicit about the narrowness of their revolution. For example, Sargent noted in 2008, “While rational expectations is often thought of as a school of economic thought, it is better regarded as a ubiquitous modeling technique used widely throughout economics.” In an interview with Arjo Klamer in 1983, Robert Townsend asserted that “New classical economics means a modeling strategy.”

It is no coincidence, however, that in this modeling narrative of economic equilibrium crafted in the Cold War era, Adam Smith’s invisible hand morphs into a welfare-maximizing “hypothetical ‘benevolent social planner’” (Lucas, Prescott, Stokey 1989) enforcing a “communism of models” (Sargent 2007) and decreeing to individual agents the mutually consistent rules of action that become the equilibrating driving force. Indeed, a long-term Office of Naval Research grant for “Planning & Control of Industrial Operations” awarded to the Carnegie Institute of Technology’s Graduate School of Industrial Administration had funded Herbert Simon’s articulation of his certainty equivalence theorem and John Muth’s study of rational expectations. It is ironic that a decade-long government planning contract employing Carnegie professors and graduate students underwrote the two key modeling strategies for the Nobel-prize winning demonstration that the rationality of consumers renders government intervention to increase employment unnecessary and harmful.

Friday, July 11, 2014

'Rereading Lucas and Sargent 1979'

Simon Wren-Lewis with a nice follow-up to an earlier discussion:

Rereading Lucas and Sargent 1979: Mainly for macroeconomists and those interested in macroeconomic thought
Following this little interchange (me, Mark Thoma, Paul Krugman, Noah Smith, Robert Waldmann, Arnold Kling), I reread what could be regarded as the New Classical manifesto: Lucas and Sargent’s ‘After Keynesian Macroeconomics’ (hereafter LS). It deserves to be cited as a classic, both for the quality of ideas and the persuasiveness of the writing. It does not seem like something written 35 years ago, which is perhaps an indication of how influential its ideas still are.
What I want to explore is whether this manifesto for the New Classical counter revolution was mainly about stagflation, or whether it was mainly about methodology. LS kick off their article with references to stagflation and the failure of Keynesian theory. A fundamental rethink is required. What follows next is I think crucial. If the counter revolution is all about stagflation, we might expect an account of why conventional theory failed to predict stagflation - the equivalent, perhaps, to the discussion of classical theory in the General Theory. Instead we get something much more general - a discussion of why identification restrictions typically imposed in the structural econometric models (SEMs) of the time are incredible from a theoretical point of view, and an outline of the Lucas critique.
In other words, the essential criticism in LS is methodological: the way empirical macroeconomics has been done since Keynes is flawed. SEMs cannot be trusted as a guide for policy. In only one paragraph do LS try to link this general critique to stagflation...[continue]...

Friday, July 04, 2014

Responses to John Cochrane's Attack on New Keynesian Models

The opening quote from chapter 2 of Mankiw's intermediate macro textbook:

It is a capital mistake to theorize before one has data. Insensibly one begins to twist facts to suit theories, instead of theories to suit facts. — Sherlock Holmes

Or, instead of "before one has data," change it to "It is a capital mistake to theorize without knowledge of the data" and it's a pretty good summary of Paul Krugman's response to John Cochrane:

Macro Debates and the Relevance of Intellectual History: One of the interesting things about the ongoing economic crisis is the way it has demonstrated the importance of historical knowledge. ... But it’s not just economic history that turns out to be extremely relevant; intellectual history — the history of economic thought — turns out to be relevant too.
Consider, in particular, the recent to-and-fro about stagflation and the rise of new classical macroeconomics. You might think that this is just economist navel-gazing; but you’d be wrong.
To see why, consider John Cochrane’s latest. ... Cochrane’s current argument ... effectively depends on the notion that there must have been very good reasons for the rejection of Keynesianism, and that harkening back to old ideas must involve some kind of intellectual regression. And that’s where it’s important — as David Glasner notes — to understand what really happened in the 70s.
The point is that the new classical revolution in macroeconomics was not a classic scientific revolution, in which an old theory failed crucial empirical tests and was supplanted by a new theory that did better. The rejection of Keynes was driven by a quest for purity, not an inability to explain the data — and when the new models clearly failed the test of experience, the new classicals just dug in deeper. They didn’t back down even when people like Chris Sims (pdf), using the very kinds of time-series methods they introduced, found that they strongly pointed to a demand-side model of economic fluctuations.
And critiques like Cochrane’s continue to show a curious lack of interest in evidence. ... In short, you have a much better sense of what’s really going on here, and which ideas remain relevant, if you know about the unhappy history of macroeconomic thought.

Nick Rowe:

Insufficient Demand vs?? Uncertainty: ...John Cochrane says: "John Taylor, Stanford's Nick Bloom and Chicago Booth's Steve Davis see the uncertainty induced by seat-of-the-pants policy at fault. Who wants to hire, lend or invest when the next stroke of the presidential pen or Justice Department witch hunt can undo all the hard work? Ed Prescott emphasizes large distorting taxes and intrusive regulations. The University of Chicago's Casey Mulligan deconstructs the unintended disincentives of social programs. And so forth. These problems did not cause the recession. But they are worse now, and they can impede recovery and retard growth." ...
Increased political uncertainty would reduce aggregate demand. Plus, positive feedback processes could amplify that initial reduction in aggregate demand. Even those who were not directly affected by that increased political uncertainty would reduce their own willingness to hire, lend, or invest because of that initial reduction in aggregate demand, plus their own uncertainty about aggregate demand. So the average person or firm might respond to a survey by saying that insufficient demand was the problem in their particular case, and not the political uncertainty which caused it.
But the demand-side problem could still be prevented by an appropriate monetary policy response. Sure, there would be supply-side effects too. And it would be very hard empirically to estimate the relative magnitudes of those demand-side vs supply-side effects. ...
So it's not just an either/or thing. Nor is it even a bit-of-one-plus-bit-of-the-other thing. Increased political uncertainty can cause a recession via its effect on demand. Unless monetary policy responds appropriately. (And that, of course, would mean targeting NGDP, because inflation targeting doesn't work when supply-side shocks cause adverse shifts in the Short Run Phillips Curve.)

On whether supply or demand shocks are the source of aggregate fluctuations, Blanchard and Quah (1989), Shapiro and Watson (1988), and others had it right (though the identifying restriction that aggregate demand shocks do not have permanent effects seems to be undermined by the Great Recession). It's not an either/or question; it's a matter of figuring out how much of the variation in GDP/employment is due to supply shocks, and how much is due to demand shocks. And as Nick Rowe points out with his example, sorting between these two causes can be very difficult -- identifying which type of shock is driving changes in aggregate variables is not at all easy and depends upon particular assumptions. Nevertheless, my reading of the empirical evidence is much like Krugman's. Overall, across all these papers, it is demand shocks that play the most prominent role. Supply shocks do matter, but not nearly so much as demand shocks when it comes to explaining aggregate fluctuations.

Saturday, June 28, 2014

The Rise and Fall of the New Classical Model

Simon Wren-Lewis (my comments are at the end):

Understanding the New Classical revolution: In the account of the history of macroeconomic thought I gave here, the New Classical counter revolution was both methodological and ideological in nature. It was successful, I suggested, because too many economists were unhappy with the gulf between the methodology used in much of microeconomics, and the methodology of macroeconomics at the time.
There is a much simpler reading. Just as the original Keynesian revolution was caused by massive empirical failure (the Great Depression), the New Classical revolution was caused by the Keynesian failure of the 1970s: stagflation. An example of this reading is in this piece by the philosopher Alex Rosenberg (HT Diane Coyle). He writes: “Back then it was the New Classical macrotheory that gave the right answers and explained what the matter with the Keynesian models was.”
I just do not think that is right. Stagflation is very easily explained: you just need an ‘accelerationist’ Phillips curve (i.e. where the coefficient on expected inflation is one), plus a period in which monetary policymakers systematically underestimate the natural rate of unemployment. You do not need rational expectations, or any of the other innovations introduced by New Classical economists.
No doubt the inflation of the 1970s made the macroeconomic status quo unattractive. But I do not think the basic appeal of New Classical ideas lay in their better predictive ability. The attraction of rational expectations was not that it explained actual expectations data better than some form of adaptive scheme. Instead it just seemed more consistent with the general idea of rationality that economists used all the time. Ricardian Equivalence was not successful because the data revealed that tax cuts had no impact on consumption - in fact study after study has shown that tax cuts do have a significant impact on consumption.
Stagflation did not kill IS-LM. In fact, because empirical validity was so central to the methodology of macroeconomics at the time, it adapted to stagflation very quickly. This gave a boost to the policy of monetarism, but this used the same IS-LM framework. If you want to find the decisive event that led to New Classical economists winning their counterrevolution, it was the theoretical realisation that if expectations were rational, but inflation was described by an accelerationist Phillips curve with expectations about current inflation on the right hand side, then deviations from the natural rate had to be random. The fatal flaw in the Keynesian/Monetarist theory of the 1970s was theoretical rather than empirical.
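Both of Wren-Lewis's claims can be made concrete with a standard formalization (my notation, not his). Write the accelerationist Phillips curve, with a unit coefficient on expected inflation, as

```latex
\pi_t = \mathbb{E}_{t-1}\pi_t - \alpha\,(u_t - u^n_t) + \varepsilon_t, \qquad \alpha > 0.
```

Under adaptive expectations ($\mathbb{E}_{t-1}\pi_t = \pi_{t-1}$), a policymaker who underestimates the natural rate and holds $u_t$ below the true $u^n_t$ gets $\pi_t - \pi_{t-1} = \alpha(u^n_t - u_t) > 0$ period after period: inflation ratchets up while unemployment remains high by historical standards, which is stagflation with no rational expectations required. Under rational expectations, by contrast, taking $\mathbb{E}_{t-1}$ of both sides gives $\mathbb{E}_{t-1}u_t = u^n_t$, so $u_t - u^n_t = -\tfrac{1}{\alpha}\left(\pi_t - \mathbb{E}_{t-1}\pi_t\right) + \tfrac{1}{\alpha}\varepsilon_t$ is an unforecastable error. That is the "deviations from the natural rate had to be random" result the passage identifies as the decisive theoretical blow.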

I agree with this, so let me add to it by talking about what led to the end of the New Classical revolution (see here for a discussion of the properties of New Classical, New Keynesian, and Real Business Cycle Models). The biggest factor was empirical validity. Although some versions of the New Classical model allowed monetary non-neutrality (e.g. King 1982, JPE), when three features are present (continuous market clearing, rational expectations, and the natural rate hypothesis), monetary neutrality generally follows in these models. Initially, work by people like Barro found strong support for the prediction of these models that only unanticipated changes in monetary policy can affect real variables like output, but subsequent work, and eventually the weight of the evidence, pointed in the other direction. Both expected and unexpected changes in the money supply appeared to matter, in contrast to a key prediction of the New Classical framework.

A second factor that worked against New Classical models is that they had difficulty explaining both the duration and magnitude of actual business cycles. If the reaction to an unexpected policy shock was focused in a single period, the magnitude could be matched, but not the duration. If the shock was spread over 3-5 years to match the duration, the magnitude of cycles could not be matched. Movements in macroeconomic variables arising from informational errors (unexpected policy shocks) did not have enough "power" to capture both aspects of actual business cycles.

A third factor that worked against these models was that, within them, information problems generated swings in GDP and employment that were costly in the aggregate, yet no markets for information appeared to resolve the problem. For those who believe in the power of markets, and many proponents of New Classical models were also market fundamentalists, the absence of markets for information was a problem.

The New Classical model had displaced the Keynesian model for the reasons highlighted above, but the failure of the New Classical model left the door open for the New Keynesian model to emerge (it appeared to be more consistent with the empirical evidence on the effects of changes in the money supply, and in other areas as well, e.g. the correlation between productivity and economic activity).

But while the New Classical revolution was relatively short-lived as macro models go, it left two important legacies: rational expectations and microfoundations (as well as better knowledge of how non-neutralities might arise, in essence the New Keynesian model drops continuous market clearing through the assumption of short-run price rigidities, and of how to model information sets). Rightly or wrongly, all subsequent models had to have these two elements present within them (rational expectations and microfoundations), or they would be dismissed.

Thursday, June 26, 2014

Why DSGEs Crash During Crises

David Hendry and Grayham Mizon with an important point about DSGE models:

Why DSGEs crash during crises, by David F. Hendry and Grayham E. Mizon: Many central banks rely on dynamic stochastic general equilibrium models – known as DSGEs to cognoscenti. This column – which is more technical than most Vox columns – argues that the models’ mathematical basis fails when crises shift the underlying distributions of shocks. Specifically, the linchpin ‘law of iterated expectations’ fails, so economic analyses involving conditional expectations and inter-temporal derivations also fail. Like a fire station that automatically burns down whenever a big fire starts, DSGEs become unreliable when they are most needed.

Here's the introduction:

In most aspects of their lives humans must plan forwards. They take decisions today that affect their future in complex interactions with the decisions of others. When taking such decisions, the available information is only ever a subset of the universe of past and present information, as no individual or group of individuals can be aware of all the relevant information. Hence, views or expectations about the future, relevant for their decisions, use a partial information set, formally expressed as a conditional expectation given the available information.
Moreover, all such views are predicated on there being no unanticipated future changes in the environment pertinent to the decision. This is formally captured in the concept of ‘stationarity’. Without stationarity, good outcomes based on conditional expectations could not be achieved consistently. Fortunately, there are periods of stability when insights into the way that past events unfolded can assist in planning for the future.
The world, however, is far from completely stationary. Unanticipated events occur, and they cannot be dealt with using standard data-transformation techniques such as differencing, or by taking linear combinations, or ratios. In particular, ‘extrinsic unpredictability’ – unpredicted shifts of the distributions of economic variables at unanticipated times – is common. As we shall illustrate, extrinsic unpredictability has dramatic consequences for the standard macroeconomic forecasting models used by governments around the world – models known as ‘dynamic stochastic general equilibrium’ models – or DSGE models. ...[continue]...
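Hendry and Mizon's stationarity point can be illustrated with a toy simulation (my example, not theirs): a conditional-expectation forecast that is optimal before an unanticipated location shift in the distribution becomes systematically biased after it.

```python
import numpy as np

rng = np.random.default_rng(0)

# Pre-break regime: y ~ N(mu0, 1). The conditional-expectation forecast,
# optimal under stationarity, is the old mean mu0.
mu0, mu1 = 2.0, -1.0                  # illustrative pre- and post-shift means
pre = rng.normal(mu0, 1.0, 5000)
post = rng.normal(mu1, 1.0, 5000)     # unanticipated location shift

forecast = pre.mean()                 # ~ mu0: the "rational" forecast

# After the shift the same forecast is systematically wrong:
# forecast errors no longer average to zero.
err = post - forecast
print(err.mean())                     # roughly mu1 - mu0 = -3
```

Nothing in the forecaster's information set signals the break, so the bias persists until the model is respecified, which is the mechanism behind the "fire station that burns down" analogy.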

Update: [nerdy] Reply to Hendry and Mizon: we have DSGE models with time-varying parameters and variances.

Tuesday, June 24, 2014

'Was the Neoclassical Synthesis Unstable?'

The last paragraph from a much longer argument by Simon Wren-Lewis:

Was the neoclassical synthesis unstable?: ... Of course we have moved on from the 1980s. Yet in some respects we have not moved very far. With the counter revolution we swung from one methodological extreme to the other, and we have not moved much since. The admissibility of models still depends on their theoretical consistency rather than consistency with evidence. It is still seen as more important when building models of the business cycle to allow for the endogeneity of labour supply than to allow for involuntary unemployment. What this means is that many macroeconomists who think they are just ‘taking theory seriously’ are in fact applying a particular theoretical view which happens to suit the ideology of the counter revolutionaries. The key to changing that is to first accept it.

Saturday, May 10, 2014

Replication in Economics

Thomas Kneib sent me the details of this project in early April after a discussion about it with one of his Ph.D. students (Jan Höffler) at the INET conference, and I've been meaning to post something on it but have been negligent. So I'm glad that Dave Giles picked up the slack:

Replication in Economics: I was pleased to receive an email today, alerting me to the "Replication in Economics" wiki at the University of Göttingen:

"My name is Jan H. Höffler, I have been working on a replication project funded by the Institute for New Economic Thinking during the last two years and found your blog that I find very interesting. I like very much that you link to data and code related to what you write about. I thought you might be interested in the following:

We developed a wiki website that serves as a database of empirical studies, the availability of replication material for them and of replication studies: http://replication.uni-goettingen.de

It can help for research as well as for teaching replication to students. We taught seminars at several faculties internationally - also in Canada, at UofT - for which the information of this database was used. In the starting phase the focus was on some leading journals in economics, and we now cover more than 1800 empirical studies and 142 replications. Replication results can be published as replication working papers of the University of Göttingen's Center for Statistics.

Teaching and providing access to information will raise awareness for the need for replications, provide a basis for research about the reasons why replications so often fail and how this can be changed, and educate future generations of economists about how to make research replicable.

I would be very grateful if you could take a look at our website, give us feedback, register and vote which studies should be replicated – votes are anonymous. If you could also help us to spread the message about this project, this would be most appreciated."

I'm more than happy to spread the word, Jan. I've requested an account, and I'll definitely be getting involved with your project. This looks like a great venture!

Friday, May 09, 2014

Economists and Methodology

Simon Wren-Lewis:

Economists and methodology: ...very few economists write much about methodology. This would be understandable if economics was just like some other discipline where methodological discussion was routine. This is not the case. Economics is not like the physical sciences for well known reasons. Yet economics is not like most other social sciences either: it is highly deductive, highly abstractive (in the non-philosophical sense) and rarely holistic. ...
This is a long winded way of saying that the methodology used by economics is interesting because it is unusual. Yet, as I say, you will generally not find economists writing about methodology. One reason for this is ... a feeling that the methodology being used is unproblematic, and therefore requires little discussion.
I cannot help giving the example of macroeconomics to show that this view is quite wrong. The methodology of macroeconomics in the 1960s was heavily evidence based. Microeconomics was used to suggest aggregate relationships, but not to determine them. Consistency with the data (using some chosen set of econometric criteria) often governed what was or was not allowed in a parameterised (numerical) model, or even a theoretical model. It was a methodology that some interpreted as Popperian. The methodology of macroeconomics now is very different. Consistency with microeconomic theory governs what is in a DSGE model, and evidence plays a much more indirect role. Now I have only a limited knowledge of the philosophy of science..., but I know enough to recognise this as an important methodological change. Yet I find many macroeconomists just assume that their methodology is unproblematic, because it is what everyone mainstream currently does. ...
... The classic example of an economist writing about methodology is Friedman’s Essays in Positive Economics. This puts forward an instrumentalist view: the idea that realism of assumptions do not matter, it is results that count.
Yet does instrumentalism describe Friedman’s major contributions to macroeconomics? Well one of those was the expectations augmented Phillips curve. ... Friedman argued that the coefficient on expected inflation should be one. His main reason for doing so was not that such an adaptation predicted better, but because it was based on better assumptions about what workers were interested in: real rather than nominal wages. In other words, it was based on more realistic assumptions. ...
Economists do not think enough about their own methodology. This means economists are often not familiar with methodological discussion, which implies that using what they write on the subject as evidence about what they do can be misleading. Yet most methodological discussion of economics is (and should be) about what economists do, rather than what they think they do. That is why I find that the more interesting and accurate methodological writing on economics looks at the models and methods economists actually use...

Monday, May 05, 2014

'Refocusing Economics Education'

Antonio Fatás (each of the four points below are explained in detail in the original post):

Refocusing economics education: Via Mark Thoma I read an interesting article about how the mainstream economics curriculum needs to be revamped (Wren-Lewis also has some nice thoughts on this issue).

I am sympathetic to some of the arguments made in those posts and the need for some serious rethinking of the way economics is taught but I would put the emphasis on slightly different arguments. First, I am not sure the recent global crisis should be the main reason to change the economics curriculum. Yes, economists failed to predict many aspects of the crisis but my view is that it was not because of the lack of tools or understanding. We have enough models in economics that explain most of the phenomena that caused and propagated the global financial crisis. There are plenty of models where individuals are not rational, where financial markets are driven by bubbles, with multiple equilibria,... that one can use to understand the last decade. We do have all these tools but as economics teachers (and researchers) we need to choose which ones to focus on. And here is where we failed. And we did it before and during the crisis but we also did it earlier. Why aren't we focusing on the right models or methodology? Here is my list of mistakes we make in our teaching, which might also reflect on our research:

#1 Too much theory, not enough emphasis on explaining empirical phenomena. ...

#2 Too many counterintuitive results. Economists like to teach things that are surprising. ...

#3 The need for a unified theory. ...

#4 We teach what our audience wants to hear. ...

I also believe the sociology within the profession needs to change.

Thursday, March 27, 2014

'The Misuse of Theoretical Models in Finance and Economics'

Stanford University's Paul Pfleiderer:

Chameleons: The Misuse of Theoretical Models in Finance and Economics, by Paul Pfleiderer, March 2014: Abstract In this essay I discuss how theoretical models in finance and economics are used in ways that make them “chameleons” and how chameleons devalue the intellectual currency and muddy policy debates. A model becomes a chameleon when it is built on assumptions with dubious connections to the real world but nevertheless has conclusions that are uncritically (or not critically enough) applied to understanding our economy. I discuss how chameleons are created and nurtured by the mistaken notion that one should not judge a model by its assumptions, by the unfounded argument that models should have equal standing until definitive empirical tests are conducted, and by misplaced appeals to “as-if” arguments, mathematical elegance, subtlety, references to assumptions that are “standard in the literature,” and the need for tractability.

Sunday, March 23, 2014

On Greg Mankiw's 'Do No Harm'

A rebuttal to Greg Mankiw's claim that the government should not interfere in voluntary exchanges. This is from Rakesh Vohra at Theory of the Leisure Class:

Do No Harm & Minimum Wage: In the March 23rd edition of the NY Times Mankiw proposes a 'do no harm' test for policy makers:

…when people have voluntarily agreed upon an economic arrangement to their mutual benefit, that arrangement should be respected.

There is a qualifier for negative externalities, and he goes on to say:

As a result, when a policy is complex, hard to evaluate and disruptive of private transactions, there is good reason to be skeptical of it.

Minimum wage legislation is offered as an example of a policy that fails the do no harm test. ...

There is an immediate 'heart strings' argument against the test, because indentured servitude passes the 'do no harm' test. ... I want to focus instead on two other aspects of the 'do no harm' principle contained in the words 'voluntarily' and 'benefit'. What is voluntary and benefit compared to what? ...

When parties negotiate to their mutual benefit, it is to their benefit relative to the status quo. When the status quo presents one agent an outside option that is untenable, say starvation, is bargaining voluntary, even if the other agent is not directly threatening starvation? The difficulty with the 'do no harm' principle in policy matters is the assumption that the status quo does less harm than a change in it would. This is not clear to me at all. Let me illustrate this...

Assuming a perfectly competitive market, imposing a minimum wage constraint above the equilibrium wage would reduce total welfare. What if the labor market were not perfectly competitive? In particular, suppose it was a monopsony employer constrained to offer the same wage to everyone employed. Then, imposing a minimum wage above the monopsonist’s optimal wage would increase total welfare.

[There is also an example based upon differences in patience that I left out.]
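Vohra's monopsony case is easy to verify with a linear example (hypothetical numbers, not from his post): with inverse labor supply w = L and marginal revenue product 12 - L, a minimum wage set at the competitive level raises both employment and total surplus.

```python
# Illustrative linear monopsony example (made-up curves):
# inverse labor supply w(L) = L, marginal revenue product MRP(L) = 12 - L.

def surplus(L):
    # total surplus = integral from 0 to L of (MRP - inverse supply)
    # = integral of (12 - 2L) dL = 12L - L^2
    return 12 * L - L ** 2

# Monopsonist equates MRP with the marginal cost of labor (2L): 12 - L = 2L
L_monopsony = 4.0        # and pays the supply wage w = 4
# Competitive benchmark: MRP = inverse supply: 12 - L = L
L_competitive = 6.0      # wage w = 6

# A minimum wage of 6 flattens the monopsonist's labor cost curve up to that
# point, so it hires until MRP = 6, i.e. L = 6: employment AND welfare rise.
print(surplus(L_monopsony), surplus(L_competitive))  # 32.0 36.0
```

Employment rises from 4 to 6 and total surplus from 32 to 36, exactly the reversal of the perfectly competitive prediction that the quoted passage describes.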

Friday, March 21, 2014

'Labor Markets Don't Clear: Let's Stop Pretending They Do'

Roger Farmer:

Labor Markets Don't Clear: Let's Stop Pretending They Do: Beginning with the work of Robert Lucas and Leonard Rapping in 1969, macroeconomists have modeled the labor market as if the wage always adjusts to equate the demand and supply of labor.

I don't think that's a very good approach. It's time to drop the assumption that the demand equals the supply of labor.
Why would you want to delete the labor market clearing equation from an otherwise standard model? Because setting the demand equal to the supply of labor is a terrible way of understanding business cycles. ...
Why is this a big deal? Because 90% of the macro seminars I attend, at conferences and universities around the world, still assume that the labor market is an auction where anyone can work as many hours as they want at the going wage. Why do we let our students keep doing this?

'The Counter-Factual & the Fed’s QE'

I tried to make this point in a recent column (it was about fiscal rather than monetary policy, but the same point applies), but I think Barry Ritholtz makes the point better and more succinctly:

Understanding Why You Think QE Didn't Work, by Barry Ritholtz: Maybe you have heard a line that goes something like this: The weak recovery is proof that the Federal Reserve’s program of asset purchases, otherwise known as quantitative easing, doesn't work.
If you were the one saying those words, you don't understand the counterfactual. ...
This flawed analytical paradigm has many manifestations, and not just in the investing world. They all rely on the same equation: If you do X, and there is no measurable change, X is therefore ineffective.
The problem with this “non-result result” is that it ignores what would have occurred otherwise. Might “no change” be an improvement from what otherwise would have happened? No change, last time I checked, is better than a free-fall.
If you are testing a new medication to reduce tumors, you want to see what happened to the group that didn't get the test therapy. Maybe this control group experienced rapid tumor growth. Hence, a result where there is no increase in tumor mass in the group receiving the therapy would be considered a very positive outcome.
We run into the same issue with QE. ... Without that control group, we simply don't know. ...

Friday, February 21, 2014

'What Game Theory Means for Economists'

At MoneyWatch:

Explainer: What "game theory" means for economists, by Mark Thoma: Coming upon the term "game theory" this week, your first thought would likely be about the Winter Olympics in Sochi. But here we're going to discuss how game theory applies in economics, where it's widely used in topics far removed from the ski slopes and ice rinks where elite athletes compete. ...

Saturday, February 15, 2014

'Microfoundations and Mephistopheles'

Paul Krugman continues the discussion on "whether New Keynesians made a Faustian bargain":

Microfoundations and Mephistopheles (Wonkish): Simon Wren-Lewis asks whether New Keynesians made a Faustian bargain by accepting the New Classical diktat that models must be grounded in intertemporal optimization — whether they purchased academic respectability at the expense of losing their ability to grapple usefully with the real world.
Wren-Lewis’s answer is no, because New Keynesians were only doing what they would have wanted to do even if there hadn’t been a de facto blockade of the journals against anything without rational-actor microfoundations. He has a point: long before anyone imagined doing anything like real business cycle theory, there had been a steady trend in macro toward grounding ideas in more or less rational behavior. The life-cycle model of consumption, for example, was clearly a step away from the Keynesian ad hoc consumption function toward modeling consumption choices as the result of rational, forward-looking behavior.
But I think we need to be careful about defining what, exactly, the bargain was. I would agree that being willing to use models with hyperrational, forward-looking agents was a natural step even for Keynesians. The Faustian bargain, however, was the willingness to accept the proposition that only models that were microfounded in that particular sense would be considered acceptable. ...
So it was the acceptance of the unique virtue of one concept of microfoundations that constituted the Faustian bargain. And one thing you should always know, when making deals with the devil, is that the devil cheats. New Keynesians thought that they had won some acceptance from the freshwater guys by adopting their methods; but when push came to shove, it turned out that there wasn’t any real dialogue, and never had been.

My view is that micro-founded models are useful for answering some questions, but other types of models are best for other questions. There is no one model that is best in every situation; the model that should be used depends upon the question being asked. I've made this point many times, most recently in this column, and also in this post from September 2011 that repeats arguments from September 2009:

New Old Keynesians?: Tyler Cowen uses the term "New Old Keynesian" to describe "Paul Krugman, Brad DeLong, Justin Wolfers and others." I don't know if I am part of the "and others" or not, but in any case I resist being assigned a particular label.

Why? Because I believe the model we use depends upon the questions we ask (this is a point emphasized by Peter Diamond at the recent Nobel Meetings in Lindau, Germany, and echoed by other speakers who followed him). If I want to know how monetary authorities should respond to relatively mild shocks in the presence of price rigidities, the standard New Keynesian model is a good choice. But if I want to understand the implications of a breakdown in financial intermediation and the possible policy responses to it, those models aren't very informative. They weren't built to answer this question (some variations do get at this, but not in a fully satisfactory way).

Here's a discussion of this point from a post written two years ago:

There is no grand, unifying theoretical structure in economics. We do not have one model that rules them all. Instead, what we have are models that are good at answering some questions - the ones they were built to answer - and not so good at answering others.

If I want to think about inflation in the very long run, the classical model and the quantity theory is a very good guide. But the model is not very good at looking at the short-run. For questions about how output and other variables move over the business cycle and for advice on what to do about it, I find the Keynesian model in its modern form (i.e. the New Keynesian model) to be much more informative than other models that are presently available.

But the New Keynesian model has its limits. It was built to capture "ordinary" business cycles driven by price sluggishness of the sort that can be captured by the Calvo model of price rigidity. The standard versions of this model do not explain how financial collapses of the type we just witnessed come about, hence they have little to say about what to do about them (which makes me suspicious of the results touted by people using multipliers derived from DSGE models based upon ordinary price rigidities). For these types of disturbances, we need some other type of model, but it is not clear what model is needed. There is no generally accepted model of financial catastrophe that captures the variety of financial market failures we have seen in the past.
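For reference, the Calvo friction mentioned above delivers, after log-linearization, the standard New Keynesian Phillips curve (textbook notation, not specific to this post):

```latex
\pi_t = \beta\,\mathbb{E}_t \pi_{t+1} + \kappa\, x_t,
\qquad \kappa = \frac{(1-\theta)(1-\beta\theta)}{\theta}\,\omega,
```

where $\theta$ is the per-period probability that a firm cannot reset its price, $x_t$ is the output gap, and $\omega$ collects preference and technology parameters. Nothing in this structure involves financial intermediation or balance sheets, which is why the standard versions of the model have so little to say about financial collapse.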

But what model do we use? Do we go back to old Keynes, to the 1978 model that Robert Gordon likes, do we take some of the variations of the New Keynesian model that include effects such as financial accelerators and try to enhance those, is that the right direction to proceed? Are the Austrians right? Do we focus on Minsky? Or do we need a model that we haven't discovered yet?

We don't know, and until we do, I will continue to use the model I think gives the best answer to the question being asked. The reason that many of us looked backward for a model to help us understand the present crisis is that none of the current models were capable of explaining what we were going through. The models were largely constructed to analyze policy in the context of a Great Moderation, i.e. within a fairly stable environment. They had little to say about financial meltdown. My first reaction was to ask if the New Keynesian model had any derivative forms that would allow us to gain insight into the crisis and what to do about it and, while there were some attempts in that direction, the work was somewhat isolated and had not gone through the type of thorough analysis needed to develop robust policy prescriptions. There was something to learn from these models, but they really weren't up to the task of delivering specific answers. That may come, but we aren't there yet.

So, if nothing in the present is adequate, you begin to look to the past. The Keynesian model was constructed to look at exactly the kinds of questions we needed to answer, and as long as you are aware of the limitations of this framework - the ones that modern theory has discovered - it does provide you with a means of thinking about how economies operate when they are running at less than full employment. This model had already worried about fiscal policy at the zero interest rate bound, it had already thought about Say's law, the paradox of thrift, monetary versus fiscal policy, changing interest and investment elasticities in a crisis, etc., etc., etc. We were in the middle of a crisis and didn't have time to wait for new theory to be developed, we needed answers, answers that the elegant models that had been constructed over the last few decades simply could not provide. The Keynesian model did provide answers. We knew the answers had limitations - we were aware of the theoretical developments in modern macro and what they implied about the old Keynesian model - but it also provided guidance at a time when guidance was needed, and it did so within a theoretical structure that was built to be useful at times like we were facing. I wish we had better answers, but we didn't, so we did the best we could. And the best we could involved at least asking what the Keynesian model would tell us, and then asking if that advice has any relevance today. Sometimes it didn't, but that was no reason to ignore the answers when it did.

[So, depending on the question being asked, I am a New Keynesian, an Old Keynesian, a Classicist, etc.]

Friday, February 14, 2014

'Are New Keynesian DSGE Models a Faustian Bargain?'

Simon Wren-Lewis:

 Are New Keynesian DSGE models a Faustian bargain?: Some write as if this were true. The story is that after the New Classical counter revolution, Keynesian ideas could only be reintroduced into the academic mainstream by accepting a whole load of New Classical macro within DSGE models. This has turned out to be a Faustian bargain, because it has crippled the ability of New Keynesians to understand subsequent real world events. Is this how it happened? It is true that New Keynesian models are essentially RBC models plus sticky prices. But is this because New Keynesian economists were forced to accept the RBC structure, or did they voluntarily do so because they thought it was a good foundation on which to build? ...

Saturday, January 25, 2014

'Is Macro Giving Economics a Bad Rap?'

Chris House defends macro:

Is Macro Giving Economics a Bad Rap?: Noah Smith really has it in for macroeconomists. He has recently written an article in The Week in which he claims that macro is one of the weaker fields in economics...

I think the opposite is true. Macro is one of the stronger fields, if not the strongest ... Macro is quite productive and overall quite healthy. There are several distinguishing features of macroeconomics which set it apart from many other areas in economics. In my assessment, along most of these dimensions, macro comes out looking quite good.

First, macroeconomists are constantly comparing models to data. ... Holding theories up to the data is a scary and humiliating step but it is a necessary step if economic science is to make progress. Judged on this basis, macro is to be commended...

Second, in macroeconomics, there is a constant push to quantify theories. That is, there is always an effort to attach meaningful parameter values to the models. You can have any theory you want but at the end of the day, you are interested not only in the idea itself, but also in the magnitude of the effects. This is again one of the ways in which macro is quite unlike other fields.

Third, when the models fail (and they always fail eventually), the response of macroeconomists isn’t to simply abandon the model, but rather they highlight the nature of the failure.  ...

Lastly, unlike many other fields, macroeconomists need to have a wide array of skills and familiarity with many sub-fields of economics. As a group, macroeconomists have knowledge of a wide range of analytical techniques, probably better knowledge of history, and greater familiarity and appreciation of economic institutions than the average economist.

In his opening remarks, Noah concedes that macro is “the glamor division of econ”. He’s right. What he doesn’t tell you is that the glamour division is actually doing pretty well. ...

Sunday, January 19, 2014

'Rational Agents: Irrational Markets'

Roger Farmer:

Rational Agents: Irrational Markets: Bob Shiller wrote an interesting piece in today's NY Times on the irrationality of human action. Shiller argues that the economist's conception of human beings as rational is hard to square with the behavior of asset markets.
Although I agree with Shiller that human action is inadequately captured by the assumptions most economists make about behavior, I am not convinced that we need to go much beyond the rationality assumption to understand what causes financial crises or why they are so devastatingly painful for large numbers of people. The assumption that agents maximize utility can get us a very, very long way. ...
In my own work, I have shown that the labor market can go very badly wrong even when everybody is rational. My coauthors and I showed in a recent paper that the same idea holds in financial markets. Even when individuals are assumed to be rational, financial markets may function very badly. ...
Miles Kimball and I have both been arguing that stock market fluctuations are inefficient and we both think that government should act to stabilize the asset markets. Miles' position is much closer to that of Bob Shiller; he thinks that agents are not always rational in the sense of Edgeworth. Miles and Bob may well be right. But in my view, the argument for stabilizing asset markets is much stronger. Even if we accept that agents are rational, it does not follow that swings in asset prices are Pareto efficient. But whether the motive arises from irrational people or irrational markets, Miles and I agree: We can, and should, design an institution that takes advantage of the government's ability to trade on behalf of the unborn. More on that in a future post. ...

Saturday, January 18, 2014

'The Rationality Debate, Simmering in Stockholm'

Robert Shiller:

The Rationality Debate, Simmering in Stockholm, by Robert Shiller, Commentary, NY Times: Are people really rational in their economic decision making? That question divides the economics profession today, and the divisions were evident at the Nobel Week events in Stockholm last month.
There were related questions, too: Does it make sense to suppose that economic decisions or market prices can be modeled in the precise way that mathematical economists have traditionally favored? Or is there some emotionality in all of us that defies such modeling?
This debate isn’t merely academic. It’s fundamental, and the answers affect nearly everyone. Are speculative market booms and busts — like those that led to the recent financial crisis — examples of rational human reactions to new information, or of crazy fads and bubbles? Is it reasonable to base theories of economic behavior, which surely has a rational, calculating component, on the assumption that only that component matters?
The three of us who shared the Nobel in economic science — Eugene F. Fama, Lars Peter Hansen and I — gave very different answers in our Nobel lectures. ...

'Paul Krugman & the Nature of Economics'

Chris Dillow:

Paul Krugman & the nature of economics: Paul Krugman is being accused of hypocrisy for calling for an extension of unemployment benefits when one of his textbooks says "Generous unemployment benefits can increase both structural and frictional unemployment." I think he can be rescued from this charge, if we recognize that economics is not like (some conceptions of) the natural sciences, in that its theories are not universally applicable but rather of only local and temporal validity.
What I mean is that "textbook Krugman" is right in normal times when aggregate demand is highish. In such circumstances, giving people an incentive to find work through lower unemployment benefits can reduce frictional unemployment (the coexistence of vacancies and joblessness) and so increase output and reduce inflation.
But these might well not be normal times. It could well be that demand for labour is unusually weak; low wage inflation and employment-population ratios suggest as much. In this world, the priority is not so much to reduce frictional unemployment as to reduce "Keynesian unemployment". And increased unemployment benefits - insofar as they are a fiscal expansion - might do this. When "columnist Krugman" says that "enhanced [unemployment insurance] actually creates jobs when the economy is depressed", the emphasis must be upon the last five words.
Indeed, incentivizing people to find work when it is not (so much) available might be worse than pointless. Cutting unemployment benefits might incentivize people to turn to crime rather than legitimate work.
So, it could be that "columnist Krugman" and "textbook Krugman" are both right, but they are describing different states of the world - and different facts require different models...

Friday, January 03, 2014

'Economics as Craft'

Having argued the same thing several times, including recently, I agree with Dani Rodrik:

Economics as craft: I have an article in the IAS’s quarterly publication, the Institute Letter, on the state of Economics.  Despite the evident role of the economics profession in the recent crisis and my critical views on conventional wisdom in globalization and development, my take on the discipline is rather positive.
Where we frequently go wrong as economists is to look for the “one right model” – the single story that provides the best universal explanation. Yet, the strength of economics is that it provides a panoply of context-specific models. The right explanation depends on the situation we find ourselves in. Sometimes the Keynesians are right, sometimes the classicals. Markets work sometimes along the lines of competitive models and sometimes along monopolistic models. The craft of economics consists in being able to diagnose which of the models applies best in a given historical and geographical context. ...

Thursday, January 02, 2014

The Tale of Joe of Metrika

This is semi-related to this post from earlier today. The tale of Joe of Metrika:

... Metrika began as a small village - little more than a coach-stop and a mandatory tavern at a junction in the highway running from the ancient data mines in the South, to the great city of Enlightenment, far to the North. In Metrika, the transporters of data of all types would pause overnight on their long journey; seek refreshment at the tavern; and swap tales of their experiences on the road.

To be fair, the data transporters were more than just humble freight carriers. The raw material that they took from the data mines was largely unprocessed. The vast mountains of raw numbers usually contained valuable gems and nuggets of truth, but typically these were buried from sight. The data transporters used the insights that they gained from their raucous, beer-fired discussions and arguments (known locally as "seminars") with the Metrika locals at the tavern to help them to sift through the data and extract the valuable jewels. With their loads considerably lightened, these "data-miners" then continued on their journey to the City of Enlightenment in a much improved frame of mind, hangovers notwithstanding!

Over time, the town of Metrika prospered and grew as the talents of its citizens were increasingly recognized and valued by those in the surrounding districts, and by the data transporters.

Young Joe grew up happily, supported by his family of econometricians, and he soon developed the skills that were expected of his societal class. He honed his computing skills; developed a good nose for "dodgy" data; and studiously broadened and deepened his understanding of the various tools wielded by the artisans in the neighbouring town of Statsbourg.

In short, he was a model child!

But - he was torn! By the time that he reached the tender age of thirteen, he felt the need to make an important, life-determining, decision.

Should he align his talents with the burly crew who frequented the gym near his home - the macroeconometricians - or should he throw in his lot with the physically challenged bunch of empirical economists known locally as the microeconometricians? ...

Full story here.

'Tribalism, Biology, and Macroeconomics'

Paul Krugman:

Tribalism, Biology, and Macroeconomics: ...Pew has a new report about changing views on evolution. The big takeaway is that a plurality of self-identified Republicans now believe that no evolution whatsoever has taken place since the day of creation... The move is big: an 11-point decline since 2009. ... Democrats are slightly more likely to believe in evolution than they were four years ago.
So what happened after 2009 that might be driving Republican views? The answer is obvious, of course: the election of a Democratic president.
Wait — is the theory of evolution somehow related to Obama administration policy? Not that I’m aware of... The point, instead, is that Republicans are being driven to identify in all ways with their tribe — and the tribal belief system is dominated by anti-science fundamentalists. For some time now it has been impossible to be a good Republican while believing in the reality of climate change; now it’s impossible to be a good Republican while believing in evolution.
And of course the same thing is happening in economics. As recently as 2004, the Economic Report of the President (pdf) of a Republican administration could espouse a strongly Keynesian view...; the report — presumably written by Greg Mankiw — used the “s-word”, calling for “short-term stimulus”.
Given that intellectual framework, the reemergence of a 30s-type economic situation ... should have made many Republicans more Keynesian than before. Instead, at just the moment that demand-side economics became obviously critical, we saw Republicans — the rank and file, of course, but economists as well — declare their fealty to various forms of supply-side economics, whether Austrian or Lafferian or both. ...
And look, this has to be about tribalism. All the evidence ... has pointed in a Keynesian direction; but Keynes-hatred (and hatred of other economists whose names begin with K) has become a tribal marker, part of what you have to say to be a good Republican.

Before the Great Recession, macroeconomists seemed to be converging to a single intellectual framework. In Olivier Blanchard's famous words:

after the explosion (in both the positive and negative meaning of the word) of the field in the 1970s, there has been enormous progress and substantial convergence. For a while - too long a while - the field looked like a battlefield. Researchers split in different directions, mostly ignoring each other, or else engaging in bitter fights and controversies. Over time however, largely because facts have a way of not going away, a largely shared vision both of fluctuations and of methodology has emerged. Not everything is fine. Like all revolutions, this one has come with the destruction of some knowledge, and suffers from extremism, herding, and fashion. But none of this is deadly. The state of macro is good.

The recession revealed that the "extremism, herding, and fashion" were much worse than many of us realized, and the rifts that have reemerged are as strong as ever. What it didn't reveal is how to move beyond this problem. I thought evidence would matter more than it does, but somehow we seem to have lost the ability to distinguish between competing theoretical structures based upon econometric evidence (if we ever had it). The state of macro is not good, and the path to improvement is hard to see, but it must involve a shared agreement over the evidence-based means through which the profession on both sides of these debates can embrace or reject particular theoretical models.

Thursday, December 19, 2013

'Is Finance Guided by Good Science or Convincing Magic?'

Tim Johnson:

Is finance guided by good science or convincing magic?: Noah Smith posted a piece on how the "Freshwater vs. Saltwater" divide splits macro, but not finance. As a mathematician, I didn't find the substance of the piece that interesting (a nice explanation is here), but there was a comment from Stephen Williamson that really caught my attention:

Another thought. You've persisted with the view that when the science is crappy - whether because of bad data or some kind of bad equilibrium I guess - there is disagreement. ... What's at stake in finance? The flow of resources to finance people comes from Wall Street. All the Wall Street people care about is making money, so good science gets rewarded. I'm not saying that macroeconomic science is bad, only that there are plenty of opportunities for policymakers to be sold schlock macro pseudo-science.

 What I aim to do in this post is offer an explanation for the 'divide' in economics from the perspective of moral philosophy and on this basis argue that finance is not guided by science but by magic. ...

'More on the Illusion of Superiority'

Simon Wren-Lewis:

More on the illusion of superiority: For economists, and those interested in methodology. Tony Yates responds to my comment on his post on microfoundations, but really just restates the microfoundations purist position. (Others have joined in - see links below.) As Noah Smith confirms, this is the position that many macroeconomists believe in, and many are taught, so it’s really important to see why it is mistaken. There are three elements I want to focus on here: the Lucas critique, what we mean by theory, and time.
My argument can be put as follows: an ad hoc but data inspired modification to a microfounded model (what I call an eclectic model) can produce a better model than a fully microfounded model. Tony responds “If the objective is to describe the data better, perhaps also to forecast the data better, then what is wrong with this is that you can do better still, and estimate a VAR.” This idea of “describing the data better”, or forecasting, is a distraction, so let’s say I want a model that provides a better guide for policy actions. So I do not want to estimate a VAR. My argument still stands.
But what about the Lucas critique? ...[continue]...

[In Maui, will post as I can...]

Tuesday, December 17, 2013

'Four Missing Ingredients in Macroeconomic Models'

Antonio Fatas:

Four missing ingredients in macroeconomic models: It is refreshing to see top academics questioning some of the assumptions that economists have been using in their models. Krugman, Brad DeLong and many others are opening a methodological debate about what constitutes an acceptable economic model and how to validate its predictions. The role of micro foundations, the existence of a natural state towards which the economy gravitates,... are all very interesting debates that tend to be ignored (or assumed away) in academic research.

I would like to go further and add a few items to their list... In random order:

1. The business cycle is not symmetric. ... Interestingly, it was Milton Friedman who put forward the "plucking" model of business cycles as an alternative to the notion that fluctuations are symmetric. In Friedman's model, output can only be at or below potential, its maximum. If we were to rely on asymmetric models of the business cycle, our views on potential output and the natural rate of unemployment would be radically different. We would not be rewriting history to claim that in 2007 GDP was above potential in most OECD economies, and we would not be arguing that the natural unemployment rate in Southern Europe is very close to its actual rate.

2. ...most academic research is produced around models where small and frequent shocks drive economic fluctuations, as opposed to large and infrequent events. The disconnect probably comes from the fact that it is so much easier to write models with small and frequent shocks than having to define a (stochastic?) process for large events. It gets even worse if one thinks that recessions are caused by the dynamics generated during expansions. Most economic models rely on unexpected events to generate crises, and not on the internal dynamics that precede the crisis.

[A little bit of self-promotion: my paper with Ilian Mihov on the shape and length of recoveries presents some evidence in favor of these two hypotheses.]

3. There has to be more than price rigidity. ...

4. The notion that co-ordination across economic agents matters to explain the dynamics of business cycles receives very limited attention in academic research. ...

I am aware that there are plenty of papers that deal with these four issues, some of them published in the best academic journals. But most of these papers are not mainstream. Most economists are sympathetic to these assumptions but avoid writing papers using them because they are afraid they will be told that their assumptions are ad hoc and that the model does not have enough micro foundations (for the best criticism of this argument, read the latest post of Simon Wren-Lewis). Time for a change?

On the plucking model, see here and here.
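Friedman's plucking idea in Fatas's point 1 can be sketched in a few lines. This is my own toy illustration with made-up numbers, not drawn from Fatas's post or Friedman's paper: output rides along a trending ceiling, is occasionally "plucked" below it by a recessionary shock, and then recovers toward the ceiling, so fluctuations are asymmetric.

```python
# A toy "plucking" series (illustrative numbers only): output is pulled
# below a trending ceiling by occasional shocks and then recovers.

T = 60
potential = [100 + 0.5 * t for t in range(T)]  # trending ceiling (potential output)
plucks = {15: 10.0, 40: 6.0}                   # date -> size of downward shock

output, gap = [], 0.0
for t in range(T):
    gap += plucks.get(t, 0.0)  # a recession "plucks" output down...
    gap *= 0.8                 # ...and the gap then closes geometrically
    output.append(potential[t] - gap)

# The asymmetry Friedman stressed: output never exceeds the ceiling.
assert all(o <= p for o, p in zip(output, potential))
```

The contrast with a symmetric-shock model is visible directly: the series dips below trend and climbs back, but never overshoots the ceiling, and the depth of a pluck determines the size of the subsequent recovery rather than the other way around.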

Friday, December 13, 2013

Sticky Ideology

Paul Krugman:

Rudi Dornbusch and the Salvation of International Macroeconomics (Wonkish): ...Ken Rogoff had a very good paper on all this, in which he also says something about the state of affairs within the economics profession at the time:

The Chicago-Minnesota School maintained that sticky prices were nonsense and continued to advance this view for at least another fifteen years. It was the dominant view in academic macroeconomics. Certainly, there was a long period in which the assumption of sticky prices was a recipe for instant rejection at many leading journals. Despite the religious conviction among macroeconomic theorists that prices cannot be sticky, the Dornbusch model remained compelling to most practical international macroeconomists. This divergence of views led to a long rift between macroeconomics and much of mainstream international finance …

There are more than a few of us in my generation of international economists who still bear the scars of not being able to publish sticky-price papers during the years of new neoclassical repression.

Notice that this isn’t the evil Krugman talking; it’s the respectable Rogoff. Yet he too is in effect describing neoclassical macro as a sort of cult, actively suppressing alternative approaches. What he gets wrong is in the part I’ve elided with my “…”, in which he asserts that this is all behind us. As we saw when crisis struck, Chicago/Minnesota had in fact learned nothing and was pretty much unaware of the whole New Keynesian enterprise — and from what I hear about macro hiring, the suppression of ideas at odds with the cult remains in full force. ...

Saturday, December 07, 2013

Econometrics and 'Big Data'

I tweeted this link, and it's getting far, far more retweets than I would have expected, so I thought I'd note it here:

Econometrics and "Big Data", by Dave Giles: In this age of "big data" there's a whole new language that econometricians need to learn. ... What do you know about such things as:

  • Decision trees 
  • Support vector machines
  • Neural nets 
  • Deep learning
  • Classification and regression trees
  • Random forests
  • Penalized regression (e.g., the lasso, lars, and elastic nets)
  • Boosting
  • Bagging
  • Spike and slab regression?

Probably not enough!

If you want some motivation to rectify things, a recent paper by Hal Varian ... titled, "Big Data: New Tricks for Econometrics" ... provides an extremely readable introduction to several of these topics.

He also offers a valuable piece of advice:

"I believe that these methods have a lot to offer and should be more widely known and used by economists. In fact, my standard advice to graduate students these days is 'go to the computer science department and take a class in machine learning'."
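As a concrete taste of one item on Giles's list, here is a minimal sketch of penalized regression: a lasso fit by cyclic coordinate descent with soft-thresholding. This is my own illustration (not from Giles's or Varian's pieces), in plain Python with made-up data; the point is just that the L1 penalty drives an irrelevant coefficient exactly to zero.

```python
# Minimal lasso via cyclic coordinate descent (illustrative sketch only;
# the data and penalty value are made up).

def soft_threshold(rho, lam):
    """The soft-thresholding operator at the heart of the lasso."""
    if rho > lam:
        return rho - lam
    if rho < -lam:
        return rho + lam
    return 0.0

def lasso(X, y, lam, sweeps=100):
    """Minimize (1/2)||y - Xw||^2 + lam * ||w||_1 (no intercept)."""
    n_features = len(X[0])
    w = [0.0] * n_features
    for _ in range(sweeps):
        for j in range(n_features):
            # correlation of feature j with the partial residual
            rho = sum(X[i][j] * (y[i] - sum(X[i][k] * w[k]
                      for k in range(n_features) if k != j))
                      for i in range(len(y)))
            norm = sum(X[i][j] ** 2 for i in range(len(y)))
            w[j] = soft_threshold(rho, lam) / norm
    return w

# y depends only on the first feature; the second is orthogonal noise.
X = [[0, 1], [1, -2], [2, 0], [3, 2], [4, -1]]
y = [0.0, 3.0, 6.0, 9.0, 12.0]
w = lasso(X, y, lam=1.0)
print(w)  # first weight near 3, second exactly 0
```

With the penalty switched off (lam=0) the fit reverts to ordinary least squares and the first weight is exactly 3; the shrinkage-to-zero of useless coefficients is what makes the lasso a variable-selection device.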

Wednesday, December 04, 2013

'Microfoundations': I Do Not Think That Word Means What You Think It Means

Brad DeLong responds to my column on macroeconomic models:

“Microfoundations”: I Do Not Think That Word Means What You Think It Means

The basic point is this:

...New Keynesian models with more or less arbitrary micro foundations are useful for rebutting claims that all is for the best macroeconomically in this best of all possible macroeconomic worlds. But models with micro foundations are not of use in understanding the real economy unless you have the micro foundations right. And if you have the micro foundations wrong, all you have done is impose restrictions on yourself that prevent you from accurately fitting reality.
Thus your standard New Keynesian model will use Calvo pricing and model the current inflation rate as tightly coupled to the present value of expected future output gaps. Is this a requirement anyone really wants to put on the model intended to help us understand the world that actually exists out there? ...
After all, Ptolemy had microfoundations: Mercury moved more rapidly than Saturn because the Angel of Mercury beat his wings more rapidly than the Angel of Saturn and because Mercury was lighter than Saturn…
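The "tight coupling" DeLong refers to can be made explicit. Calvo pricing delivers the standard New Keynesian Phillips curve, which iterates forward into a present-value formula (this is the textbook result, stated here for reference rather than taken from DeLong's post):

```latex
% New Keynesian Phillips curve under Calvo pricing:
% \beta = discount factor, \kappa = slope, x_t = output gap
\pi_t = \beta E_t \pi_{t+1} + \kappa x_t
\quad\Longrightarrow\quad
\pi_t = \kappa \sum_{k=0}^{\infty} \beta^k E_t\, x_{t+k}
```

So current inflation is pinned down entirely by the expected discounted stream of future output gaps, which is precisely the restriction DeLong is questioning.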

Tuesday, December 03, 2013

One Model to Rule Them All?

Latest column:

Is There One Model to Rule Them All?: The recent shake-up at the research department of the Federal Reserve Bank of Minneapolis has rekindled a discussion about the best macroeconomic model to use as a guide for policymakers. Should we use modern New Keynesian models that performed so poorly prior to and during the Great Recession? Should we return to a modernized version of the IS-LM model that was built to explain the Great Depression and answer the questions we are confronting today? Or do we need a brand new class of models altogether? ...

Sunday, December 01, 2013

God Didn’t Make Little Green Arrows

Paul Krugman notes work by my colleague George Evans relating to the recent debate over the stability of GE models:

God Didn’t Make Little Green Arrows: Actually, they’re little blue arrows here. In any case, George Evans reminds me of a paper (pdf) he and his co-authors published in 2008 about stability and the liquidity trap, which he later used to explain what was wrong with the Kocherlakota notion (now discarded, but still apparently defended by Williamson) that low rates cause deflation.

The issue is the stability of the deflation steady state ("on the importance of little arrows"). This is precisely the issue George studied in his 2008 European Economic Review paper with E. Guse and S. Honkapohja. The following figure from that paper has the relevant little arrows:

[Figure: phase diagram for inflation and consumption expectations, from Evans, Guse, and Honkapohja (2008)]

This is the two-dimensional figure showing the phase diagram for inflation and consumption expectations under adaptive learning (in the New Keynesian model, both consumption, or output, expectations and inflation expectations are central). The intended steady state (marked by a star) is locally stable under learning, but the deflation steady state (given by the other intersection of black curves) is not locally stable, and there are nearby divergent paths with falling inflation and falling output. There is also a two-page summary in George's 2009 Annual Review of Economics paper.
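The stability logic can be seen in miniature with a stylized one-dimensional sketch. This is my own toy map, not the Evans-Guse-Honkapohja model, and all the numbers are made up; it only reproduces the qualitative feature described above: two steady states, with the targeted one locally stable under adaptive learning and the deflation one unstable.

```python
# A stylized map from expected to realized inflation (toy illustration,
# NOT the Evans-Guse-Honkapohja model): two steady states, one stable
# under adaptive learning and one not.

PI_TARGET = 2.0      # intended inflation steady state (locally stable)
PI_DEFLATION = -2.0  # deflation steady state (locally unstable)
C = 0.1              # curvature of the toy map (made-up number)

def realized_inflation(expected):
    """Toy mapping with fixed points exactly at PI_DEFLATION and PI_TARGET."""
    return expected - C * (expected - PI_DEFLATION) * (expected - PI_TARGET)

def learn(e0, gain=0.5, steps=100, floor=-10.0):
    """Adaptive learning: revise expectations toward realized inflation."""
    e = e0
    for _ in range(steps):
        e += gain * (realized_inflation(e) - e)
        if e < floor:  # on a divergent path; stop once clearly below floor
            break
    return e

# Starting between the steady states, expectations converge to the target;
# starting just below the deflation steady state, they spiral downward.
print(learn(0.0))   # close to PI_TARGET
print(learn(-2.1))  # below the floor: falling inflation, a divergent path
```

The "little arrows" point is exactly this: under perfect foresight one might read the deflation steady state as an attractor, but once expectations adjust adaptively, paths near it head away, which is the argument against the notion that pegging low rates delivers a stable deflationary equilibrium.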

The relevant policy issue came up in 2010 in connection with Kocherlakota's comments about interest rates, and I got George to make a video in Sept. 2010 that makes the implied monetary policy point.

I think it would be a step forward if the EER paper helped Williamson and others who have not understood the disequilibrium stability point. The full EER reference is Evans, George; Guse, Eran; and Honkapohja, Seppo, "Liquidity Traps, Learning and Stagnation," European Economic Review, Vol. 52, 2008, pp. 1438-1463.

Wednesday, November 27, 2013

'Who Made Economics?'

Daniel Little:

Who made economics?, by Daniel Little: The discipline of economics has a high level of intellectual status, even hegemony, in today’s social sciences — especially in universities in the United States. It also has a very specific set of defining models and theories that distinguish between “good” and “bad” economics. This situation suggests two topics for research: how did political economy and its successors ascend to this position of prestige in the social sciences? And how did this particular mix of techniques, problems, mathematical methods, and exemplar theoretical papers come to define the mainstream discipline? How did this governing disciplinary matrix develop and win the field?

One of the most interesting people taking on questions like these is Marion Fourcade. Her Economists and Societies: Discipline and Profession in the United States, Britain, and France, 1890s to 1990s was discussed in an earlier post (link). An early place where she expressed her views on these topics is in her 2001 article, “Politics, Institutional Structures, and the Rise of Economics: A Comparative Study” (link). There she describes the evolution of economics in these terms:

Since the middle of the nineteenth century, the study of the economy has evolved from a loose discursive "field," with no clear and identifiable boundaries, into a fully "professionalized" enterprise, relying on both a coherent and formalized framework, and extensive practical claims in administrative, business, and mass media institutions. (397)

And she argues that this process was contingent, path-dependent, and only loosely guided by a compass of “better” science:

Overall, contrary to the frequent assumption that economics is a universal and universally shared science, there seems to be considerable cross-national variation in (1) the [extent] and nature of the institutionalization of an economic knowledge field, (2) the forms of professional action of economists, and (3) intellectual traditions in the [field] of economics. (398)

Fourcade approaches this subject as a sociologist; so she wants to understand the institutional and structural factors that led to the shaping and stabilization of this field of knowledge.

Understanding the relationship between the institutional and intellectual aspects of knowledge production requires, first and foremost, a historical analysis of the conditions under which a coherent domain of discourse and practice was established in the first place. (398)

A key question in this article (and in Economists and Societies) is how the differences that exist between the disciplines of economics in France, Germany, Great Britain, and the US came to be. The core of the answer that she gives rests on her analysis of the relationships that existed between practitioners and the state: "A comparison of the four cases under investigation suggests that the entrenchment of the economics profession was profoundly shaped by the relationship of its practitioners to the larger political institutions and culture of their country" (432). So differences between economics in, say, France and the United States, are to be traced back to the different ways in which academic practitioners of economic analysis and policy recommendations were situated with regard to the institutions of the state.

It is possible to treat the history of ideas internally ("systems of ideas are driven by rational discussion of their implications") and externally ("systems of ideas are driven by the social needs and institutional arrangements of a certain time"). The best sociology of knowledge avoids this dichotomy, allowing for both the idea that a field of thought advances in part through the scientific and conceptual debates that occur within it and the idea that historically specific structures and institutions have important effects on the shape and direction of the development of a field. Fourcade avoids the dichotomy by treating seriously the economic reasoning that took place at a time and place, while also searching out the institutional and structural factors that favored this approach or that in a particular national setting.

This is sociology of knowledge done at a high level of resolution. Fourcade wants to identify the mechanisms through which "societal institutions" influence the production of knowledge in the four country contexts that she studies (Germany, Great Britain, France, and the US). She does not suggest that economics lacks scientific content or that economic debates do not have a rational structure of argument. But she does argue that the configuration of the field itself was not the product of rational scientific advance and discovery, but instead was shaped by the institutions of the university and the exigencies of the societies within which it developed.

Fourcade's own work suggests a different kind of puzzle -- this time in the development of the field of the sociology of knowledge. Fourcade's topic seems to be one that is tailor-made for treatment within the terms of Bourdieu's theory of a field. And in fact some of Fourcade's analysis of the institutional factors that influenced the success or failure of academic economists in Britain, Germany, or the US fits Bourdieu's theory very well. Bourdieu's book Homo Academicus appeared in 1984 in French and 1988 in English. But Fourcade does not make use of Bourdieu's ideas at all in the 2001 article -- some 17 years after Bourdieu's ideas were published. Reference to elements of Bourdieu's approach appears only in the 2009 book. There she writes:

Bourdieu found that the social sciences occupy a very peculiar position among all scientific fields in that external factors play an especially important part in determining these fields' internal stratification and structure of authority.... Within each disciplinary field, the subjective (i.e., agentic) and objective (i.e., structural) positions of individuals are "homologous": in other words, the polar opposition between "economic" and "cultural" capital is replicated at the field's level, and mirrors the orthodoxy/heterodoxy divide. (23)

So why was Bourdieu not considered in the 2001 article? This shift in orientation may be simply a feature of the author's own intellectual development. But it may also be diagnostic of the rise of Bourdieu's influence on the sociology of knowledge in the 90's and 00's. It would be interesting to see a graph of the frequency of references to the book since 1984.

(Gabriel Abend's treatment of the differences that exist between the paradigms of sociology in the United States and Mexico is of interest here as well; link.)

Tuesday, November 05, 2013

Do People Have Rational Expectations?

New column:

Do People Have Rational Expectations?, by Mark Thoma

Not always, and economic models need to take this into account.

Wednesday, October 30, 2013

'Economics, Good and Bad'

Chris Dillow:

Economics, good and bad: Attacks on mainstream economics such as this by Aditya Chakrabortty leave me hopelessly conflicted. ...[explains why he's conflicted]...
The division that matters is not so much between heterodox and mainstream economics, but between good economics and bad. I'll give just two examples of what I mean.
First, good economics tests itself against the facts. What makes Mankiw's defence of the 1% so risible is that it ducks out of the empirical question of whether neoclassical explanations for rising inequality are actually empirically valid. Just because something could be consistent with a theory does not mean that it is.
Secondly, good economics asks: which model (or better, which mechanism or which theory) fits the problem at hand? For example, if your question is "should I invest in this high-charging actively managed fund?" you must at least take the efficient market hypothesis as your starting point. But if you're asking "are markets prone to bubbles?" you might not. As Noah says, the EMH is a great guide for investors, but not so much for policy-makers.
It's in this sense that I don't like pieces like Aditya's. Ordinary everyday economics - of the sort that's useful for real people - isn't about bigthink and meta-theorizing, but about careful consideration of the facts.

Tuesday, October 29, 2013

'Big-Data Men Rewrite Government’s Tired Economic Models'

Via Wired:

The Next Big Thing You Missed: Big-Data Men Rewrite Government’s Tired Economic Models, by Marcus Wohlsen, Wired: The Consumer Price Index is one of the country’s most closely watched economic statistics... The trouble is that it’s compiled by the U.S. government, which is still stuck in the technological dark ages. This month, the index didn’t even arrive on time, thanks to the government shutdown.
David Soloff, co-founder of a San Francisco startup called Premise, believes the country needs something better. He believes we shouldn’t have to rely on the creaky wheels of a government bureaucracy for our vital economic data.
“It’s a half-a-billion dollars of budget allocated toward this in the U.S., and they’re closed,” Soloff said when I met him earlier this month during the depths of the shutdown, before questioning the effectiveness of the system even when it’s up and running. “The U.S…has got a pretty highly evolved stats-gathering infrastructure [compared to other countries], but it’s still kind of post-World War II old-school.”
In Soloff’s view, the government’s highly centralized approach to analyzing the health of the economy isn’t just technologically antiquated. It doesn’t take into account how much the rest of the world has been changed by technology. At Premise, the big idea is to measure economic trends around the world on a real-time, granular level, combining the best of machine learning with a small army of on-the-ground human data collectors that can gather new information about our economy as quickly as possible. ...
This is a company of the Big Data Age. ... While its current product offerings are focused on inflation and food security data, Soloff could see the platform expand to answer questions that government bureaucracies don’t touch...

But the article doesn't address the key questions of what the service will cost and who will have access to these data (e.g., FRED is free to all).

Tuesday, October 15, 2013

Are Rationality and the Efficient Markets Hypothesis Useful?

Just a quick note on the efficient markets hypothesis, rationality, and all that. I view these as important contributions not because they are accurate descriptions of the world (though they may come close in some cases), but because they give us an important benchmark for measuring departures from an ideal world. It's somewhat like studying the effects of gravity in an idealized system with no wind, etc. -- in a vacuum -- as a first step. If people say, yes, but it's always windy here, then we can account for those effects (though if we are dropping 100 pound weights from 10 feet, accounting for wind may not matter much, but if we are dropping something light from a much greater height then we would need to incorporate these forces). Same for the efficient markets hypothesis and rationality. If people say, in effect, but it's always windy here -- those models miss important behavioral effects, e.g. -- then the models need to be amended appropriately (though, like dropping heavy weights short distances in the wind, some markets may act close enough to idealized conditions to allow these models to be used). We have not done enough to amend models to account for departures from the ideal, but that doesn't mean the ideal models aren't useful benchmarks.
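
The weights-in-the-wind analogy can be made concrete with a toy simulation. Everything below (the masses, the linear drag term, the coefficients) is invented purely for illustration; the point is that the frictionless benchmark is nearly exact in one regime and badly wrong in the other:

```python
# Toy version of the benchmark argument: time to fall a given height,
# with and without a simple linear drag ("wind") term.
# All parameters are invented for illustration only.

def fall_time(mass, height, drag=0.0, g=9.81, dt=1e-3):
    """Euler-step dv/dt = g - (drag/mass)*v until the object falls `height`."""
    v = y = t = 0.0
    while y < height:
        v += (g - (drag / mass) * v) * dt
        y += v * dt
        t += dt
    return t

# Heavy weight, short drop: drag changes the fall time by well under 5%.
heavy_ideal = fall_time(45.0, 3.0)             # ~100 lb from ~10 ft, vacuum
heavy_windy = fall_time(45.0, 3.0, drag=0.5)   # same drop, with drag

# Light object, long drop: the idealized model is off by a huge factor.
light_ideal = fall_time(0.01, 100.0)
light_windy = fall_time(0.01, 100.0, drag=0.5)

print(heavy_windy / heavy_ideal)  # close to 1: benchmark is fine as-is
print(light_windy / light_ideal)  # far from 1: the model needs amending
```

The same logic carries over: some markets sit close enough to the frictionless benchmark that the idealized model is usable as-is, while others need the behavioral "drag" terms added.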

Anyway, just a quick thought...

Saturday, October 12, 2013

'Nominal Wage Rigidity in Macro: An Example of Methodological Failure'

Simon Wren-Lewis:

Nominal wage rigidity in macro: an example of methodological failure: This post develops a point made by Bryan Caplan (HT MT). I have two stock complaints about the dominance of the microfoundations approach in macro. Neither imply that the microfoundations approach is ‘fundamentally flawed’ or should be abandoned: I still learn useful things from building DSGE models. My first complaint is that too many economists follow what I call the microfoundations purist position: if it cannot be microfounded, it should not be in your model. Perhaps a better way of putting it is that they only model what they can microfound, not what they see. This corresponds to a standard method of rejecting an innovative macro paper: the innovation is ‘ad hoc’.

My second complaint is that the microfoundations used by macroeconomists is so out of date. Behavioural economics just does not get a look in. A good and very important example comes from the reluctance of firms to cut nominal wages. There is overwhelming empirical evidence for this phenomenon (see for example here (HT Timothy Taylor) or the work of Jennifer Smith at Warwick). The behavioral reasons for this are explored in detail in this book by Truman Bewley, which Bryan Caplan discusses here. Both money illusion and the importance of workforce morale are now well accepted ideas in behavioral economics.

Yet debates among macroeconomists about whether and why wages are sticky go on. ...

While we can debate why this is at the level of general methodology, the importance of this particular example to current policy is huge. Many have argued that the failure of inflation to fall further in the recession is evidence that the output gap is not that large. As Paul Krugman in particular has repeatedly suggested, the reluctance of workers or firms to cut nominal wages may mean that inflation could be much more sticky at very low levels, so the current behavior of inflation is not inconsistent with a large output gap. ... Yet this is hardly a new discovery, so why is macro having to rediscover these basic empirical truths? ...

He goes on to give an example of why this matters (failure to incorporate downward nominal wage rigidity caused policymakers to underestimate the size of the output gap by a large margin, and that led to a suboptimal policy response).
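
The rigidity under discussion has a sharp empirical signature: a spike of wage changes at exactly zero. A minimal simulation sketch, with an invented distribution of desired wage changes (the mean and spread below are made up for illustration):

```python
import random

random.seed(0)

# Toy downward nominal wage rigidity: firms draw a "desired" wage change,
# but refuse to cut nominal wages, so would-be cuts pile up at zero.
desired = [random.gauss(-0.01, 0.03) for _ in range(100_000)]  # mean -1%
actual = [max(d, 0.0) for d in desired]

share_at_zero = sum(1 for a in actual if a == 0.0) / len(actual)
mean_desired = sum(desired) / len(desired)
mean_actual = sum(actual) / len(actual)

print(round(share_at_zero, 2))   # a large mass stuck at exactly zero
print(mean_actual > mean_desired)  # realized wage growth exceeds desired
```

The gap between desired and realized wage changes is one way to see why inflation can stay sticky at low levels even with a large output gap: the cuts that would drag measured wage growth down never happen.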

Time for me to catch a plane ...

Wednesday, September 25, 2013

How Bad Data Warped the Picture of the Jobs Recovery

Matt O'Brien:

How Bad Data Warped Everything We Thought We Knew About the Jobs Recovery: You know something is really boring when economists say it is.
That's what I thought to myself when the economists at the Brookings Institution's Panel on Economic Activity said only the "serious" ones would stick around for the last paper on seasonal adjustmentzzzzzzz...
... but a funny thing happened on the way to catching up on sleep. It turns out seasonal adjustments are really interesting! They explain why, ever since Lehmangeddon, the economy has looked like it's speeding up in the winter and slowing down in the summer.
In other words, everything you've read about "Recovery Winter" the past few winters has just been a statistical artifact of naïve seasonal adjustments. Oops. ...
The BLS only looks at the past 3 years to figure out what a "typical" September (or October or November, etc.) looks like. So, if there's, say, a once-in-three-generations financial crisis in the fall, it could throw off the seasonal adjustments for quite awhile. Which is, of course, exactly what happened. ...
And that messed things up for years. Because the BLS's model thought the job losses from the financial crisis were just from winter, it thought those kind of job losses would happen every winter. And, like any good seasonal model, it tried to smooth them out. So it added jobs it shouldn't have to future winters to make up for what it expected would be big seasonal job losses. And it subtracted jobs it shouldn't have from the summer to do so. ...
Now, the one bit of good news here is this effect has already faded away for the most part. Remember, the BLS only looks back at the past 3 years of data when it comes up with its seasonal adjustments -- so the Lehman panic has fallen out of the sample.
Here are two words we should retire: Recovery Winter. It was never a thing. The economy wasn't actually accelerating when the days got shorter, nor was it decelerating when the days got longer. ... The BLS can, and should, do better.
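
The mechanism O'Brien describes can be sketched in a few lines. Here a month's "typical" value is just the average of that same month over the previous three years (a crude stand-in for the BLS procedure), and the job numbers are invented:

```python
# Toy moving-window seasonal adjustment: a month's "typical" value is the
# average of that same month over the previous three years, and the
# adjusted figure is the raw value minus the typical one.

def adjusted(history, this_month_raw):
    """Seasonally adjusted value given the last three same-month raw values."""
    typical = sum(history[-3:]) / 3
    return this_month_raw - typical

# Three normal winters (-100 thousand jobs each), then a crisis winter.
winters = [-100, -100, -100, -600]

# The next winter is back to normal (-100), but the window now contains
# the crisis, so "typical" winter is (-100 - 100 - 600) / 3 = -266.7.
post_crisis = adjusted(winters[1:], -100)
print(post_crisis)  # ≈ +166.7: a perfectly normal winter reads as a boom
```

Once the crisis winter ages out of the three-year window, the distortion disappears, which is exactly the fading the article reports.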

Wednesday, September 18, 2013

Against ‘Blackboard Economics’

This is from Vox EU:

Finding his own way: Ronald Coase (1910-2013), by Steven Medema, Vox EU: Ronald Coase’s contributions to economics were much broader than most economists recognize. His work was characterized by a rejection of ‘blackboard economics’ in favor of detailed case studies and a comparative analysis of real-world institutions. This column argues that the ‘Coase theorem’ as commonly understood is in fact antithetical to Coase’s approach to economics.
...
Against ‘blackboard economics’
Coase’s criticisms of the theory of economic policy were part of a larger critique of what he often referred to as ‘blackboard economics’ – an economics where curves are shifted and equations are manipulated, with little attention to the correspondence between the theory and the real world, or to the institutions that might bear on the analysis. A similar set of concerns led to his skepticism about the application of economic analysis beyond its traditional boundaries. Contrary to popular misperception, Coase had precious little interest in the economic analysis of law. Instead, Coase’s ‘law and economics’ was concerned with how law affected the functioning of the economic system.
It is ironic, then, that the idea most closely associated with Coase, the ‘Coase theorem’, is in many respects the height of ‘blackboard economics’ and a cornerstone of the economic analysis of law. Being misunderstood was something of a hallmark of Coase’s career, as he pointed out on any number of occasions. We should all be so fortunate.

Tuesday, August 27, 2013

The Real Trouble With Economics: Sociology

Paul Krugman:

The Real Trouble With Economics: I’m a bit behind the curve in commenting on the Rosenberg-Curtain piece on economics as a non-science. What do I think of their thesis?

Well, I’m sorry to say that they’ve gotten it almost all wrong. Only “almost”: they’re entirely right that economics isn’t behaving like a science, and economists – macroeconomists, anyway – definitely aren’t behaving like scientists. But they misunderstand the nature of the failure, and for that matter the nature of such successes as we’re having....

It’s true that few economists predicted the onset of crisis. Once crisis struck, however, basic macroeconomic models did a very good job in key respects — in particular, they did much better than people who relied on their intuitive feelings. The intuitionists — remember, Alan Greenspan was supposed to be famously able to sense the economy’s pulse — insisted that budget deficits would send interest rates soaring, that the expansion of the Fed’s balance sheet would be inflationary, that fiscal austerity would strengthen economies through “confidence”. Meanwhile, wonks who relied on suitably interpreted IS-LM confidently declared that all this intuition, based on experiences in a different environment, would prove wrong — and they were right. From my point of view, these past 5 years have been a triumph for and vindication of economic modeling.

Oh, and it would be a real tragedy if the takeaway from recent events becomes that you should listen to impressive-looking guys with good tailors who stroke their chins and sound wise, and ignore the nerds; the nerds have been mostly right, while the Very Serious People have been wrong every step of the way.

Yet obviously something is deeply wrong with economics. While economists using textbook macro models got things mostly and impressively right, many famous economists refused to use those models — in fact, they made it clear in discussion that they didn’t understand points that had been worked out generations ago. Moreover, it’s hard to find any economists who changed their minds when their predictions, say of sharply higher inflation, turned out wrong. ...

So, let’s grant that economics as practiced doesn’t look like a science. But that’s not because the subject is inherently unsuited to the scientific method. Sure, it’s highly imperfect — it’s a complex area, and our understanding is in its early stages. And sure, the economy itself changes over time, so that what was true 75 years ago may not be true today — although what really impresses you if you study macro, in particular, is the continuity, so that Bagehot and Wicksell and Irving Fisher and, of course, Keynes remain quite relevant today.

No, the problem lies not so much in the inherent unsuitability of economics for scientific thinking as in the sociology of the economics profession — a profession that somehow, at least in macro, has ceased rewarding research that produces successful predictions and instead rewards research that fits preconceptions and uses hard math.

Why has the sociology of economics gone so wrong? I’m not completely sure — and I’ll reserve my random thoughts for another occasion.
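
The "suitably interpreted IS-LM" reasoning Krugman credits can be sketched in miniature. In the linear toy model below (all coefficients invented), monetary expansion lowers rates and raises output in normal times, but once the zero lower bound binds, further expansion moves nothing — the no-inflation-surge prediction:

```python
def islm(A, M, b=1.0, h=1.0):
    """Equilibrium (output, rate) in a linear IS-LM with a zero lower bound.
    IS: Y = A - b*r.  LM: r = (Y - M)/h, floored at r = 0."""
    Y = (h * A + b * M) / (h + b)   # interior solution
    r = (Y - M) / h
    if r < 0:                        # zero lower bound binds
        return float(A), 0.0
    return Y, r

# Normal times: more money lowers rates and raises output.
print(islm(A=100, M=80))   # (90.0, 10.0)
print(islm(A=100, M=90))   # (95.0, 5.0)

# Liquidity trap: expanding the money supply further leaves output and
# rates unchanged, so no interest-rate spike and no inflationary surge.
print(islm(A=100, M=120))  # (100.0, 0.0)
print(islm(A=100, M=200))  # (100.0, 0.0)
```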

I talked about the problem with the sociology of economics awhile back -- this is from a post in August, 2009:

In The Economist, Robert Lucas responds to recent criticism of macroeconomics ("In Defense of the Dismal Science"). Here's my entry at Free Exchange's Robert Lucas Roundtable in response to his essay:

Lucas roundtable: Ask the right questions, by Mark Thoma: In his essay, Robert Lucas defends macroeconomics against the charge that it is "valueless, even harmful", and that the tools economists use are "spectacularly useless".

I agree that the analytical tools economists use are not the problem. We cannot fully understand how the economy works without employing models of some sort, and we cannot build coherent models without using analytic tools such as mathematics. Some of these tools are very complex, but there is nothing wrong with sophistication so long as sophistication itself does not become the main goal, and sophistication is not used as a barrier to entry into the theorist's club rather than an analytical device to understand the world.

But all the tools in the world are useless if we lack the imagination needed to build the right models. Models are built to answer specific questions. When a theorist builds a model, it is an attempt to highlight the features of the world the theorist believes are the most important for the question at hand. For example, a map is a model of the real world, and sometimes I want a road map to help me find my way to my destination, but other times I might need a map showing crop production, or a map showing underground pipes and electrical lines. It all depends on the question I want to answer. If we try to make one map that answers every possible question we could ever ask of maps, it would be so cluttered with detail it would be useless, so we necessarily abstract from real world detail in order to highlight the essential elements needed to answer the question we have posed. The same is true for macroeconomic models.

But we have to ask the right questions before we can build the right models.

The problem wasn't the tools that macroeconomists use, it was the questions that we asked. The major debates in macroeconomics had nothing to do with the possibility of bubbles causing a financial system meltdown. That's not to say that there weren't models here and there that touched upon these questions, but the main focus of macroeconomic research was elsewhere. ...

The interesting question to me, then, is why we failed to ask the right questions. For example,... why policymakers didn't take the possibility of a major meltdown seriously. Why didn't they deliver forecasts conditional on a crisis occurring? Why didn't they ask this question of the model? Why did we only get forecasts conditional on no crisis? And also, why was the main factor that allowed the crisis to spread, the interconnectedness of financial markets, missed?

It was because policymakers couldn't and didn't take seriously the possibility that a crisis and meltdown could occur. And even if they had seriously considered the possibility of a meltdown, the models most people were using were not built to be informative on this question. It simply wasn't a question that was taken seriously by the mainstream.

Why did we, for the most part, fail to ask the right questions? Was it lack of imagination, was it the sociology within the profession, the concentration of power over what research gets highlighted, the inadequacy of the tools we brought to the problem, the fact that nobody will ever be able to predict these types of events, or something else?

It wasn't the tools, and it wasn't lack of imagination. As Brad DeLong points out, the voices were there—he points to Michael Mussa for one—but those voices were not heard. Nobody listened even though some people did see it coming. So I am more inclined to cite the sociology within the profession or the concentration of power as the main factors that caused us to dismiss these voices.

And here I think that thought leaders such as Robert Lucas and others who openly ridiculed models they disagreed with have questions they should ask themselves (e.g. Mr Lucas saying "At research seminars, people don’t take Keynesian theorizing seriously anymore; the audience starts to whisper and giggle to one another", or more recently "These are kind of schlock economics"). When someone as notable and respected as Robert Lucas makes fun of an entire line of inquiry, it influences whole generations of economists away from asking certain types of questions, some of which turned out to be important. Why was it necessary for the major leaders in macroeconomics to shut down alternative lines of inquiry through ridicule and other means rather than simply citing evidence in support of their positions? What were they afraid of? The goal is to find the truth, not win fame and fortune by dominating the debate.

We need to take a close look at how the sociology of our profession led to an outcome where people were made to feel embarrassed for even asking certain types of questions. People will always be passionate in defense of their life's work, so it's not the rhetoric itself that is of concern; the problem comes when factors such as ideology or control of journals and other outlets for the dissemination of research stand in the way of promising alternative lines of inquiry.

I don't know for sure the extent to which the ability of a small number of people in the field to control the academic discourse led to a concentration of power that stood in the way of alternative lines of investigation, or the extent to which the ideology that market prices always tend to move toward their long-run equilibrium values caused us to ignore voices that foresaw the developing bubble and coming crisis. But something caused most of us to ask the wrong questions, and to dismiss the people who got it right, and I think one of our first orders of business is to understand how and why that happened.

I think the structure of journals, which concentrates power within the profession, also influences the sociology of the profession (and not in a good way).

Wednesday, August 07, 2013

(1) Numerical Methods, (2) Pigou, Samuelson, Solow, & Friedman vs Keynes and Krugman

Robert Waldmann:

...Another thing, what about numerical methods?  Macro was totally taken over by computer simulations. This liberated it (so that anything could happen) but also ruined the fun. When computers were new and scary, simulation based macro was scary and high status. When everyone can do it, setting up a model and simulating just doesn't demonstrate brains as effectively as finding one of the two or three special cases with closed form solutions and then presenting them. Also simulating unrealistic models is really pointless. People end up staring at the computer output and trying to think up stories which explain what went on in the computer. If one is reduced to that, one might as well look at real data. Models which can't be solved don't clarify thought. Since they also don't fit the data, they are really truly madly useless.

And one more:

Pigou, Samuelson, Solow, & Friedman vs Keynes and Krugman: Thoma Bait
I might as well be honest. I am posting this here rather than at rjwaldmann.blogspot.com, because I think it is the sort of thing to which Mark Thoma links and my standing among the bears is based entirely on the fact that Thoma occasionally links to me.
I think that Pigou, Samuelson, Solow and Friedman all assumed that the marginal propensity to consume out of wealth must, on average, be higher for nominal creditors than for nominal debtors. I think this is a gross error which shows how the representative consumer (invented by Samuelson) had done devastating damage already by 1960.
The topic is the Pigou effect versus the liquidity trap. ...

Guess I should send you there to read it.

Friday, July 12, 2013

'Revolutionizing Economics by Evolutionizing it'

Apparently, economics is about to be revolutionized:

Revolutionizing Economics by Evolutionizing it, by Jag Bhalla, Scientific American: Economics will soon be revolutionized, by being evolutionized, again. This time with fewer unnaturally selective ideas. Scholars, like those working with the Evolution Institute, are adapting the assumptions, methods, and goals of economics to better fit empirically observed humans. Our survival foreseeably requires it. ...

I see these stories periodically, sometimes it's physicists, this time it's evolutionary theorists, but somehow the revolution never comes. Perhaps it's because many of them don't actually understand economics. For example:

The appetites and capacities of everything in biology are physiologically limited. Beyond some satiety level, more isn’t better—it’s often unhealthy and counterproductive. By contrast, self-interest in economics is considered limitless. Every extra dollar gained is better. But that’s a numerical illusion. An extra dollar’s benefit varies by circumstances (an idea used only peripherally, e.g. in diminishing returns). Unlimitedness is deeply unnatural. It ignores the foreseeable capacities of systems we depend on.

Diminishing marginal utility is only a peripheral idea? Microeconomists have never heard of or considered satiety and bliss points? And apparently we've never heard of market failure either:

...economic self-interest has become equally unintelligent. It’s often self-undermining, as in Prisoner’s Dilemma games, and in the global climate “tragedy of the commons.”

Yes, we just ignore the tragedy of the commons and have done no work at all to model it theoretically, and then figure out how best to regulate these situations.
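
For the record, the situations the article says economists ignore are precisely what the Prisoner's Dilemma formalizes, and it has been a workhorse of the field for decades. A minimal version, with invented payoffs in the textbook ordering T > R > P > S:

```python
# A minimal Prisoner's Dilemma: self-interested best replies lead both
# players to an outcome both dislike. Payoff numbers are illustrative.

PAYOFF = {   # (my move, their move) -> my payoff
    ("C", "C"): 3,  # mutual cooperation (R)
    ("C", "D"): 0,  # sucker's payoff (S)
    ("D", "C"): 5,  # temptation (T)
    ("D", "D"): 1,  # mutual defection (P)
}

def best_reply(their_move):
    """My payoff-maximizing move, given the other player's move."""
    return max("CD", key=lambda me: PAYOFF[(me, their_move)])

# Defection is a dominant strategy...
print(best_reply("C"), best_reply("D"))  # D D
# ...yet mutual defection leaves both worse off than mutual cooperation:
print(PAYOFF[("D", "D")], "<", PAYOFF[("C", "C")])
```

The commons version is the n-player analogue: each player's dominant move degrades the shared resource even though everyone prefers mutual restraint, which is why the regulation question gets so much attention.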

I am open to new ideas, and I don't have problems with statements like this:

 Limits, intelligent foresight, self-deficiency, interdependent coordination, and relational rationality are in our nature. They should all be in our economics. And in our rational self-interest, rightly understood. Economics needn’t be as dumb as trees or as self-harming as over-hunters.

But acting like we have learned nothing from behavioral economics, or ignored it entirely, is wrong. Critics should at least understand the basics.
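
On the "every extra dollar" point in particular, the basics amount to one line. With log utility, a standard illustrative choice (not the only one economists use), the value of an extra dollar falls rapidly with wealth:

```python
import math

def marginal_utility(wealth, delta=1.0):
    """Utility gained from one more dollar under log utility, u(w) = ln(w)."""
    return math.log(wealth + delta) - math.log(wealth)

# The same extra dollar is worth far more to the poor than to the rich:
print(marginal_utility(10))     # ~0.095
print(marginal_utility(100))    # ~0.0100
print(marginal_utility(10000))  # ~0.0001
```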

We cannot build one grand model to fit all situations. It would be intractable. Instead, we build models to answer specific questions. The trick is knowing when to use a particular model. Often -- most of the time -- the standard microeconomic model with rational agents, pure competition, no bliss points, and so on does pretty well at predicting behavior. That's why it's still around. But we know very well that this model doesn't always work (and I am distinguishing between micro and macro here -- the focus in the article seems to be on micro issues). Other times we may need to use a model with a bliss point, a departure from textbook rationality, or a market failure of some type. We have a pretty good idea about how to do that (though I think we fall down a bit in knowing precisely when to alter the rationality assumption to incorporate what we have learned from behavioral economics).

Finally, contrary to the impression the article gives, we don't ignore evolution, as this 1996 article from Paul Krugman, and many more by many others, will attest:

What Economists Can Learn from Evolutionary Theorists, European Association for Evolutionary Political Economy, Paul Krugman, Nov. 1996: Good morning. I am both honored and a bit nervous to be speaking to a group devoted to the idea of evolutionary political economy. As you probably know, I am not exactly an evolutionary economist. I like to think that I am more open-minded about alternative approaches to economics than most, but I am basically a maximization-and-equilibrium kind of guy. Indeed, I am quite fanatical about defending the relevance of standard economic models in many situations.
Why, then, am I here? Well, partly because my research work has taken me to some of the edges of the neoclassical paradigm. When you are concerned, as I have been, with situations in which increasing returns are crucial, you must drop the assumption of perfect competition; you are also forced to abandon the belief that market outcomes are necessarily optimal, or indeed that the market can be said to maximize anything. You can still believe in maximizing individuals and some kind of equilibrium, but the complexity of the situations in which your imaginary agents find themselves often obliges you - and presumably them - to represent their behavior by some kind of ad hoc rule rather than as the outcome of a carefully specified maximum problem. And you are often driven by sheer force of modeling necessity to think of the economy as having at least vaguely "evolutionary" dynamics, in which initial conditions and accidents along the way may determine where you end up. Some of you may have read my work on economic geography; I only found out after I had worked on the models for some time that I was using "replicator dynamics" to discuss the problem of economic change.
But there is another reason I am here. I am an economist, but I am also what we might call an evolution groupie. That is, I spend a great deal of time reading what evolutionary biologists write - not only the more popular volumes but the textbooks and, most recently, some of the professional articles. I have even tried to talk to some of the biologists, which in this age of narrow specialization is a major effort. My interest in evolution is partly a recreation; but it is also true that I find in evolutionary biology a useful vantage point from which to view my own specialty in a new perspective. In a way, the point is that both the parallels and the differences between economics and evolutionary biology help me at least to understand what I am doing when I do economics - to get, to be pompous about it, a new perspective on the epistemology of the two fields.
I am sure that I am not unique either in my interest in biology or in my feeling that we economists have something to learn from it. Indeed, I am sure that many people in this room know far more about evolutionary theory than I do. But I may have one special distinction. Most economists who try to apply evolutionary concepts start from some deep dissatisfaction with economics as it is. I won't say that I am entirely happy with the state of economics. But let us be honest: I have done very well within the world of conventional economics. I have pushed the envelope, but not broken it, and have received very widespread acceptance for my ideas. What this means is that I may have more sympathy for standard economics than most of you. My criticisms are those of someone who loves the field and has seen that affection repaid. I don't know if that makes me morally better or worse than someone who criticizes from outside, but anyway it makes me different.
Anyway, enough preliminaries. ...
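
As an aside, the "replicator dynamics" Krugman mentions stumbling into are simple to write down: a strategy's population share grows in proportion to its fitness relative to the population average. The two-strategy fitness functions below are invented for illustration:

```python
# A minimal discrete-time replicator dynamic for two strategies, A and B.

def replicator_step(x, f_a, f_b):
    """New share of strategy A, given its share x and fitnesses f_a, f_b."""
    avg = x * f_a + (1 - x) * f_b
    return x * f_a / avg

# Coordination-style fitnesses: each strategy does better the more common
# it is, so a small initial advantage compounds over time.
x = 0.55
for _ in range(50):
    x = replicator_step(x, f_a=1 + x, f_b=1 + (1 - x))
print(round(x, 3))  # the slightly-larger strategy ends up dominating
```

With these fitnesses, whichever strategy starts slightly ahead takes over — the "initial conditions and accidents along the way" path dependence Krugman describes in his geography work.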

Continue reading "'Revolutionizing Economics by Evolutionizing it'" »

Tuesday, July 02, 2013

Economics Is Better (In Some Ways) Than It Used To Be

Frances Woolley (there is also a discussion of each point and counterpoint):

Economics is Better (in some ways) Than it Used to Be: The discipline of economics is in better shape today than it was in the 1970s, 80s and 90s. Here are five reasons why:
1. Now, economists test their theories. ...
2. Now, economists are better at establishing causality. ...
3. Now, economics is (somewhat) more open to a range of ideologies and methodologies ...
4. Now, economics is engaging ...
5. Now, economic research is (in some ways) more democratic. ...
Still, every silver lining has a cloud. Each one of these positive trends has a not-so-positive flip side.
1*. Economists test their theories. But only some test results get published. ...
2*. Causality isn't everything. Correlations are interesting too. ...
3*. Economics is not that open to methodological diversity. ...
4*. Public engagement makes the senior administration happy, but no one ever got tenure by blogging. ...
5*. Old barriers have been replaced by new ones. ...
So the profession is far from perfect. But it is better than it used to be. ...