Category Archive for: Methodology

Thursday, October 23, 2014

'How Mainstream Economic Thinking Imperils America'

Tuesday, October 21, 2014

'Why our Happiness and Satisfaction Should Replace GDP in Policy Making'

Richard Easterlin:

Why our happiness and satisfaction should replace GDP in policy making, The Conversation: Since 1990, GDP per person in China has doubled and then redoubled. With average incomes multiplying fourfold in little more than two decades, one might expect many of the Chinese people to be dancing in the streets. Yet, when asked about their satisfaction with life, they are, if anything, less satisfied than in 1990.
The disparity indicated by these two measures of human progress, Gross Domestic Product and Subjective Well Being (SWB), makes pretty plain the issue at hand. GDP, the well-being indicator commonly used in policy circles, signals an outstanding advance in China. SWB, as indicated by self-reports of overall satisfaction with life, suggests, if anything, a worsening of people’s lives. Which measure is a more meaningful index of well-being? Which is a better guide for public policy?
A few decades ago, economists – the most influential social scientists shaping public policy – would have said that the SWB result for China demonstrates the meaninglessness of self-reports of well-being. Economic historian Deirdre McCloskey, writing in 1983, aptly put the typical attitude of economists this way:
Unlike other social scientists, economists are extremely hostile towards questionnaires and other self-descriptions… One can literally get an audience of economists to laugh out loud by proposing ironically to send out a questionnaire on some disputed economic point. Economists… are unthinkingly committed to the notion that only the externally observable behaviour of actors is admissible evidence in arguments concerning economics.
Culture clash
But times have changed. A commission established by the then French president, Nicolas Sarkozy in 2008 and charged with recommending alternatives to GDP as a measure of progress, stated bluntly (my emphasis):
Research has shown that it is possible to collect meaningful and reliable data on subjective as well as objective well-being … The types of questions that have proved their value within small-scale and unofficial surveys should be included in larger-scale surveys undertaken by official statistical offices.
This 25-member commission was composed almost entirely of economists, five of whom had won the Nobel Prize in economics. Two of the five co-chaired the commission.
These days the tendency with new measures of our well-being – such as life satisfaction and happiness – is to propose that they be used as a complement to GDP. But what is one to do when confronted with such a stark difference between SWB and GDP, as in China? What should one say? People in China are better off than ever before, people are no better off than before, or “it depends”?
Commonalities
To decide this issue, we need to delve deeper into what has happened in China. When we do that, the superiority of SWB becomes apparent: it can capture the multiple dimensions of people’s lives. GDP, in contrast, focuses exclusively on the output of material goods.
People everywhere in the world spend most of their time trying to earn a living and raise a healthy family. The easier it is for them to do this, the happier they are. This is the lesson of a 1965 classic, The Pattern of Human Concerns, by public opinion survey specialist Hadley Cantril. In the 12 countries – rich and poor, communist and non-communist – that Cantril surveyed, the same highly personal concerns dominated determinants of happiness: standard of living, family, health and work. Broad social issues such as inequality, discrimination and international relations, were rarely mentioned.
Urban China in 1990 was essentially a mini-welfare state. Workers had what has been called an “iron rice bowl” – they were assured of jobs, housing, medical services, pensions, childcare and jobs for their grown children.
With the coming of capitalism, and “restructuring” of state enterprises, the iron rice bowl was smashed and these assurances went out the window. Unemployment soared and the social safety net disappeared. The security that workers had enjoyed was gone and the result was that life satisfaction plummeted, especially among the less-educated, lower-income segments of the population.
Although working conditions have improved somewhat in the past decade, the shortfall from the security enjoyed in 1990 remains substantial. The positive effect on well-being of rising incomes has been negated by rapidly rising material aspirations and the emergence of urgent concerns about income and job security, family, and health.
The case to replace
Examples of the disparity between SWB and GDP as measures of well-being could easily be multiplied. Since the early 1970s real GDP per capita in the US has doubled, but SWB has, if anything, declined. In international comparisons, Costa Rica’s per capita GDP is a quarter of that in the US, but Costa Ricans are as happy or happier than Americans when we look at SWB data. Clearly there is more to people’s well-being than the output of goods.
There are some simple, yet powerful arguments to say that we should use SWB in preference to GDP, not just as a complement. For a start, those SWB measures like happiness or life satisfaction are more comprehensive than GDP. They take into account the effect on well-being not just of material living conditions, but of the wide range of concerns in our lives.
It is also key that with SWB, the judgement of well-being is made by the individuals affected. GDP’s reliance on outside statistical “experts” to make inferences based on a measure they themselves construct looks deeply flawed when viewed in comparison. These judgements by outsiders also lie behind the growing number of multiple-item measures being put forth these days. An example is the United Nations’ Human Development Index (HDI) which attempts to combine data on GDP with indexes of education and life expectancy.
But people do not identify with measures like HDI (or GDP, of course) to anywhere near the extent that they do with straightforward questions of happiness and satisfaction with life. And crucially, these SWB measures offer each adult a vote and only one vote, whether they are rich or poor, sick or well, native or foreign-born. This is not to say that, as measures of well-being go, SWB is the last word, but clearly it comes closer to capturing what is actually happening to people’s lives than GDP ever will. The question is whether policy makers actually want to know.

Thursday, October 16, 2014

'Regret and Economic Decision-Making'

Here are the conclusions to Regret and economic decision-making:

Conclusions We are clearly a long way from fully understanding how people behave in dynamic contexts. But our experimental data and that of earlier studies (Lohrenz 2007) suggest that regret is a part of the decision process and should not be overlooked. From a theoretical perspective, our work shows that regret aversion and counterfactual thinking make subtle predictions about behaviour in settings where past events serve as benchmarks. They are most vividly illustrated in the investment context.
Our theoretical findings show that if regret is anticipated, investors may keep their hands off risky investments, such as stocks, and not enter the market in the first place. Thus, anticipated regret aversion acts like a surrogate for higher risk aversion.
In contrast, once people have invested, they become very attached to their investment. Moreover, the better past performance was, the higher their commitment, because losses loom larger. This leads the investor to ‘gamble for resurrection’. In our experimental data, we very often observe exactly this pattern.
This dichotomy between ex ante and ex post risk appetites can be harmful for investors. It leads investors and businesses to escalate their commitment because of the sunk costs in their investments. For example, many investors missed out on the 2009 stock market rally while buckling down in the crash in 2007/2008, reluctant to sell early. Similarly, people who quit their jobs and invested their savings in their own business often cannot, with a cold, clear eye, cut their losses and admit their business has failed.
Therefore, a better understanding of what motivates people to save and invest could enable us to help them avoid such mistakes, e.g. through educating people to set up clear budgets a priori or to impose a drop dead level for their losses. Such simple measures may help mitigate the effects of harmful emotional attachment and support individuals in making better decisions.
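
[As a toy illustration of the "drop dead level" idea mentioned above – a hypothetical, pre-committed loss limit of my own devising, not code from the paper – a minimal sketch might look like this:]

```python
# Minimal sketch of a pre-committed "drop dead" loss limit (illustrative only;
# the numbers, names, and rule are hypothetical, not taken from the study).

initial_stake = 10_000.0
drop_dead_loss = 0.20          # pre-committed exit once losses reach 20%

def should_exit(current_value: float) -> bool:
    """Return True once the pre-committed loss limit has been breached."""
    loss_fraction = (initial_stake - current_value) / initial_stake
    return loss_fraction >= drop_dead_loss

# A pre-committed investor exits here; a regret-averse investor attached to
# past performance might instead hold on and "gamble for resurrection".
print(should_exit(7_900.0))    # True: a 21% loss triggers the exit rule
```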

[This ("once people have invested, they become very attached to their investment" and cannot admit failure) includes investment in economic models and research (see previous post).]

Tuesday, October 14, 2014

Economics is Both Positive and Normative

Jean Tirole in the latest TSE Magazine:

Economics is a positive discipline as it aims to document and analyse individual and collective behaviours. It is also, and more importantly, a normative discipline as its main goal is to better the world through economic policies and recommendations.

Friday, September 26, 2014

'The New Classical Clique'

Paul Krugman continues the conversation on New Classical economics:

The New Classical Clique: Simon Wren-Lewis thinks some more about macroeconomics gone astray; Robert J. Waldmann weighs in. For those new to this conversation, the question is why starting in the 1970s much of academic macroeconomics was taken over by a school of thought that began by denying any useful role for policies to raise demand in a slump, and eventually coalesced around denial that the demand side of the economy has any role in causing slumps.
I was a grad student and then an assistant professor as this was happening, albeit doing international economics – and international macro went in a different direction, for reasons I’ll get to in a bit. So I have some sense of what was really going on. And while both Wren-Lewis and Waldmann hit on most of the main points, neither I think gets at the important role of personal self-interest. New classical macro was and still is many things – an ideological bludgeon against liberals, a showcase for fancy math, a haven for people who want some kind of intellectual purity in a messy world. But it’s also a self-promoting clique. ...

Wednesday, September 24, 2014

Where and When Macroeconomics Went Wrong

Simon Wren-Lewis:

Where macroeconomics went wrong: In my view, the answer is in the 1970/80s with the New Classical revolution (NCR). However I also think the new ideas that came with that revolution were progressive. I have defended rational expectations, I think intertemporal theory is the right place to start in thinking about consumption, and exploring the implications of time inconsistency is very important to macro policy, as well as many other areas of economics. I also think, along with nearly all macroeconomists, that the microfoundations approach to macro (DSGE models) is a progressive research strategy.
That is why discussion about these issues can become so confused. New Classical economics made academic macroeconomics take a number of big steps forward, but a couple of big steps backward at the same time. The clue to the backward steps comes from the name NCR. The research program was anti-Keynesian (hence New Classical), and it did not want microfounded macro to be an alternative to the then dominant existing methodology, it wanted to replace it (hence revolution). Because the revolution succeeded (although the victory over Keynesian ideas was temporary), generations of students were taught that Keynesian economics was out of date. They were not taught about the pros and cons of the old and new methodologies, but were taught that the old methodology was simply wrong. And that teaching was/is a problem because it itself is wrong. ...

Thursday, September 11, 2014

'Trapped in the "Dark Corners"'?

A small part of Brad DeLong's response to Olivier Blanchard. I posted a shortened version of Blanchard's argument a week or two ago:

Where Danger Lurks: Until the 2008 global financial crisis, mainstream U.S. macroeconomics had taken an increasingly benign view of economic fluctuations in output and employment. The crisis has made it clear that this view was wrong and that there is a need for a deep reassessment. ...
That small shocks could sometimes have large effects and, as a result, that things could turn really bad, was not completely ignored by economists. But such an outcome was thought to be a thing of the past that would not happen again, or at least not in advanced economies thanks to their sound economic policies. ... We all knew that there were “dark corners”—situations in which the economy could badly malfunction. But we thought we were far away from those corners, and could for the most part ignore them. ...
The main lesson of the crisis is that we were much closer to those dark corners than we thought—and the corners were even darker than we had thought too. ...
How should we modify our benchmark models—the so-called dynamic stochastic general equilibrium (DSGE) models...? The easy and uncontroversial part of the answer is that the DSGE models should be expanded to better recognize the role of the financial system—and this is happening. But should these models be able to describe how the economy behaves in the dark corners?
Let me offer a pragmatic answer. If macroeconomic policy and financial regulation are set in such a way as to maintain a healthy distance from dark corners, then our models that portray normal times may still be largely appropriate. Another class of economic models, aimed at measuring systemic risk, can be used to give warning signals that we are getting too close to dark corners, and that steps must be taken to reduce risk and increase distance. Trying to create a model that integrates normal times and systemic risks may be beyond the profession’s conceptual and technical reach at this stage.
The crisis has been immensely painful. But one of its silver linings has been to jolt macroeconomics and macroeconomic policy. The main policy lesson is a simple one: Stay away from dark corners.

And I responded:

That may be the best we can do for now (have separate models for normal times and "dark corners"), but an integrated model would be preferable. An integrated model would, for example, be better for conducting "policy and financial regulation ... to maintain a healthy distance from dark corners," and our aspirations ought to include models that can explain both normal and abnormal times. That may mean moving beyond the DSGE class of models, or perhaps the technical reach of DSGE models can be extended to incorporate the kinds of problems that can lead to Great Recessions, but we shouldn't be satisfied with models of normal times that cannot explain and anticipate major economic problems.

Here's part of Brad's response:

But… but… but… Macroeconomic policy and financial regulation are not set in such a way as to maintain a healthy distance from dark corners. We are still in a dark corner now. There is no sign of the 4% per year inflation target, the commitments to do what it takes via quantitative easing and rate guidance to attain it, or a fiscal policy that recognizes how the rules of the game are different for reserve currency printing sovereigns when r < n+g. Thus not only are we still in a dark corner, but there is every reason to believe that, should we get out, the sub-2% per year effective inflation targets of North Atlantic central banks and the inappropriate rhetoric and groupthink surrounding fiscal policy make it highly likely that we will soon get back into yet another dark corner. Blanchard’s pragmatic answer is thus the most unpragmatic thing imaginable: the “if” test fails, and so the “then” part of the argument seems to me to be simply inoperative. Perhaps on another planet in which North Atlantic central banks and governments aggressively pursued 6% per year nominal GDP growth targets, Blanchard’s answer would be “pragmatic”. But we are not on that planet, are we?

Moreover, even were we on Planet Pragmatic, it still seems to be wrong. Using current or any visible future DSGE models for forecasting and mainstream scenario planning makes no sense: the DSGE framework imposes restrictions on the allowable emergent properties of the aggregate time series that are routinely rejected at whatever level of frequentist statistical confidence one cares to specify. The right road is that of Christopher Sims: that of forecasting and scenario planning using relatively unstructured time-series methods that use rather than ignore the correlations in the recent historical data. And for policy evaluation? One should take the historical correlations and argue why reverse-causation and errors-in-variables lead them to underestimate or overestimate policy effects, and possibly get it right. One should not impose a structural DSGE model that identifies the effects of policies but certainly gets it wrong. Sims won that argument. Why do so few people recognize his victory?

Blanchard continues:

Another class of economic models, aimed at measuring systemic risk, can be used to give warning signals that we are getting too close to dark corners, and that steps must be taken to reduce risk and increase distance. Trying to create a model that integrates normal times and systemic risks may be beyond the profession’s conceptual and technical reach at this stage…

For the second task, the question is: whose models of tail risk based on what traditions get to count in the tail risks discussion?

And missing is the third task: understanding what Paul Krugman calls the “Dark Age of macroeconomics”, that jahiliyyah that descended on so much of the economic research, economic policy analysis, and economic policymaking communities starting in the fall of 2007, and in which the center of gravity of our economic policymakers still dwells.

Sunday, August 31, 2014

'Where Danger Lurks'

Olivier Blanchard (a much shortened version of his arguments; the entire piece is worth reading):

Where Danger Lurks: Until the 2008 global financial crisis, mainstream U.S. macroeconomics had taken an increasingly benign view of economic fluctuations in output and employment. The crisis has made it clear that this view was wrong and that there is a need for a deep reassessment. ...
That small shocks could sometimes have large effects and, as a result, that things could turn really bad, was not completely ignored by economists. But such an outcome was thought to be a thing of the past that would not happen again, or at least not in advanced economies thanks to their sound economic policies. ... We all knew that there were “dark corners”—situations in which the economy could badly malfunction. But we thought we were far away from those corners, and could for the most part ignore them. ...
The main lesson of the crisis is that we were much closer to those dark corners than we thought—and the corners were even darker than we had thought too. ...
How should we modify our benchmark models—the so-called dynamic stochastic general equilibrium (DSGE) models...? The easy and uncontroversial part of the answer is that the DSGE models should be expanded to better recognize the role of the financial system—and this is happening. But should these models be able to describe how the economy behaves in the dark corners?
Let me offer a pragmatic answer. If macroeconomic policy and financial regulation are set in such a way as to maintain a healthy distance from dark corners, then our models that portray normal times may still be largely appropriate. Another class of economic models, aimed at measuring systemic risk, can be used to give warning signals that we are getting too close to dark corners, and that steps must be taken to reduce risk and increase distance. Trying to create a model that integrates normal times and systemic risks may be beyond the profession’s conceptual and technical reach at this stage.
The crisis has been immensely painful. But one of its silver linings has been to jolt macroeconomics and macroeconomic policy. The main policy lesson is a simple one: Stay away from dark corners.

That may be the best we can do for now (have separate models for normal times and "dark corners"), but an integrated model would be preferable. An integrated model would, for example, be better for conducting "policy and financial regulation ... to maintain a healthy distance from dark corners," and our aspirations ought to include models that can explain both normal and abnormal times. That may mean moving beyond the DSGE class of models, or perhaps the technical reach of DSGE models can be extended to incorporate the kinds of problems that can lead to Great Recessions, but we shouldn't be satisfied with models of normal times that cannot explain and anticipate major economic problems.

Tuesday, August 26, 2014

A Reason to Question the Official Unemployment Rate

[Still on the road ... three quick ones before another long day of driving.]

David Leonhardt:

A New Reason to Question the Official Unemployment Rate: ...A new academic paper suggests that the unemployment rate appears to have become less accurate over the last two decades, in part because of this rise in nonresponse. In particular, there seems to have been an increase in the number of people who once would have qualified as officially unemployed and today are considered out of the labor force, neither working nor looking for work.
The trend obviously matters for its own sake: It suggests that the official unemployment rate – 6.2 percent in July – understates the extent of economic pain in the country today. ... The new paper is a reminder that the unemployment rate deserves less attention than it often receives.
Yet the research also relates to a larger phenomenon. The declining response rate to surveys of almost all kinds is among the biggest problems in the social sciences. ...
Why are people less willing to respond? The rise of caller ID and the decline of landlines play a role. But they’re not the only reasons. Americans’ trust in institutions – including government, the media, churches, banks, labor unions and schools – has fallen in recent decades. People seem more dubious of a survey’s purpose and more worried about intrusions into their privacy than in the past.
“People are skeptical – Is this a real survey? What are they asking me?” Francis Horvath, of the Labor Department, says. ...

Tuesday, August 19, 2014

The Agent-Based Method

Rajiv Sethi:

The Agent-Based Method: It's nice to see some attention being paid to agent-based computational models on economics blogs, but Chris House has managed to misrepresent the methodology so completely that his post is likely to do more harm than good. 

In comparing the agent-based method to the more standard dynamic stochastic general equilibrium (DSGE) approach, House begins as follows:

Probably the most important distinguishing feature is that, in an ABM, the interactions are governed by rules of behavior that the modeler simply encodes directly into the system individuals who populate the environment.

So far so good, although I would not have used the qualifier "simply", since encoded rules can be highly complex. For instance, an ABM that seeks to describe the trading process in an asset market may have multiple participant types (liquidity, information, and high-frequency traders for instance) and some of these may be using extremely sophisticated strategies.

How does this approach compare with DSGE models? House argues that the key difference lies in assumptions about rationality and self-interest:

People who write down DSGE models don’t do that. Instead, they make assumptions on what people want. They also place assumptions on the constraints people face. Based on the combination of goals and constraints, the behavior is derived. The reason that economists set up their theories this way – by making assumptions about goals and then drawing conclusions about behavior – is that they are following in the central tradition of all of economics, namely that allocations and decisions and choices are guided by self-interest. This goes all the way back to Adam Smith and it’s the organizing philosophy of all economics. Decisions and actions in such an environment are all made with an eye towards achieving some goal or some objective. For consumers this is typically utility maximization – a purely subjective assessment of well-being.  For firms, the objective is typically profit maximization. This is exactly where rationality enters into economics. Rationality means that the “agents” that inhabit an economic system make choices based on their own preferences.

This, to say the least, is grossly misleading. The rules encoded in an ABM could easily specify what individuals want and then proceed from there. For instance, we could start from the premise that our high-frequency traders want to maximize profits. They can only do this by submitting orders of various types, the consequences of which will depend on the orders placed by others. Each agent can have a highly sophisticated strategy that maps historical data, including the current order book, into new orders. The strategy can be sensitive to beliefs about the stream of income that will be derived from ownership of the asset over a given horizon, and may also be sensitive to beliefs about the strategies in use by others. Agents can be as sophisticated and forward-looking in their pursuit of self-interest in an ABM as you care to make them; they can even be set up to make choices based on solutions to dynamic programming problems, provided that these are based on private beliefs about the future that change endogenously over time. 

What you cannot have in an ABM is the assumption that, from the outset, individual plans are mutually consistent. That is, you cannot simply assume that the economy is tracing out an equilibrium path. The agent-based approach is at heart a model of disequilibrium dynamics, in which the mutual consistency of plans, if it arises at all, has to do so endogenously through a clearly specified adjustment process. This is the key difference between the ABM and DSGE approaches, and it's right there in the acronym of the latter.

A typical (though not universal) feature of agent-based models is an evolutionary process that allows successful strategies to proliferate over time at the expense of less successful ones. Since success itself is frequency dependent---the payoffs to a strategy depend on the prevailing distribution of strategies in the population---we have strong feedback between behavior and environment. Returning to the example of trading, an arbitrage-based strategy may be highly profitable when rare but much less so when prevalent. This rich feedback between environment and behavior, with the distribution of strategies determining the environment faced by each, and the payoffs to each strategy determining changes in their composition, is a fundamental feature of agent-based models. In failing to understand this, House makes claims that are close to being the opposite of the truth: 

Ironically, eliminating rational behavior also eliminates an important source of feedback – namely the feedback from the environment to behavior.  This type of two-way feedback is prevalent in economics and it’s why equilibria of economic models are often the solutions to fixed-point mappings. Agents make choices based on the features of the economy.  The features of the economy in turn depend on the choices of the agents. This gives us a circularity which needs to be resolved in standard models. This circularity is cut in the ABMs however since the choice functions do not depend on the environment. This is somewhat ironic since many of the critics of economics stress such feedback loops as important mechanisms.

It is absolutely true that dynamics in agent-based models do not require the computation of fixed points, but this is a strength rather than a weakness, and has nothing to do with the absence of feedback effects. These effects arise dynamically in calendar time, not through some mystical process by which coordination is instantaneously achieved and continuously maintained. 
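
[To make the frequency-dependent feedback described above concrete, here is a toy sketch – my own illustration, not code from Sethi's work – of two trading strategies whose payoffs depend on how common they are, with an imitation dynamic adjusting the mix over time:]

```python
# Toy agent-based sketch: an "arbitrage" strategy that is profitable when rare
# but crowded when common, competing with a passive strategy. The strategy mix
# adjusts endogenously through imitation; no equilibrium is imposed up front.
# All names and numbers here are illustrative assumptions.

def payoff_arb(share_arb: float) -> float:
    # Arbitrage pays well when few use it, poorly when the trade is crowded.
    return 1.0 - 2.0 * share_arb

def payoff_passive(share_arb: float) -> float:
    # The passive strategy earns a flat baseline payoff.
    return 0.2

share_arb = 0.1          # initial share of agents using the arbitrage strategy
switch_rate = 0.05       # fraction of agents willing to switch each period

for t in range(200):
    pa, pp = payoff_arb(share_arb), payoff_passive(share_arb)
    # Imitation: agents drift toward whichever strategy is currently more
    # profitable, so behavior changes the environment and vice versa.
    if pa > pp:
        share_arb += switch_rate * (1.0 - share_arb)
    else:
        share_arb -= switch_rate * share_arb
    if t % 50 == 0:
        print(f"t={t:3d}  share_arb={share_arb:.3f}  payoff_arb={pa:.3f}")

# The strategy mix settles near the point where the two payoffs equalize
# (about 0.4 here), but that consistency emerges from the adjustment process
# in calendar time rather than being assumed at the outset.
```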

It's worth thinking about how the learning literature in macroeconomics, dating back to Marcet and Sargent and substantially advanced by Evans and Honkapohja, fits into this schema. Such learning models drop the assumption that beliefs continuously satisfy mutual consistency, and therefore take a small step towards the ABM approach. But it really is a small step, since a great deal of coordination continues to be assumed. For instance, in the canonical learning model, there is a parameter about which learning occurs, and the system is self-referential in that beliefs about the parameter determine its realized value. This allows for the possibility that individuals may hold incorrect beliefs, but limits quite severely---and more importantly, exogenously---the structure of such errors. This is done for understandable reasons of tractability, and allows for analytical solutions and convergence results to be obtained. But there is way too much coordination in beliefs across individuals assumed for this to be considered part of the ABM family.

The title of House's post asks (in response to an earlier piece by Mark Buchanan) whether agent-based models really are the future of the discipline. I have argued previously that they are enormously promising, but face one major methodological obstacle that needs to be overcome. This is the problem of quality control: unlike papers in empirical fields (where causal identification is paramount) or in theory (where robustness is key) there is no set of criteria, widely agreed upon, that can allow a referee to determine whether a given set of simulation results provides a deep and generalizable insight into the workings of the economy. One of the most celebrated agent-based models in economics---the Schelling segregation model---is also among the very earliest. Effective and acclaimed recent exemplars are in short supply, though there is certainly research effort at the highest levels pointed in this direction. The claim that such models can displace the equilibrium approach entirely is much too grandiose, but they should be able to find ample space alongside more orthodox approaches in time. 

---

The example of interacting trading strategies in this post wasn't pulled out of thin air; market ecology has been a recurrent theme on this blog. In ongoing work with Yeon-Koo Che and Jinwoo Kim, I am exploring the interaction of trading strategies in asset markets, with the goal of addressing some questions about the impact on volatility and welfare of high-frequency trading. We have found the agent-based approach very useful in thinking about these questions, and I'll present some preliminary results at a session on the methodology at the Rethinking Economics conference in New York next month. The event is free and open to the public but seating is limited and registration required. 

Wednesday, August 13, 2014

'Unemployment Fluctuations are Mainly Driven by Aggregate Demand Shocks'

Do the facts have a Keynesian bias?:

Using product- and labour-market tightness to understand unemployment, by Pascal Michaillat and Emmanuel Saez, Vox EU: For the five years from December 2008 to November 2013, the US unemployment rate remained above 7%, peaking at 10% in October 2009. This period of high unemployment is not well understood. Macroeconomists have proposed a number of explanations for the extent and persistence of unemployment during the period, including:

  • High mismatch caused by major shocks to the financial and housing sectors,
  • Low search effort from unemployed workers triggered by long extensions of unemployment insurance benefits, and
  • Low aggregate demand caused by a sudden need to repay debts or pessimism.

But no consensus has been reached.

In our opinion this lack of consensus is due to a gap in macroeconomic theory: we do not have a model that is rich enough to account for the many factors driving unemployment – including aggregate demand – and simple enough to lend itself to pencil-and-paper analysis. ...

In Michaillat and Saez (2014), we develop a new model of unemployment fluctuations to inspect the mechanisms behind unemployment fluctuations. The model can be seen as an equilibrium version of the Barro-Grossman model. It retains the architecture of the Barro-Grossman model but replaces the disequilibrium framework on the product and labour markets with an equilibrium matching framework. ...

Through the lens of our simple model, the empirical evidence suggests that price and real wage are somewhat rigid, and that unemployment fluctuations are mainly driven by aggregate demand shocks.

Tuesday, August 12, 2014

Why Do Macroeconomists Disagree?

I have a new column:

Why Do Macroeconomists Disagree?, by Mark Thoma, The Fiscal Times: On August 9, 2007, the French bank BNP Paribas halted redemptions from three investment funds active in US mortgage markets due to severe liquidity problems, an event that many mark as the beginning of the financial crisis. Now, just over seven years later, economists still can’t agree on what caused the crisis, why it was so severe, and why the recovery has been so slow. We can’t even agree on the extent to which modern macroeconomic models failed, or if they failed at all.
The lack of a consensus within the profession on the economics of the Great Recession, one of the most significant economic events in recent memory, provides a window into the state of macroeconomics as a science. ...

Monday, August 11, 2014

'Inflation in the Great Recession and New Keynesian Models'

From the NY Fed's Liberty Street Economics:

Inflation in the Great Recession and New Keynesian Models, by Marco Del Negro, Marc Giannoni, Raiden Hasegawa, and Frank Schorfheide: Since the financial crisis of 2007-08 and the Great Recession, many commentators have been baffled by the “missing deflation” in the face of a large and persistent amount of slack in the economy. Some prominent academics have argued that existing models cannot properly account for the evolution of inflation during and following the crisis. For example, in his American Economic Association presidential address, Robert E. Hall called for a fundamental reconsideration of Phillips curve models and their modern incarnation—so-called dynamic stochastic general equilibrium (DSGE) models—in which inflation depends on a measure of slack in economic activity. The argument is that such theories should have predicted more and more disinflation as long as the unemployment rate remained above a natural rate of, say, 6 percent. Since inflation declined somewhat in 2009, and then remained positive, Hall concludes that such theories based on a concept of slack must be wrong.        
In an NBER working paper and a New York Fed staff report (forthcoming in the American Economic Journal: Macroeconomics), we use a standard New Keynesian DSGE model with financial frictions to explain the behavior of output and inflation since the crisis. This model was estimated using data up to 2008. We find that following the increase in financial stress in 2008, the model successfully predicts not only the sharp contraction in economic activity, but also only a modest decline in inflation. ...

Wednesday, July 23, 2014

'Wall Street Skips Economics Class'

The discussion continues:

Wall Street Skips Economics Class, by Noah Smith: If you care at all about what academic macroeconomists are cooking up (or if you do any macro investing), you might want to check out the latest economics blog discussion about the big change that happened in the late '70s and early '80s. Here’s a post by the University of Chicago economist John Cochrane, and here’s one by Oxford’s Simon Wren-Lewis that includes links to most of the other contributions.
In case you don’t know the background, here’s the short version...

Saturday, July 19, 2014

'Is Choosing to Believe in Economic Models a Rational Expected-Utility Decision Theory Thing?'

Brad DeLong:

Is Choosing to Believe in Economic Models a Rational Expected-Utility Decision Theory Thing?: I have always understood expected-utility decision theory to be normative, not positive: it is how people ought to behave if they want to achieve their goals in risky environments, not how people do behave. One of the chief purposes of teaching expected-utility decision theory is in fact to make people aware that they really should be risk neutral over small gambles where they do know the probabilities--that they will be happier and achieve more of their goals in the long run if they in fact do so. ...[continue]...

Here's the bottom line:

(6) Given that people aren't rational Bayesian expected utility-theory decision makers, what do economists think that they are doing modeling markets as if they are populated by agents who are? Here there are, I think, three answers:

  • Most economists are clueless, and have not thought about these issues at all.

  • Some economists think that we have developed cognitive institutions and routines in organizations that make organizations expected-utility-theory decision makers even though the individuals in them are not. (Yeah, right: I find this very amusing too.)

  • Some economists admit that the failure of individuals to follow expected-utility decision theory and our inability to build institutions that properly compensate for our cognitive biases (cough, actively-managed mutual funds, anyone?) are one of the major sources of market failure in the world today--for one thing, they blow the efficient market hypothesis in finance sky-high.

The fact that so few economists are in the third camp--and that any economists are in the second camp--makes me agree 100% with Andrew Gelman's strictures on economics as akin to Ptolemaic astronomy, in which the fundamentals of the model are "not [first-order] approximations to something real, they’re just fictions..."

Monday, July 14, 2014

Is There a Phillips Curve? If So, Which One?

One place that Paul Krugman and Chris House disagree is on the Phillips curve. Krugman (responding to a post by House) says:

New Keynesians do stuff like one-period-ahead price setting or Calvo pricing, in which prices are revised randomly. Practicing Keynesians have tended to rely on “accelerationist” Phillips curves in which unemployment determined the rate of change rather than the level of inflation.
So what has happened since 2008 is that both of these approaches have been found wanting: inflation has dropped, but stayed positive despite high unemployment. What the data actually look like is an old-fashioned non-expectations Phillips curve. And there are a couple of popular stories about why: downward wage rigidity even in the long run, anchored expectations.

House responds:

What the data actually look like is an old-fashioned non-expectations Phillips curve. 
OK, here is where we disagree. Certainly this is not true for the data overall. It seems like Paul is thinking that the system governing the relationship between inflation and output changes between something with essentially a vertical slope (a “Classical Phillips curve”) and a nearly flat slope (a “Keynesian Phillips Curve”). I doubt that this will fit the data particularly well and it would still seem to open the door to a large role for “supply shocks” – shocks that neither Paul nor I think play a big role in business cycles.

Simon Wren-Lewis also has something to say about this in his post from earlier today, Has the Great Recession killed the traditional Phillips Curve?:

Before the New Classical revolution there was the Friedman/Phelps Phillips Curve (FPPC), which said that current inflation depended on some measure of the output/unemployment gap and the expected value of current inflation (with a unit coefficient). Expectations of inflation were modelled as some function of past inflation (e.g. adaptive expectations) - at its simplest just one lag in inflation. Therefore in practice inflation depended on lagged inflation and the output gap.
After the New Classical revolution came the New Keynesian Phillips Curve (NKPC), which had current inflation depending on some measure of the output/unemployment gap and the expected value of inflation in the next period. If this was combined with adaptive expectations, it would amount to much the same thing as the FPPC, but instead it was normally combined with rational expectations, where agents made their best guess at what inflation would be next period using all relevant information. This would include past inflation, but it would include other things as well, like prospects for output and any official inflation target.
Which better describes the data? ...
[W]e can see why some ... studies (like this for the US) can claim that recent inflation experience is consistent with the NKPC. It seems much more difficult to square this experience with the traditional adaptive expectations Phillips curve. As I suggested at the beginning, this is really a test of whether rational expectations is a better description of reality than adaptive expectations. But I know the conclusion I draw from the data will upset some people, so I look forward to a more sophisticated empirical analysis showing why I’m wrong.

I don't have much to add, except to say that this is an empirical question that will be difficult to resolve empirically (because there are so many different ways to estimate a Phillips curve, and different specifications give different answers: which measure of prices to use, which measure of aggregate activity to use, what time period to use and how to handle structural and policy breaks within it, how to extract natural rates from the data, how to handle non-stationarities, whether to exclude the long-term unemployed, as recent research suggests, when aggregate activity is measured by the unemployment rate, how many lags to include, and so on).
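
For reference, here is a minimal sketch of the two specifications Wren-Lewis contrasts, in standard textbook notation (the shorthand is mine, not his exact equations):

```latex
% Friedman/Phelps (accelerationist) Phillips curve with adaptive expectations:
\pi_t = \pi_t^{e} + \alpha\, x_t, \qquad \pi_t^{e} = \pi_{t-1}
\;\;\Rightarrow\;\; \pi_t = \pi_{t-1} + \alpha\, x_t

% New Keynesian Phillips curve with rational expectations:
\pi_t = \beta\, E_t[\pi_{t+1}] + \kappa\, x_t
```

Here $\pi_t$ is inflation and $x_t$ is some measure of the output or (negative) unemployment gap. In the first specification the gap determines the change in inflation rather than its level, which is the "accelerationist" property; replacing the rational expectation in the second with adaptive expectations collapses it to roughly the first, which is Wren-Lewis's point.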

Sunday, July 13, 2014

New Classical Economics as Modeling Strategy

Judy Klein emails a response to a recent post of mine based upon Simon Wren-Lewis's post “Rereading Lucas and Sargent 1979”:

Lucas and Sargent’s “After Keynesian Macroeconomics” was presented at the 1978 Boston Federal Reserve Conference on “After the Phillips Curve: Persistence of High Inflation and High Unemployment.” Although the title of the conference dealt with stagflation, the rational expectations theorists saw themselves countering one technical revolution with another.

The Keynesian Revolution was, in the form in which it succeeded in the United States, a revolution in method. This was not Keynes’s intent, nor is it the view of all of his most eminent followers. Yet if one does not view the revolution in this way, it is impossible to account for some of its most important features: the evolution of macroeconomics into a quantitative, scientific discipline, the development of explicit statistical descriptions of economic behavior, the increasing reliance of government officials on technical economic expertise, and the introduction of the use of mathematical control theory to manage an economy. [Lucas and Sargent, 1979, pg. 50]

The Lucas papers at the Economists' Papers Project at Duke University reveal the preliminary planning for the 1978 presentation. Lucas and Sargent decided that it would be a “rhetorical piece… to convince others that the old-fashioned macro game is up…in a way which makes it clear that the difficulties are fatal”; its theme would be the “death of macroeconomics” and the desirability of replacing it with an “Aggregative Economics” whose foundation was “equilibrium theory.” (Lucas letter to Sargent, February 9, 1978). Their 1978 presentation was replete, as their discussant Bob Solow pointed out, with the planned rhetorical barbs against Keynesian economics of “wildly incorrect,” “fundamentally flawed,” “wreckage,” “failure,” “fatal,” “of no value,” “dire implications,” “failure on a grand scale,” “spectacular recent failure,” and “no hope.” The empirical backdrop to Lucas and Sargent’s death decree on Keynesian economics was evident in the subtitle of the conference: “Persistence of High Inflation and High Unemployment.”

Although they seized the opportunity to comment on policy failure and the high misery-index economy, Lucas and Sargent shifted the macroeconomic court of judgment from the economy to microeconomics. They fought a technical battle over the types of restrictions used by modelers to identify their structural models. Identification-rendering restrictions were essential to making both the Keynesian and rational expectations models “work” in policy applications, but Lucas and Sargent defined the ultimate terms of success not with regard to a model’s capacity for empirical explanation or achievement of a desirable policy outcome, but rather with regard to the model’s capacity to incorporate optimization and equilibrium – to aggregate consistently rational individuals and cleared markets.

In the macroeconomic history written by the victors, the Keynesian revolution and the rational expectations revolution were both technical revolutions, and one could delineate the sides of the battle line in the second revolution by the nature of the restricting assumptions that enabled the model identification that licensed policy prescription. The rational expectations revolution, however, was also a revolution in the prime referential framework for judging macroeconomic model fitness for going forth and multiplying; the consistency of the assumptions – the equation restrictions – with optimizing microeconomics and mathematical statistical theory, rather than end uses of explaining the economy and empirical statistics, constituted the new paramount selection criterion.

Some of the new classical macroeconomists have been explicit about the narrowness of their revolution. For example, Sargent noted in 2008, “While rational expectations is often thought of as a school of economic thought, it is better regarded as a ubiquitous modeling technique used widely throughout economics.” In an interview with Arjo Klamer in 1983, Robert Townsend asserted that “New classical economics means a modeling strategy.”

It is no coincidence, however, that in this modeling narrative of economic equilibrium crafted in the Cold War era, Adam Smith’s invisible hand morphs into a welfare-maximizing “hypothetical ‘benevolent social planner’” (Lucas, Prescott, Stokey 1989) enforcing a “communism of models” (Sargent 2007) and decreeing to individual agents the mutually consistent rules of action that become the equilibrating driving force. Indeed, a long-term Office of Naval Research grant for “Planning & Control of Industrial Operations” awarded to the Carnegie Institute of Technology’s Graduate School of Industrial Administration had funded Herbert Simon’s articulation of his certainty equivalence theorem and John Muth’s study of rational expectations. It is ironic that a decade-long government planning contract employing Carnegie professors and graduate students underwrote the two key modeling strategies for the Nobel-prize winning demonstration that the rationality of consumers renders government intervention to increase employment unnecessary and harmful.

Friday, July 11, 2014

'Rereading Lucas and Sargent 1979'

Simon Wren-Lewis with a nice follow-up to an earlier discussion:

Rereading Lucas and Sargent 1979: Mainly for macroeconomists and those interested in macroeconomic thought
Following this little interchange (me, Mark Thoma, Paul Krugman, Noah Smith, Robert Waldmann, Arnold Kling), I reread what could be regarded as the New Classical manifesto: Lucas and Sargent’s ‘After Keynesian Economics’ (hereafter LS). It deserves to be cited as a classic, both for the quality of ideas and the persuasiveness of the writing. It does not seem like something written 35 years ago, which is perhaps an indication of how influential its ideas still are.
What I want to explore is whether this manifesto for the New Classical counter revolution was mainly about stagflation, or whether it was mainly about methodology. LS kick off their article with references to stagflation and the failure of Keynesian theory. A fundamental rethink is required. What follows next is I think crucial. If the counter revolution is all about stagflation, we might expect an account of why conventional theory failed to predict stagflation - the equivalent, perhaps, to the discussion of classical theory in the General Theory. Instead we get something much more general - a discussion of why identification restrictions typically imposed in the structural econometric models (SEMs) of the time are incredible from a theoretical point of view, and an outline of the Lucas critique.
In other words, the essential criticism in LS is methodological: the way empirical macroeconomics has been done since Keynes is flawed. SEMs cannot be trusted as a guide for policy. In only one paragraph do LS try to link this general critique to stagflation...[continue]...

Friday, July 04, 2014

Responses to John Cochrane's Attack on New Keynesian Models

The opening quote from chapter 2 of Mankiw's intermediate macro textbook:

It is a capital mistake to theorize before one has data. Insensibly one begins to twist facts to suit theories, instead of theories to suit facts. — Sherlock Holmes

Or change "before one has data" to "without knowledge of the data" and it's a pretty good summary of Paul Krugman's response to John Cochrane:

Macro Debates and the Relevance of Intellectual History: One of the interesting things about the ongoing economic crisis is the way it has demonstrated the importance of historical knowledge. ... But it’s not just economic history that turns out to be extremely relevant; intellectual history — the history of economic thought — turns out to be relevant too.
Consider, in particular, the recent to-and-fro about stagflation and the rise of new classical macroeconomics. You might think that this is just economist navel-gazing; but you’d be wrong.
To see why, consider John Cochrane’s latest. ... Cochrane’s current argument ... effectively depends on the notion that there must have been very good reasons for the rejection of Keynesianism, and that harkening back to old ideas must involve some kind of intellectual regression. And that’s where it’s important — as David Glasner notes — to understand what really happened in the 70s.
The point is that the new classical revolution in macroeconomics was not a classic scientific revolution, in which an old theory failed crucial empirical tests and was supplanted by a new theory that did better. The rejection of Keynes was driven by a quest for purity, not an inability to explain the data — and when the new models clearly failed the test of experience, the new classicals just dug in deeper. They didn’t back down even when people like Chris Sims (pdf), using the very kinds of time-series methods they introduced, found that they strongly pointed to a demand-side model of economic fluctuations.
And critiques like Cochrane’s continue to show a curious lack of interest in evidence. ... In short, you have a much better sense of what’s really going on here, and which ideas remain relevant, if you know about the unhappy history of macroeconomic thought.

Nick Rowe:

Insufficient Demand vs?? Uncertainty: ...John Cochrane says: "John Taylor, Stanford's Nick Bloom and Chicago Booth's Steve Davis see the uncertainty induced by seat-of-the-pants policy at fault. Who wants to hire, lend or invest when the next stroke of the presidential pen or Justice Department witch hunt can undo all the hard work? Ed Prescott emphasizes large distorting taxes and intrusive regulations. The University of Chicago's Casey Mulligan deconstructs the unintended disincentives of social programs. And so forth. These problems did not cause the recession. But they are worse now, and they can impede recovery and retard growth." ...
Increased political uncertainty would reduce aggregate demand. Plus, positive feedback processes could amplify that initial reduction in aggregate demand. Even those who were not directly affected by that increased political uncertainty would reduce their own willingness to hire lend or invest because of that initial reduction in aggregate demand, plus their own uncertainty about aggregate demand. So the average person or firm might respond to a survey by saying that insufficient demand was the problem in their particular case, and not the political uncertainty which caused it.
But the demand-side problem could still be prevented by an appropriate monetary policy response. Sure, there would be supply-side effects too. And it would be very hard empirically to estimate the relative magnitudes of those demand-side vs supply-side effects. ...
So it's not just an either/or thing. Nor is it even a bit-of-one-plus-bit-of-the-other thing. Increased political uncertainty can cause a recession via its effect on demand. Unless monetary policy responds appropriately. (And that, of course, would mean targeting NGDP, because inflation targeting doesn't work when supply-side shocks cause adverse shifts in the Short Run Phillips Curve.)

On whether supply or demand shocks are the source of aggregate fluctuations, Blanchard and Quah (1989), Shapiro and Watson (1988), and others had it right (though the identifying restriction that aggregate demand shocks do not have permanent effects seems to be undermined by the Great Recession). It's not an either/or question; it's a matter of figuring out how much of the variation in GDP/employment is due to supply shocks, and how much is due to demand shocks. And as Nick Rowe points out with his example, sorting between these two causes can be very difficult -- identifying which type of shock is driving changes in aggregate variables is not at all easy and depends upon particular assumptions. Nevertheless, my reading of the empirical evidence is much like Krugman's. Overall, across all these papers, it is demand shocks that play the most prominent role. Supply shocks do matter, but not nearly so much as demand shocks when it comes to explaining aggregate fluctuations.
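
For readers curious what the Blanchard-Quah identifying restriction looks like in practice, here is a toy sketch (my own illustration on simulated data, not their code) of imposing the "demand shocks have no permanent effect on output" restriction on a bivariate VAR:

```python
import numpy as np

# Toy Blanchard-Quah style identification on simulated data. The "output
# growth" and "unemployment" series below are just white noise, so the labels
# are purely illustrative; the point is the mechanics of the restriction.

rng = np.random.default_rng(0)
T = 500
X = rng.standard_normal((T, 2))            # fake (output growth, unemployment)

# Fit a VAR(1) by least squares: X_t = c + A X_{t-1} + e_t
Y = X[1:]
Z = np.column_stack([np.ones(T - 1), X[:-1]])
B = np.linalg.lstsq(Z, Y, rcond=None)[0]   # intercept row plus A transposed
A = B[1:].T                                # 2x2 VAR coefficient matrix
E = Y - Z @ B                              # reduced-form residuals
Sigma = E.T @ E / (T - 1)                  # residual covariance

# Long-run multiplier of the reduced form: Psi(1) = (I - A)^{-1}
Psi1 = np.linalg.inv(np.eye(2) - A)

# Identification: choose an impact matrix C with C C' = Sigma such that the
# long-run impact Theta = Psi(1) C is lower triangular, i.e. the second
# ("demand") shock has no permanent effect on the first variable (output).
Theta = np.linalg.cholesky(Psi1 @ Sigma @ Psi1.T)
C = np.linalg.solve(Psi1, Theta)

print("long-run impact matrix (upper-right entry is zero):\n", Theta)
print("contemporaneous impact matrix C:\n", C)
```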

Saturday, June 28, 2014

The Rise and Fall of the New Classical Model

Simon Wren-Lewis (my comments are at the end):

Understanding the New Classical revolution: In the account of the history of macroeconomic thought I gave here, the New Classical counter revolution was both methodological and ideological in nature. It was successful, I suggested, because too many economists were unhappy with the gulf between the methodology used in much of microeconomics, and the methodology of macroeconomics at the time.
There is a much simpler reading. Just as the original Keynesian revolution was caused by massive empirical failure (the Great Depression), the New Classical revolution was caused by the Keynesian failure of the 1970s: stagflation. An example of this reading is in this piece by the philosopher Alex Rosenberg (HT Diane Coyle). He writes: “Back then it was the New Classical macrotheory that gave the right answers and explained what the matter with the Keynesian models was.”
I just do not think that is right. Stagflation is very easily explained: you just need an ‘accelerationist’ Phillips curve (i.e. where the coefficient on expected inflation is one), plus a period in which monetary policymakers systematically underestimate the natural rate of unemployment. You do not need rational expectations, or any of the other innovations introduced by New Classical economists.
No doubt the inflation of the 1970s made the macroeconomic status quo unattractive. But I do not think the basic appeal of New Classical ideas lay in their better predictive ability. The attraction of rational expectations was not that it explained actual expectations data better than some form of adaptive scheme. Instead it just seemed more consistent with the general idea of rationality that economists used all the time. Ricardian Equivalence was not successful because the data revealed that tax cuts had no impact on consumption - in fact study after study has shown that tax cuts do have a significant impact on consumption.
Stagflation did not kill IS-LM. In fact, because empirical validity was so central to the methodology of macroeconomics at the time, it adapted to stagflation very quickly. This gave a boost to the policy of monetarism, but this used the same IS-LM framework. If you want to find the decisive event that led to New Classical economists winning their counterrevolution, it was the theoretical realisation that if expectations were rational, but inflation was described by an accelerationist Phillips curve with expectations about current inflation on the right hand side, then deviations from the natural rate had to be random. The fatal flaw in the Keynesian/Monetarist theory of the 1970s was theoretical rather than empirical.
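To make the logic of that last step explicit (the notation below is mine, not Wren-Lewis's), write the accelerationist Phillips curve with expectations of current inflation on the right hand side as

\[ \pi_t = E_{t-1}\pi_t + \alpha\,(u^{*} - u_t) + \varepsilon_t, \qquad \alpha > 0, \]

where u* is the natural rate and \varepsilon_t is a supply shock. Rearranging gives

\[ u_t - u^{*} = -\frac{1}{\alpha}\bigl(\pi_t - E_{t-1}\pi_t\bigr) + \frac{\varepsilon_t}{\alpha}. \]

Under rational expectations the inflation surprise \pi_t - E_{t-1}\pi_t is unforecastable given period t-1 information, so deviations of unemployment from the natural rate are a combination of unforecastable surprises and supply shocks. No anticipated, systematic policy rule can generate them, which is the "deviations must be random" result, and it follows from theory alone rather than from any new empirical finding.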

I agree with this, so let me add to it by talking about what led to the end of the New Classical revolution (see here for a discussion of the properties of New Classical, New Keynesian, and Real Business Cycle Models). The biggest factor was empirical validity. Although some versions of the New Classical model allowed monetary non-neutrality (e.g. King 1982, JPE), when three elements are combined (continuous market clearing, rational expectations, and the natural rate hypothesis), monetary neutrality is generally present in these models. Initial work by people like Robert Barro found strong support for the prediction of these models that only unanticipated changes in monetary policy can affect real variables like output, but subsequent work and eventually the weight of the evidence pointed in the other direction. Both expected and unexpected changes in the money supply appeared to matter, in contrast to a key prediction of the New Classical framework.
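A stylized version of the empirical test just described can be sketched as follows. The two-step design is roughly Barro's: estimate a money growth rule, treat the residuals as unanticipated money, and then ask whether anticipated money growth still helps explain output once unanticipated money is included (Mishkin-style findings that it does were part of what turned the tide). Everything below is synthetic and illustrative; the variable names, lag length, and coefficients are my assumptions, not estimates from the literature.

```python
# Stylized two-step test of whether anticipated money growth affects output.
# Step 1: estimate a money-growth rule; the residuals are "unanticipated" money.
# Step 2: regress output growth on anticipated and unanticipated components.
# Synthetic data: in this simulated world anticipated money matters, which is
# the pattern that told against the New Classical neutrality prediction.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
T = 300
m = np.zeros(T)                             # money growth
for t in range(1, T):
    m[t] = 0.5 * m[t - 1] + rng.normal()

anticipated_true = np.concatenate([[0.0], 0.5 * m[:-1]])
unanticipated_true = m - anticipated_true
y = 0.4 * anticipated_true + 0.6 * unanticipated_true + rng.normal(scale=0.5, size=T)

# Step 1: money growth rule (one lag of money growth, for simplicity)
rule = sm.OLS(m[1:], sm.add_constant(m[:-1])).fit()
anticipated = rule.fittedvalues
unanticipated = rule.resid

# Step 2: does anticipated money matter for output, given unanticipated money?
X = sm.add_constant(np.column_stack([anticipated, unanticipated]))
out = sm.OLS(y[1:], X).fit()
print(out.summary().tables[1])   # both coefficients should show up as significant
```

In the actual debate the regressions used real money growth rules with policy and fiscal variables and output or unemployment data, and the results were contested for years; the sketch is only meant to show the shape of the test.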

A second factor that worked against New Classical models is that they had difficulty explaining both the duration and magnitude of actual business cycles. If the reaction to an unexpected policy shock was focused in a single period, the magnitude could be matched, but not the duration. If the shock was spread over 3-5 years to match the duration, the magnitude of cycles could not be matched. Movements in macroeconomic variables arising from informational errors (unexpected policy shocks) did not have enough "power" to capture both aspects of actual business cycles.

A third factor that worked against these models was that, within them, information problems were the key source of swings in GDP and employment, and these fluctuations were costly in the aggregate. Yet no markets for information emerged to resolve the problem. For those who believe in the power of markets, and many proponents of New Classical models were also market fundamentalists, the lack of markets for information was a problem.

The New Classical model had displaced the Keynesian model for the reasons highlighted above, but the failure of the New Classical model left the door open for the New Keynesian model to emerge (it appeared to be more consistent with the empirical evidence on the effects of changes in the money supply, and in other areas as well, e.g. the correlation between productivity and economic activity).

But while the New Classical revolution was relatively short-lived as macro models go, it left two important legacies: rational expectations and microfoundations (along with a better understanding of how non-neutralities might arise, in essence the New Keynesian model drops continuous market clearing through the assumption of short-run price rigidities, and of how to model information sets). Rightly or wrongly, all subsequent models had to have these two elements, rational expectations and microfoundations, present within them or they would be dismissed.

Thursday, June 26, 2014

Why DSGEs Crash During Crises

David Hendry and Grayham Mizon with an important point about DSGE models:

Why DSGEs crash during crises, by David F. Hendry and Grayham E. Mizon: Many central banks rely on dynamic stochastic general equilibrium models – known as DSGEs to cognoscenti. This column – which is more technical than most Vox columns – argues that the models’ mathematical basis fails when crises shift the underlying distributions of shocks. Specifically, the linchpin ‘law of iterated expectations’ fails, so economic analyses involving conditional expectations and inter-temporal derivations also fail. Like a fire station that automatically burns down whenever a big fire starts, DSGEs become unreliable when they are most needed.

Here's the introduction:

In most aspects of their lives humans must plan forwards. They take decisions today that affect their future in complex interactions with the decisions of others. When taking such decisions, the available information is only ever a subset of the universe of past and present information, as no individual or group of individuals can be aware of all the relevant information. Hence, views or expectations about the future, relevant for their decisions, use a partial information set, formally expressed as a conditional expectation given the available information.
Moreover, all such views are predicated on there being no unanticipated future changes in the environment pertinent to the decision. This is formally captured in the concept of ‘stationarity’. Without stationarity, good outcomes based on conditional expectations could not be achieved consistently. Fortunately, there are periods of stability when insights into the way that past events unfolded can assist in planning for the future.
The world, however, is far from completely stationary. Unanticipated events occur, and they cannot be dealt with using standard data-transformation techniques such as differencing, or by taking linear combinations, or ratios. In particular, ‘extrinsic unpredictability’ – unpredicted shifts of the distributions of economic variables at unanticipated times – is common. As we shall illustrate, extrinsic unpredictability has dramatic consequences for the standard macroeconomic forecasting models used by governments around the world – models known as ‘dynamic stochastic general equilibrium’ models – or DSGE models. ...[continue]...

Update: [nerdy] Reply to Hendry and Mizon: we have DSGE models with time-varying parameters and variances.

Tuesday, June 24, 2014

'Was the Neoclassical Synthesis Unstable?'

The last paragraph from a much longer argument by Simon Wren-Lewis:

Was the neoclassical synthesis unstable?: ... Of course we have moved on from the 1980s. Yet in some respects we have not moved very far. With the counter revolution we swung from one methodological extreme to the other, and we have not moved much since. The admissibility of models still depends on their theoretical consistency rather than consistency with evidence. It is still seen as more important when building models of the business cycle to allow for the endogeneity of labour supply than to allow for involuntary unemployment. What this means is that many macroeconomists who think they are just ‘taking theory seriously’ are in fact applying a particular theoretical view which happens to suit the ideology of the counter revolutionaries. The key to changing that is to first accept it.

Saturday, May 10, 2014

Replication in Economics

Thomas Kneib sent me the details of this project in early April after a discussion about it with one of his Ph.D. students (Jan Höffler) at the INET conference, and I've been meaning to post something on it but have been negligent. So I'm glad that Dave Giles picked up the slack:

Replication in Economics: I was pleased to receive an email today, alerting me to the "Replication in Economics" wiki at the University of Göttingen:

"My name is Jan H. Höffler, I have been working on a replication project funded by the Institute for New Economic Thinking during the last two years and found your blog that I find very interesting. I like very much that you link to data and code related to what you write about. I thought you might be interested in the following:

We developed a wiki website that serves as a database of empirical studies, the availability of replication material for them and of replication studies: http://replication.uni-goettingen.de

It can help for research as well as for teaching replication to students. We taught seminars at several faculties internationally - also in Canada, at UofT - for which the information of this database was used. In the starting phase the focus was on some leading journals in economics, and we now cover more than 1800 empirical studies and 142 replications. Replication results can be published as replication working papers of the University of Göttingen's Center for Statistics.

Teaching and providing access to information will raise awareness for the need for replications, provide a basis for research about the reasons why replications so often fail and how this can be changed, and educate future generations of economists about how to make research replicable.

I would be very grateful if you could take a look at our website, give us feedback, register and vote which studies should be replicated – votes are anonymous. If you could also help us to spread the message about this project, this would be most appreciated."

I'm more than happy to spread the word, Jan. I've requested an account, and I'll definitely be getting involved with your project. This looks like a great venture!

Friday, May 09, 2014

Economists and Methodology

Simon Wren-Lewis:

Economists and methodology: ...very few economists write much about methodology. This would be understandable if economics was just like some other discipline where methodological discussion was routine. This is not the case. Economics is not like the physical sciences for well known reasons. Yet economics is not like most other social sciences either: it is highly deductive, highly abstractive (in the non-philosophical sense) and rarely holistic. ...
This is a long winded way of saying that the methodology used by economics is interesting because it is unusual. Yet, as I say, you will generally not find economists writing about methodology. One reason for this is ... a feeling that the methodology being used is unproblematic, and therefore requires little discussion.
I cannot help giving the example of macroeconomics to show that this view is quite wrong. The methodology of macroeconomics in the 1960s was heavily evidence based. Microeconomics was used to suggest aggregate relationships, but not to determine them. Consistency with the data (using some chosen set of econometric criteria) often governed what was or was not allowed in a parameterised (numerical) model, or even a theoretical model. It was a methodology that some interpreted as Popperian. The methodology of macroeconomics now is very different. Consistency with microeconomic theory governs what is in a DSGE model, and evidence plays a much more indirect role. Now I have only a limited knowledge of the philosophy of science..., but I know enough to recognise this as an important methodological change. Yet I find many macroeconomists just assume that their methodology is unproblematic, because it is what everyone mainstream currently does. ...
... The classic example of an economist writing about methodology is Friedman’s Essays in Positive Economics. This puts forward an instrumentalist view: the idea that the realism of assumptions does not matter, it is results that count.
Yet does instrumentalism describe Friedman’s major contributions to macroeconomics? Well one of those was the expectations augmented Phillips curve. ... Friedman argued that the coefficient on expected inflation should be one. His main reason for doing so was not that such an adaptation predicted better, but because it was based on better assumptions about what workers were interested in: real rather than nominal wages. In other words, it was based on more realistic assumptions. ...
Economists do not think enough about their own methodology. This means economists are often not familiar with methodological discussion, which implies that using what they write on the subject as evidence about what they do can be misleading. Yet most methodological discussion of economics is (and should be) about what economists do, rather than what they think they do. That is why I find that the more interesting and accurate methodological writing on economics looks at the models and methods economists actually use...

Monday, May 05, 2014

'Refocusing Economics Education'

Antonio Fatás (each of the four points below are explained in detail in the original post):

Refocusing economics education: Via Mark Thoma I read an interesting article about how the mainstream economics curriculum needs to be revamped (Wren-Lewis also has some nice thoughts on this issue).

I am sympathetic to some of the arguments made in those posts and the need for some serious rethinking of the way economics is taught, but I would put the emphasis on slightly different arguments. First, I am not sure the recent global crisis should be the main reason to change the economics curriculum. Yes, economists failed to predict many aspects of the crisis but my view is that it was not because of a lack of tools or understanding. We have enough models in economics that explain most of the phenomena that caused and propagated the global financial crisis. There are plenty of models where individuals are not rational, where financial markets are driven by bubbles, with multiple equilibria,... that one can use to understand the last decade. We do have all these tools but as economics teachers (and researchers) we need to choose which ones to focus on. And here is where we failed, not just before and during the crisis but earlier as well. Why aren't we focusing on the right models or methodology? Here is my list of mistakes we make in our teaching, which might also reflect on our research:

#1 Too much theory, not enough emphasis on explaining empirical phenomena. ...

#2 Too many counterintuitive results. Economists like to teach things that are surprising. ...

#3 The need for a unified theory. ...

#4 We teach what our audience wants to hear. ...

I also believe the sociology within the profession needs to change.

Thursday, March 27, 2014

'The Misuse of Theoretical Models in Finance and Economics'

 Stanford University's Paul Pfleiderer:

Chameleons: The Misuse of Theoretical Models in Finance and Economics, by Paul Pfleiderer, March 2014: Abstract In this essay I discuss how theoretical models in finance and economics are used in ways that make them “chameleons” and how chameleons devalue the intellectual currency and muddy policy debates. A model becomes a chameleon when it is built on assumptions with dubious connections to the real world but nevertheless has conclusions that are uncritically (or not critically enough) applied to understanding our economy. I discuss how chameleons are created and nurtured by the mistaken notion that one should not judge a model by its assumptions, by the unfounded argument that models should have equal standing until definitive empirical tests are conducted, and by misplaced appeals to “as-if” arguments, mathematical elegance, subtlety, references to assumptions that are “standard in the literature,” and the need for tractability.

Sunday, March 23, 2014

On Greg Mankiw's 'Do No Harm'

A rebuttal to Greg Mankiw's claim that the government should not interfere in voluntary exchanges. This is from Rakesh Vohra at Theory of the Leisure Class:

Do No Harm & Minimum Wage: In the March 23rd edition of the NY Times Mankiw proposes a 'do no harm' test for policy makers:

…when people have voluntarily agreed upon an economic arrangement to their mutual benefit, that arrangement should be respected.

There is a qualifier for negative externalities, and he goes on to say:

As a result, when a policy is complex, hard to evaluate and disruptive of private transactions, there is good reason to be skeptical of it.

Minimum wage legislation is offered as an example of a policy that fails the do no harm test. ...

There is an immediate 'heart strings' argument against the test, because indentured servitude passes the 'do no harm' test. ... I want to focus instead on two other aspects of the 'do no harm' principle contained in the words 'voluntarily' and 'benefit'. What is voluntary and benefit compared to what? ...

When parties negotiate to their mutual benefit, it is to their benefit relative to the status quo. When the status quo presents one agent an outside option that is untenable, say starvation, is bargaining voluntary, even if the other agent is not directly threatening starvation? The difficulty with the 'do no harm' principle in policy matters is the assumption that the status quo does less harm than a change in it would. This is not clear to me at all. Let me illustrate this...

Assuming a perfectly competitive market, imposing a minimum wage constraint above the equilibrium wage would reduce total welfare. What if the labor market were not perfectly competitive? In particular, suppose it was a monopsony employer constrained to offer the same wage to everyone employed. Then, imposing a minimum wage above the monopsonist’s optimal wage would increase total welfare.

[There is also an example based upon differences in patience that I left out.]
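Vohra's monopsony point can be put into numbers. The functional forms and parameter values below are mine, chosen only to make the arithmetic transparent: a linear inverse labor supply curve and a linear marginal revenue product curve. Because the monopsonist must raise the wage for everyone to attract an extra worker, its marginal cost of labor lies above the supply curve and it hires too little; a minimum wage set at the competitive level flattens that marginal cost and raises both employment and total surplus.

```python
# Minimum wage in a monopsony labor market: a numerical sketch.
# Illustrative parameters (not from the post): inverse labor supply w(L) = a + b*L,
# marginal revenue product MRP(L) = c - d*L.
a, b = 5.0, 0.1     # labor supply intercept and slope
c, d = 25.0, 0.1    # MRP intercept and slope

def total_surplus(L):
    # Area between the MRP curve and the supply curve up to employment L
    return (c - a) * L - 0.5 * (b + d) * L ** 2

# Competitive benchmark: MRP(L) = w(L)
L_comp = (c - a) / (b + d)
w_comp = a + b * L_comp

# Monopsony: one wage for all workers, so the marginal cost of labor is a + 2bL,
# and the firm hires where MRP(L) = a + 2bL.
L_mono = (c - a) / (2 * b + d)
w_mono = a + b * L_mono

# Binding minimum wage at the competitive level: labor cost is flat at w_min
# up to the supply curve, so the firm hires until MRP(L) = w_min.
w_min = w_comp
L_min = (c - w_min) / d

print(f"Monopsony:         L = {L_mono:6.1f}, w = {w_mono:5.2f}, surplus = {total_surplus(L_mono):7.1f}")
print(f"With minimum wage: L = {L_min:6.1f}, w = {w_min:5.2f}, surplus = {total_surplus(L_min):7.1f}")
print(f"Competitive:       L = {L_comp:6.1f}, w = {w_comp:5.2f}, surplus = {total_surplus(L_comp):7.1f}")
```

The same minimum wage, imposed above the equilibrium wage in a genuinely competitive market, would reduce employment and surplus, which is exactly the point: the welfare verdict on the policy depends on which model of the status quo is right.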

Friday, March 21, 2014

'Labor Markets Don't Clear: Let's Stop Pretending They Do'

Roger Farmer:

Labor Markets Don't Clear: Let's Stop Pretending They Do: Beginning with the work of Robert Lucas and Leonard Rapping in 1969, macroeconomists have modeled the labor market as if the wage always adjusts to equate the demand and supply of labor.

I don't think that's a very good approach. It's time to drop the assumption that the demand equals the supply of labor.
Why would you want to delete the labor market clearing equation from an otherwise standard model? Because setting the demand equal to the supply of labor is a terrible way of understanding business cycles. ...
Why is this a big deal? Because 90% of the macro seminars I attend, at conferences and universities around the world, still assume that the labor market is an auction where anyone can work as many hours as they want at the going wage. Why do we let our students keep doing this?

'The Counter-Factual & the Fed’s QE'

I tried to make this point in a recent column (it was about fiscal rather than monetary policy, but the same point applies), but I think Barry Ritholtz makes the point better and more succinctly:

Understanding Why You Think QE Didn't Work, by Barry Ritholtz: Maybe you have heard a line that goes something like this: The weak recovery is proof that the Federal Reserve’s program of asset purchases, otherwise known as quantitative easing, doesn't work.
If you were the one saying those words, you don't understand the counterfactual. ...
This flawed analytical paradigm has many manifestations, and not just in the investing world. They all rely on the same equation: If you do X, and there is no measurable change, X is therefore ineffective.
The problem with this “non-result result” is what would have occurred otherwise. Might “no change” be an improvement from what otherwise would have happened? No change, last time I checked, is better than a free-fall.
If you are testing a new medication to reduce tumors, you want to see what happened to the group that didn't get the test therapy. Maybe this control group experienced rapid tumor growth. Hence, a result where there is no increase in tumor mass in the group receiving the therapy would be considered a very positive outcome.
We run into the same issue with QE. ... Without that control group, we simply don't know. ...

Friday, February 21, 2014

'What Game Theory Means for Economists'

At MoneyWatch:

Explainer: What "game theory" means for economists, by Mark Thoma: Coming upon the term "game theory" this week, your first thought would likely be about the Winter Olympics in Sochi. But here we're going to discuss how game theory applies in economics, where it's widely used in topics far removed from the ski slopes and ice rinks where elite athletes compete. ...

Saturday, February 15, 2014

'Microfoundations and Mephistopheles'

Paul Krugman continues the discussion on "whether New Keynesians made a Faustian bargain":

Microfoundations and Mephistopheles (Wonkish): Simon Wren-Lewis asks whether New Keynesians made a Faustian bargain by accepting the New Classical dictat that models must be grounded in intertemporal optimization — whether they purchased academic respectability at the expense of losing their ability to grapple usefully with the real world.
Wren-Lewis’s answer is no, because New Keynesians were only doing what they would have wanted to do even if there hadn’t been a de facto blockade of the journals against anything without rational-actor microfoundations. He has a point: long before anyone imagined doing anything like real business cycle theory, there had been a steady trend in macro toward grounding ideas in more or less rational behavior. The life-cycle model of consumption, for example, was clearly a step away from the Keynesian ad hoc consumption function toward modeling consumption choices as the result of rational, forward-looking behavior.
But I think we need to be careful about defining what, exactly, the bargain was. I would agree that being willing to use models with hyperrational, forward-looking agents was a natural step even for Keynesians. The Faustian bargain, however, was the willingness to accept the proposition that only models that were microfounded in that particular sense would be considered acceptable. ...
So it was the acceptance of the unique virtue of one concept of microfoundations that constituted the Faustian bargain. And one thing you should always know, when making deals with the devil, is that the devil cheats. New Keynesians thought that they had won some acceptance from the freshwater guys by adopting their methods; but when push came to shove, it turned out that there wasn’t any real dialogue, and never had been.

My view is that micro-founded models are useful for answering some questions, but other types of models are best for other questions. There is no one model that is best in every situation; the model that should be used depends upon the question being asked. I've made this point many times, most recently in this column, and also in this post from September 2011 that repeats arguments from September 2009:

New Old Keynesians?: Tyler Cowen uses the term "New Old Keynesian" to describe "Paul Krugman, Brad DeLong, Justin Wolfers and others." I don't know if I am part of the "and others" or not, but in any case I resist being assigned a particular label.

Why? Because I believe the model we use depends upon the questions we ask (this is a point emphasized by Peter Diamond at the recent Nobel Meetings in Lindau, Germany, and echoed by other speakers who followed him). If I want to know how monetary authorities should respond to relatively mild shocks in the presence of price rigidities, the standard New Keynesian model is a good choice. But if I want to understand the implications of a breakdown in financial intermediation and the possible policy responses to it, those models aren't very informative. They weren't built to answer this question (some variations do get at this, but not in a fully satisfactory way).

Here's a discussion of this point from a post written two years ago:

There is no grand, unifying theoretical structure in economics. We do not have one model that rules them all. Instead, what we have are models that are good at answering some questions - the ones they were built to answer - and not so good at answering others.

If I want to think about inflation in the very long run, the classical model and the quantity theory is a very good guide. But the model is not very good at looking at the short-run. For questions about how output and other variables move over the business cycle and for advice on what to do about it, I find the Keynesian model in its modern form (i.e. the New Keynesian model) to be much more informative than other models that are presently available.

But the New Keynesian model has its limits. It was built to capture "ordinary" business cycles driven by price sluggishness of the sort that can be captured by the Calvo model of price rigidity. The standard versions of this model do not explain how financial collapses of the type we just witnessed come about, hence they have little to say about what to do about them (which makes me suspicious of the results touted by people using multipliers derived from DSGE models based upon ordinary price rigidities). For these types of disturbances, we need some other type of model, but it is not clear what model is needed. There is no generally accepted model of financial catastrophe that captures the variety of financial market failures we have seen in the past.
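For readers who have not met the reference, the Calvo device is the assumption that in any period only a randomly chosen fraction 1 - \theta of firms can reset their price. In the textbook derivation (this is the standard result, not something specific to the discussion above) it delivers the New Keynesian Phillips curve

\[ \pi_t = \beta\, E_t \pi_{t+1} + \kappa\, x_t, \qquad \kappa \propto \frac{(1-\theta)(1-\beta\theta)}{\theta}, \]

where x_t is the output gap and \beta the discount factor. Frictions of this kind are well suited to ordinary cycles, but nothing in the equation speaks to bank runs, fire sales, or broken intermediation, which is the limitation being described here.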

But what model do we use? Do we go back to old Keynes, to the 1978 model that Robert Gordon likes, do we take some of the variations of the New Keynesian model that include effects such as financial accelerators and try to enhance those, is that the right direction to proceed? Are the Austrians right? Do we focus on Minsky? Or do we need a model that we haven't discovered yet?

We don't know, and until we do, I will continue to use the model I think gives the best answer to the question being asked. The reason that many of us looked backward for a model to help us understand the present crisis is that none of the current models were capable of explaining what we were going through. The models were largely constructed to analyze policy in the context of a Great Moderation, i.e. within a fairly stable environment. They had little to say about financial meltdown. My first reaction was to ask if the New Keynesian model had any derivative forms that would allow us to gain insight into the crisis and what to do about it and, while there were some attempts in that direction, the work was somewhat isolated and had not gone through the type of thorough analysis needed to develop robust policy prescriptions. There was something to learn from these models, but they really weren't up to the task of delivering specific answers. That may come, but we aren't there yet.

So, if nothing in the present is adequate, you begin to look to the past. The Keynesian model was constructed to look at exactly the kinds of questions we needed to answer, and as long as you are aware of the limitations of this framework - the ones that modern theory has discovered - it does provide you with a means of thinking about how economies operate when they are running at less than full employment. This model had already worried about fiscal policy at the zero interest rate bound, it had already thought about Say's law, the paradox of thrift, monetary versus fiscal policy, changing interest and investment elasticities in a crisis, etc., etc., etc. We were in the middle of a crisis and didn't have time to wait for new theory to be developed, we needed answers, answers that the elegant models that had been constructed over the last few decades simply could not provide. The Keynesian model did provide answers. We knew the answers had limitations - we were aware of the theoretical developments in modern macro and what they implied about the old Keynesian model - but it also provided guidance at a time when guidance was needed, and it did so within a theoretical structure that was built to be useful at times like we were facing. I wish we had better answers, but we didn't, so we did the best we could. And the best we could involved at least asking what the Keynesian model would tell us, and then asking if that advice has any relevance today. Sometimes it didn't, but that was no reason to ignore the answers when it did.

[So, depending on the question being asked, I am a New Keynesian, an Old Keynesian, a Classicist, etc.]

Friday, February 14, 2014

'Are New Keynesian DSGE Models a Faustian Bargain?'

Simon Wren-Lewis:

 Are New Keynesian DSGE models a Faustian bargain?: Some write as if this were true. The story is that after the New Classical counter revolution, Keynesian ideas could only be reintroduced into the academic mainstream by accepting a whole load of New Classical macro within DSGE models. This has turned out to be a Faustian bargain, because it has crippled the ability of New Keynesians to understand subsequent real world events. Is this how it happened? It is true that New Keynesian models are essentially RBC models plus sticky prices. But is this because New Keynesian economists were forced to accept the RBC structure, or did they voluntarily do so because they thought it was a good foundation on which to build? ...

Saturday, January 25, 2014

'Is Macro Giving Economics a Bad Rap?'

Chris House defends macro:

Is Macro Giving Economics a Bad Rap?: Noah Smith really has it in for macroeconomists. He has recently written an article in The Week in which he claims that macro is one of the weaker fields in economics...

I think the opposite is true. Macro is one of the stronger fields, if not the strongest ... Macro is quite productive and overall quite healthy. There are several distinguishing features of macroeconomics which set it apart from many other areas in economics. In my assessment, along most of these dimensions, macro comes out looking quite good.

First, macroeconomists are constantly comparing models to data. ... Holding theories up to the data is a scary and humiliating step but it is a necessary step if economic science is to make progress. Judged on this basis, macro is to be commended...

Second, in macroeconomics, there is a constant push to quantify theories. That is, there is always an effort to attach meaningful parameter values to the models. You can have any theory you want but at the end of the day, you are interested not only in the idea itself, but also in the magnitude of the effects. This is again one of the ways in which macro is quite unlike other fields.

Third, when the models fail (and they always fail eventually), the response of macroeconomists isn’t to simply abandon the model, but rather they highlight the nature of the failure.  ...

Lastly, unlike many other fields, macroeconomists need to have a wide array of skills and familiarity with many sub-fields of economics. As a group, macroeconomists have knowledge of a wide range of analytical techniques, probably better knowledge of history, and greater familiarity and appreciation of economic institutions than the average economist.

In his opening remarks, Noah concedes that macro is “the glamor division of econ”. He’s right. What he doesn’t tell you is that the glamour division is actually doing pretty well. ...

Sunday, January 19, 2014

'Rational Agents: Irrational Markets'

Roger Farmer:

Rational Agents: Irrational Markets: Bob Shiller wrote an interesting piece in today's NY Times on the irrationality of human action. Shiller argues that the economist's conception of human beings as rational is hard to square with the behavior of asset markets.
Although I agree with Shiller, that human action is inadequately captured by the assumptions that most economists make about behavior, I am not convinced that we need to go much beyond the rationality assumption, to understand what causes financial crises or why they are so devastatingly painful for large numbers of people. The assumption that agents maximize utility can get us a very very long way. ...
In my own work, I have shown that the labor market can go very badly wrong even when everybody is rational. My coauthors and I showed in a recent paper that the same idea holds in financial markets. Even when individuals are assumed to be rational, the financial markets may function very badly. ...
Miles Kimball and I have both been arguing that stock market fluctuations are inefficient and we both think that government should act to stabilize the asset markets. Miles' position is much closer to that of Bob Shiller; he thinks that agents are not always rational in the sense of Edgeworth. Miles and Bob may well be right. But in my view, the argument for stabilizing asset markets is much stronger. Even if we accept that agents are rational, it does not follow that swings in asset prices are Pareto efficient. But whether the motive arises from irrational people, or irrational markets; Miles and I agree: We can, and should, design an institution that takes advantage of the government's ability to trade on behalf of the unborn. More on that in a future post. ...

Saturday, January 18, 2014

'The Rationality Debate, Simmering in Stockholm'

Robert Shiller:

The Rationality Debate, Simmering in Stockholm, by Robert Shiller, Commentary, NY Times: Are people really rational in their economic decision making? That question divides the economics profession today, and the divisions were evident at the Nobel Week events in Stockholm last month.
There were related questions, too: Does it make sense to suppose that economic decisions or market prices can be modeled in the precise way that mathematical economists have traditionally favored? Or is there some emotionality in all of us that defies such modeling?
This debate isn’t merely academic. It’s fundamental, and the answers affect nearly everyone. Are speculative market booms and busts — like those that led to the recent financial crisis — examples of rational human reactions to new information, or of crazy fads and bubbles? Is it reasonable to base theories of economic behavior, which surely has a rational, calculating component, on the assumption that only that component matters?
The three of us who shared the Nobel in economic science — Eugene F. Fama, Lars Peter Hansen and I — gave very different answers in our Nobel lectures. ...

'Paul Krugman & the Nature of Economics'

Chris Dillow:

Paul Krugman & the nature of economics: Paul Krugman is being accused of hypocrisy for calling for an extension of unemployment benefits when one of his textbooks says "Generous unemployment benefits can increase both structural and frictional unemployment." I think he can be rescued from this charge, if we recognize that economics is not like (some conceptions of) the natural sciences, in that its theories are not universally applicable but rather of only local and temporal validity.
What I mean is that "textbook Krugman" is right in normal times when aggregate demand is highish. In such circumstances, giving people an incentive to find work through lower unemployment benefits can reduce frictional unemployment (the coexistence of vacancies and joblessness) and so increase output and reduce inflation.
But these might well not be normal times. It could well be that demand for labour is unusually weak; low wage inflation and employment-population ratios suggest as much. In this world, the priority is not so much to reduce frictional unemployment as to reduce "Keynesian unemployment". And increased unemployment benefits - insofar as they are a fiscal expansion - might do this. When "columnist Krugman" says that "enhanced [unemployment insurance] actually creates jobs when the economy is depressed", the emphasis must be upon the last five words.
Indeed, incentivizing people to find work when it is not (so much) available might be worse than pointless. Cutting unemployment benefits might incentivize people to turn to crime rather than legitimate work.
So, it could be that "columnist Krugman" and "textbook Krugman" are both right, but they are describing different states of the world - and different facts require different models...

Friday, January 03, 2014

'Economics as Craft'

Having argued the same thing several times, including recently, I agree with Dani Rodrik:

Economics as craft: I have an article in the IAS’s quarterly publication, the Institute Letter, on the state of Economics.  Despite the evident role of the economics profession in the recent crisis and my critical views on conventional wisdom in globalization and development, my take on the discipline is rather positive.
Where we frequently go wrong as economists is to look for the “one right model” – the single story that provides the best universal explanation. Yet, the strength of economics is that it provides a panoply of context-specific models. The right explanation depends on the situation we find ourselves in. Sometimes the Keynesians are right, sometimes the classicals. Markets work sometimes along the lines of competitive models and sometimes along the lines of monopolistic models. The craft of economics consists in being able to diagnose which of the models applies best in a given historical and geographical context. ...

Thursday, January 02, 2014

The Tale of Joe of Metrika

This is semi-related to this post from earlier today. The tale of Joe of Metrika:

... Metrika began as a small village - little more than a coach-stop and a mandatory tavern at a junction in the highway running from the ancient data mines in the South, to the great city of Enlightenment, far to the North. In Metrika, the transporters of data of all types would pause overnight on their long journey; seek refreshment at the tavern; and swap tales of their experiences on the road.

To be fair, the data transporters were more than just humble freight carriers. The raw material that they took from the data mines was largely unprocessed. The vast mountains of raw numbers usually contained valuable gems and nuggets of truth, but typically these were buried from sight. The data transporters used the insights that they gained from their raucous, beer-fired discussions and arguments (known locally as "seminars") with the Metrika locals at the tavern to help them to sift through the data and extract the valuable jewels. With their loads considerably lightened, these "data-miners" then continued on their journey to the City of Enlightenment in a much improved frame of mind, hangovers notwithstanding!

Over time, the town of Metrika prospered and grew as the talents of its citizens were increasingly recognized and valued by those in the surrounding districts, and by the data transporters.

Young Joe grew up happily, supported by his family of econometricians, and he soon developed the skills that were expected of his societal class. He honed his computing skills; developed a good nose for "dodgy" data; and studiously broadened and deepened his understanding of the various tools wielded by the artisans in the neighbouring town of Statsbourg.

In short, he was a model child!

But - he was torn! By the time that he reached the tender age of thirteen, he felt the need to make an important, life-determining, decision.

Should he align his talents with the burly crew who frequented the gym near his home - the macroeconometricians - or should he throw in his lot with the physically challenged bunch of empirical economists known locally as the microeconometricians? ...

Full story here.

'Tribalism, Biology, and Macroeconomics'

Paul Krugman:

Tribalism, Biology, and Macroeconomics: ...Pew has a new report about changing views on evolution. The big takeaway is that a plurality of self-identified Republicans now believe that no evolution whatsoever has taken place since the day of creation... The move is big: an 11-point decline since 2009. ... Democrats are slightly more likely to believe in evolution than they were four years ago.
So what happened after 2009 that might be driving Republican views? The answer is obvious, of course: the election of a Democratic president.
Wait — is the theory of evolution somehow related to Obama administration policy? Not that I’m aware of... The point, instead, is that Republicans are being driven to identify in all ways with their tribe — and the tribal belief system is dominated by anti-science fundamentalists. For some time now it has been impossible to be a good Republican while believing in the reality of climate change; now it’s impossible to be a good Republican while believing in evolution.
And of course the same thing is happening in economics. As recently as 2004, the Economic Report of the President (pdf) of a Republican administration could espouse a strongly Keynesian view..., the report — presumably written by Greg Mankiw — used the “s-word”, calling for “short-term stimulus”.
Given that intellectual framework, the reemergence of a 30s-type economic situation ... should have made many Republicans more Keynesian than before. Instead, at just the moment that demand-side economics became obviously critical, we saw Republicans — the rank and file, of course, but economists as well — declare their fealty to various forms of supply-side economics, whether Austrian or Lafferian or both. ...
And look, this has to be about tribalism. All the evidence ... has pointed in a Keynesian direction; but Keynes-hatred (and hatred of other economists whose names begin with K) has become a tribal marker, part of what you have to say to be a good Republican.

Before the Great Recession, macroeconomists seemed to be converging to a single intellectual framework. In Olivier Blanchard's famous words:

after the explosion (in both the positive and negative meaning of the word) of the field in the 1970s, there has been enormous progress and substantial convergence. For a while - too long a while - the field looked like a battlefield. Researchers split in different directions, mostly ignoring each other, or else engaging in bitter fights and controversies. Over time however, largely because facts have a way of not going away, a largely shared vision both of fluctuations and of methodology has emerged. Not everything is fine. Like all revolutions, this one has come with the destruction of some knowledge, and suffers from extremism, herding, and fashion. But none of this is deadly. The state of macro is good.

The recession revealed that the "extremism, herding, and fashion" is much worse than many of us realized, and that the rifts that have reemerged are as strong as ever. What it didn't reveal is how to move beyond this problem. I thought evidence would matter more than it does, but somehow we seem to have lost the ability to distinguish between competing theoretical structures based upon econometric evidence (if we ever had it). The state of macro is not good, and the path to improvement is hard to see, but it must involve a shared agreement over the evidence-based means through which the profession on both sides of these debates can embrace or reject particular theoretical models.

Thursday, December 19, 2013

'Is Finance Guided by Good Science or Convincing Magic?'

Tim Johnson:

Is finance guided by good science or convincing magic?: Noah Smith posted a piece, "Freshwater vs. Saltwater divides macro, but not finance". As a mathematician I did not find the substance of the piece all that interesting (a nice explanation is here), but there was a comment from Stephen Williamson that really caught my attention:

Another thought. You've persisted with the view that when the science is crappy - whether because of bad data or some kind of bad equilibrium I guess - there is disagreement. ... What's at stake in finance? The flow of resources to finance people comes from Wall Street. All the Wall Street people care about is making money, so good science gets rewarded. I'm not saying that macroeconomic science is bad, only that there are plenty of opportunities for policymakers to be sold schlock macro pseudo-science.

 What I aim to do in this post is offer an explanation for the 'divide' in economics from the perspective of moral philosophy and on this basis argue that finance is not guided by science but by magic. ...

'More on the Illusion of Superiority'

Simon Wren-Lewis:

More on the illusion of superiority: For economists, and those interested in methodology. Tony Yates responds to my comment on his post on microfoundations, but really just restates the microfoundations purist position. (Others have joined in - see links below.) As Noah Smith confirms, this is the position that many macroeconomists believe in, and many are taught, so it’s really important to see why it is mistaken. There are three elements I want to focus on here: the Lucas critique, what we mean by theory, and time.
My argument can be put as follows: an ad hoc but data inspired modification to a microfounded model (what I call an eclectic model) can produce a better model than a fully microfounded model. Tony responds “If the objective is to describe the data better, perhaps also to forecast the data better, then what is wrong with this is that you can do better still, and estimate a VAR.” This idea of “describing the data better”, or forecasting, is a distraction, so let’s say I want a model that provides a better guide for policy actions. So I do not want to estimate a VAR. My argument still stands.
But what about the Lucas critique? ...[continue]...

[In Maui, will post as I can...]

Tuesday, December 17, 2013

'Four Missing Ingredients in Macroeconomic Models'

Antonio Fatas:

Four missing ingredients in macroeconomic models: It is refreshing to see top academics questioning some of the assumptions that economists have been using in their models. Krugman, Brad DeLong and many others are opening a methodological debate about what constitutes an acceptable economic model and how to validate its predictions. The role of micro foundations, the existence of a natural state towards which the economy gravitates,... are all very interesting debates that tend to be ignored (or assumed away) in academic research.

I would like to go further and add a few items to their list... In random order:

1. The business cycle is not symmetric. ... Interestingly, it was Milton Friedman who put forward the "plucking" model of business cycles as an alternative to the notion that fluctuations are symmetric. In Friedman's model output can only be at or below potential, its maximum. If we were to rely on asymmetric models of the business cycle, our views on potential output and the natural rate of unemployment would be radically different. We would not be rewriting history to claim that in 2007 GDP was above potential in most OECD economies and we would not be arguing that the natural unemployment rate in Southern Europe is very close to its actual rate.

2. ...most academic research is produced around models where small and frequent shocks drive economic fluctuations, as opposed to large and infrequent events. The disconnect probably comes from the fact that it is so much easier to write models with small and frequent shocks than to define a (stochastic?) process for large events. It gets even worse if one thinks that recessions are caused by the dynamics generated during expansions. Most economic models rely on unexpected events to generate crises, and not on the internal dynamics that precede the crisis.

[A little bit of self-promotion: my paper with Ilian Mihov on the shape and length of recoveries presents some evidence in favor of these two hypotheses.]

3. There has to be more than price rigidity. ...

4. The notion that co-ordination across economic agents matters to explain the dynamics of business cycles receives very limited attention in academic research. ...

I am aware that there are plenty of papers that deal with these four issues, some of them published in the best academic journals. But most of these papers are not mainstream. Most economists are sympathetic to these assumptions but avoid writing papers using them because they are afraid they will be told that their assumptions are ad-hoc and that the model does not have enough micro foundations (for the best criticism of this argument, read the latest post of Simon Wren-Lewis). Time for a change?

On the plucking model, see here and here.
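For what the plucking model looks like mechanically, here is a minimal simulation. It is my own stylization of Friedman's idea with made-up parameters, not a calibrated model: output moves along a growing ceiling and is occasionally "plucked" downward, then recovers. Output is never above the ceiling, and the size of a recovery mirrors the size of the preceding drop rather than the size of the preceding expansion, which is the asymmetry at issue.

```python
# A stylized "plucking" model of the business cycle (illustrative parameters).
# Output = ceiling - gap, where the gap is never negative: rare negative shocks
# open a gap, which then closes gradually as output recovers toward the ceiling.
import numpy as np

rng = np.random.default_rng(2)
T = 200
g = 0.005                       # trend growth of the ceiling (per period)
rho = 0.85                      # persistence of the gap (speed of recovery)

ceiling = g * np.arange(T)
gap = np.zeros(T)
for t in range(1, T):
    pluck = rng.choice([0.0, 0.06], p=[0.95, 0.05])   # rare downward pluck
    gap[t] = max(0.0, rho * gap[t - 1] + pluck - 0.002)

output = ceiling - gap

print("Max of output minus ceiling:", (output - ceiling).max())   # never positive
print("Periods at or near the ceiling:", int((gap < 1e-3).sum()))
```

A symmetric model, by contrast, would spend as much time above trend as below it, which is what licenses claims like "GDP was above potential in 2007."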

Friday, December 13, 2013

Sticky Ideology

Paul Krugman:

Rudi Dornbusch and the Salvation of International Macroeconomics (Wonkish): ...Ken Rogoff had a very good paper on all this, in which he also says something about the state of affairs within the economics profession at the time:

The Chicago-Minnesota School maintained that sticky prices were nonsense and continued to advance this view for at least another fifteen years. It was the dominant view in academic macroeconomics. Certainly, there was a long period in which the assumption of sticky prices was a recipe for instant rejection at many leading journals. Despite the religious conviction among macroeconomic theorists that prices cannot be sticky, the Dornbusch model remained compelling to most practical international macroeconomists. This divergence of views led to a long rift between macroeconomics and much of mainstream international finance …

There are more than a few of us in my generation of international economists who still bear the scars of not being able to publish sticky-price papers during the years of new neoclassical repression.

Notice that this isn’t the evil Krugman talking; it’s the respectable Rogoff. Yet he too is in effect describing neoclassical macro as a sort of cult, actively suppressing alternative approaches. What he gets wrong is in the part I’ve elided with my “…”, in which he asserts that this is all behind us. As we saw when crisis struck, Chicago/Minnesota had in fact learned nothing and was pretty much unaware of the whole New Keynesian enterprise — and from what I hear about macro hiring, the suppression of ideas at odds with the cult remains in full force. ...

Saturday, December 07, 2013

Econometrics and 'Big Data'

I tweeted this link, and it's getting far, far more retweets than I would have expected, so I thought I'd note it here:

Econometrics and "Big Data", by Dave Giles: In this age of "big data" there's a whole new language that econometricians need to learn. ... What do you know about such things as:

  • Decision trees 
  • Support vector machines
  • Neural nets 
  • Deep learning
  • Classification and regression trees
  • Random forests
  • Penalized regression (e.g., the lasso, lars, and elastic nets)
  • Boosting
  • Bagging
  • Spike and slab regression?

Probably not enough!

If you want some motivation to rectify things, a recent paper by Hal Varian ... titled, "Big Data: New Tricks for Econometrics" ... provides an extremely readable introduction to several of these topics.

He also offers a valuable piece of advice:

"I believe that these methods have a lot to offer and should be more widely known and used by economists. In fact, my standard advice to graduate students these days is 'go to the computer science department and take a class in machine learning'."

Wednesday, December 04, 2013

'Microfoundations': I Do Not Think That Word Means What You Think It Means

Brad DeLong responds to my column on macroeconomic models:

“Microfoundations”: I Do Not Think That Word Means What You Think It Means

The basic point is this:

...New Keynesian models with more or less arbitrary micro foundations are useful for rebutting claims that all is for the best macro economically in this best of all possible macroeconomic worlds. But models with micro foundations are not of use in understanding the real economy unless you have the micro foundations right. And if you have the micro foundations wrong, all you have done is impose restrictions on yourself that prevent you from accurately fitting reality.
Thus your standard New Keynesian model will use Calvo pricing and model the current inflation rate as tightly coupled to the present value of expected future output gaps. Is this a requirement anyone really wants to put on the model intended to help us understand the world that actually exists out there? ...
After all, Ptolemy had microfoundations: Mercury moved more rapidly than Saturn because the Angel of Mercury beat his wings more rapidly than the Angel of Saturn and because Mercury was lighter than Saturn…

Tuesday, December 03, 2013

One Model to Rule Them All?

Latest column:

Is There One Model to Rule Them All?: The recent shake-up at the research department of the Federal Reserve Bank of Minneapolis has rekindled a discussion about the best macroeconomic model to use as a guide for policymakers. Should we use modern New Keynesian models that performed so poorly prior to and during the Great Recession? Should we return to a modernized version of the IS-LM model that was built to explain the Great Depression and answer the questions we are confronting today? Or do we need a brand new class of models altogether? ...

Sunday, December 01, 2013

God Didn’t Make Little Green Arrows

Paul Krugman notes work by my colleague George Evans relating to the recent debate over the stability of GE models:

God Didn’t Make Little Green Arrows: Actually, they’re little blue arrows here. In any case George Evans reminds me of a paper (pdf) he and co-authors published in 2008 about stability and the liquidity trap, which he later used to explain what was wrong with the Kocherlakota notion (now discarded, but still apparently defended by Williamson) that low rates cause deflation.

The issue is the stability of the deflation steady state ("on the importance of little arrows"). This is precisely the issue George studied in his 2008 European Economic Review paper with E. Guse and S. Honkapohja. The following figure from that paper has the relevant little arrows:

[Figure: phase diagram for inflation and consumption expectations under adaptive learning, from Evans, Guse and Honkapohja (2008)]

This is the two-dimensional figure showing the phase diagram for inflation and consumption expectations under adaptive learning (in the New Keynesian model both consumption, or output, expectations and inflation expectations are central). The intended steady state (marked by a star) is locally stable under learning, but the deflation steady state (given by the other intersection of the black curves) is not locally stable, and there are nearby divergent paths with falling inflation and falling output. There is also a two-page summary in George's 2009 Annual Review of Economics paper.
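A one-dimensional caricature (mine, not the Evans-Guse-Honkapohja model itself, which tracks both inflation and consumption expectations) may help with the arrows. Suppose actual inflation is an increasing function of expected inflation, \pi_t = T(\pi^e_t), and expectations are updated adaptively with a small gain \gamma:

\[ \pi^e_{t+1} = \pi^e_t + \gamma\,\bigl(T(\pi^e_t) - \pi^e_t\bigr), \qquad 0 < \gamma < 1. \]

A steady state is a fixed point T(\bar\pi) = \bar\pi, and it is locally stable under this learning rule when T'(\bar\pi) < 1 and unstable when T'(\bar\pi) > 1. In the model of the paper, the demand channel makes expected deflation feed back more than one-for-one into actual outcomes near the low steady state, so the condition fails there while holding at the intended steady state; that is why the arrows point away from the deflation steady state and toward deflationary spirals below it.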

The relevant policy issue came up in 2010 in connection with Kocherlakota's comments about interest rates, and I got George to make a video in Sept. 2010 that makes the implied monetary policy point.

I think it would be a step forward if the EER paper helped Williamson and others who have not understood the disequilibrium stability point. The full EER reference is Evans, George; Guse, Eran and Honkapohja, Seppo, "Liquidity Traps, Learning and Stagnation," European Economic Review, Vol. 52, 2008, 1438-1463.

Wednesday, November 27, 2013

'Who Made Economics?'

Daniel Little:

Who made economics?, by Daniel Little: The discipline of economics has a high level of intellectual status, even hegemony, in today’s social sciences — especially in universities in the United States. It also has a very specific set of defining models and theories that distinguish between “good” and “bad” economics. This situation suggests two topics for research: how did political economy and its successors ascend to this position of prestige in the social sciences? And how did this particular mix of techniques, problems, mathematical methods, and exemplar theoretical papers come to define the mainstream discipline? How did this governing disciplinary matrix develop and win the field?

One of the most interesting people taking on questions like these is Marion Fourcade. Her Economists and Societies: Discipline and Profession in the United States, Britain, and France, 1890s to 1990s was discussed in an earlier post (link). An early place where she expressed her views on these topics is in her 2001 article, “Politics, Institutional Structures, and the Rise of Economics: A Comparative Study” (link). There she describes the evolution of economics in these terms:

Since the middle of the nineteenth century, the study of the economy has evolved from a loose discursive "field," with no clear and identifiable boundaries, into a fully "professionalized" enterprise, relying on both a coherent and formalized framework, and extensive practical claims in administrative, business, and mass media institutions. (397)

And she argues that this process was contingent, path-dependent, and only loosely guided by a compass of “better” science:

Overall, contrary to the frequent assumption that economics is a universal and universally shared science, there seems to be considerable cross-national variation in (1) the [degree] and nature of the institutionalization of an economic knowledge field, (2) the forms of professional action of economists, and (3) intellectual traditions in the [field] of economics. (398)

Fourcade approaches this subject as a sociologist; so she wants to understand the institutional and structural factors that led to the shaping and stabilization of this field of knowledge.

Understanding the relationship between the institutional and intellectual aspects of knowledge production requires, first and foremost, a historical analysis of the conditions under which a coherent domain of discourse and practice was established in the first place. (398)

A key question in this article (and in Economists and Societies) is how the differences that exist between the disciplines of economics in France, Germany, Great Britain, and the US came to be. The core of the answer that she gives rests on her analysis of the relationships that existed between practitioners and the state: "A comparison of the four cases under investigation suggests that the entrenchment of the economics profession was profoundly shaped by the relationship of its practitioners to the larger political institutions and culture of their country" (432). So differences between economics in, say, France and the United States, are to be traced back to the different ways in which academic practitioners of economic analysis and policy recommendations were situated with regard to the institutions of the state.

It is possible to treat the history of ideas internally ("systems of ideas are driven by rational discussion of their implications") and externally ("systems of ideas are driven by the social needs and institutional arrangements of a certain time"). The best sociology of knowledge avoids this dichotomy, allowing for both the idea that a field of thought advances in part through the scientific and conceptual debates that occur within it and the idea that historically specific structures and institutions have important effects on the shape and direction of the development of a field. Fourcade avoids the dichotomy by treating seriously the economic reasoning that took place at a time and place, while also searching out the institutional and structural factors that favored this approach or that in a particular national setting.

This is sociology of knowledge done at a high level of resolution. Fourcade wants to identify the mechanisms through which "societal institutions" influence the production of knowledge in the four country contexts that she studies (Germany, Great Britain, France, and the US). She does not suggest that economics lacks scientific content or that economic debates do not have a rational structure of argument. But she does argue that the configuration of the field itself was not the product of rational scientific advance and discovery, but instead was shaped by the institutions of the university and the exigencies of the societies within which it developed.

Fourcade's own work suggests a different kind of puzzle -- this time in the development of the field of the sociology of knowledge. Fourcade's topic seems to be one that is tailor-made for treatment within the terms of Bourdieu's theory of a field. And in fact some of Fourcade's analysis of the institutional factors that influenced the success or failure of academic economists in Britain, Germany, or the US fits Bourdieu's theory very well. Bourdieu's book Homo Academicus appeared in 1984 in French and 1988 in English. But Fourcade does not make use of Bourdieu's ideas at all in the 2001 article -- some 17 years after the book first appeared. Reference to elements of Bourdieu's approach appears only in the 2009 book. There she writes:

Bourdieu found that the social sciences occupy a very peculiar position among all scientific fields in that external factors play an especially important part in determining these fields' internal stratification and structure of authority.... Within each disciplinary field, the subjective (i.e., agentic) and objective (i.e., structural) positions of individuals are "homologous": in other words, the polar opposition between "economic" and "cultural" capital is replicated at the field's level, and mirrors the orthodoxy/heterodoxy divide. (23)

So why was Bourdieu not considered in the 2001 article? This shift in orientation may simply be a feature of the author's own intellectual development. But it may also be diagnostic of the rise of Bourdieu's influence on the sociology of knowledge in the 1990s and 2000s. It would be interesting to see a graph of the frequency of references to the book since 1984.
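
The graph Little asks for is straightforward to produce once yearly citation counts have been collected, say by hand from a citation index. A minimal sketch follows; the file name homo_academicus_citations.csv, its two columns (year, citations), and any numbers it would contain are hypothetical placeholders, not real data.

```python
# Plot yearly references to Homo Academicus from a hand-collected CSV.
# The file and its contents are hypothetical; the counts would have to be
# gathered from a citation index before running this.

import csv
import matplotlib.pyplot as plt

years, counts = [], []
with open("homo_academicus_citations.csv", newline="") as f:
    for row in csv.DictReader(f):          # expects columns: year, citations
        years.append(int(row["year"]))
        counts.append(int(row["citations"]))

plt.plot(years, counts, marker="o")
plt.title("Yearly references to Bourdieu's Homo Academicus since 1984")
plt.xlabel("Year")
plt.ylabel("Citations per year")
plt.tight_layout()
plt.show()
```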

(Gabriel Abend's treatment of the differences that exist between the paradigms of sociology in the United States and Mexico is of interest here as well; link.)

Tuesday, November 05, 2013

Do People Have Rational Expectations?

New column:

Do People Have Rational Expectations?, by Mark Thoma

Not always, and economic models need to take this into account.
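
As a toy illustration of why the answer matters for modeling (my own example, not drawn from the column): in the sketch below, inflation follows a persistent AR(1) process. A forecaster who knows the process makes errors that are essentially uncorrelated over time, while a simple adaptive forecaster makes errors that are serially correlated, the sort of systematic departure a model that imposes rational expectations would miss. All parameter values are invented.

```python
# Compare forecast errors under rational and adaptive expectations for an
# assumed AR(1) inflation process: pi_t = MU + RHO*(pi_{t-1} - MU) + shock.
# All parameters are made up for illustration.

import random

random.seed(0)
RHO, MU, SIGMA = 0.8, 2.0, 0.5   # persistence, mean, shock std. dev. (assumed)
GAIN = 0.3                        # gain of the adaptive forecaster (assumed)

pi_prev = MU
adaptive_forecast = MU
re_errors, ad_errors = [], []

for _ in range(5000):
    # both forecasts are made before the period's shock is realized
    rational_forecast = MU + RHO * (pi_prev - MU)   # knows the true process
    pi = rational_forecast + random.gauss(0.0, SIGMA)
    re_errors.append(pi - rational_forecast)
    ad_errors.append(pi - adaptive_forecast)
    # the adaptive forecaster only updates after observing the outcome
    adaptive_forecast += GAIN * (pi - adaptive_forecast)
    pi_prev = pi

def autocorr(errors):
    """Lag-1 autocorrelation of a list of forecast errors."""
    mean = sum(errors) / len(errors)
    num = sum((a - mean) * (b - mean) for a, b in zip(errors[1:], errors[:-1]))
    den = sum((e - mean) ** 2 for e in errors)
    return num / den

print(f"lag-1 autocorrelation, rational-expectations errors: {autocorr(re_errors):+.2f}")
print(f"lag-1 autocorrelation, adaptive-expectations errors: {autocorr(ad_errors):+.2f}")
```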

Wednesday, October 30, 2013

'Economics, Good and Bad'

Chris Dillow:

Economics, good and bad: Attacks on mainstream economics such as this by Aditya Chakrabortty leave me hopelessly conflicted. ...[explains why he's conflicted]...
The division that matters is not so much between heterodox and mainstream economics, but between good economics and bad. I'll give just two examples of what I mean.
First, good economics tests itself against the facts. What makes Mankiw's defence of the 1% so risible is that it ducks out of the question of whether neoclassical explanations for rising inequality are actually empirically valid. Just because something could be consistent with a theory does not mean that it is.
Secondly, good economics asks: which model (or better, which mechanism or which theory) fits the problem at hand? For example, if your question is "should I invest in this high-charging actively managed fund?" you must at least take the efficient market hypothesis as your starting point. But if you're asking "are markets prone to bubbles?" you might not. As Noah says, the EMH is a great guide for investors, but not so much for policy-makers.
It's in this sense that I don't like pieces like Aditya's. Ordinary everyday economics - of the sort that's useful for real people - isn't about bigthink and meta-theorizing, but about careful consideration of the facts.