Category Archive for: Methodology

Thursday, March 27, 2014

'The Misuse of Theoretical Models in Finance and Economics'

 Stanford University's Paul Pfleiderer:

Chameleons: The Misuse of Theoretical Models in Finance and Economics, by Paul Pfleiderer, March 2014: Abstract: In this essay I discuss how theoretical models in finance and economics are used in ways that make them “chameleons” and how chameleons devalue the intellectual currency and muddy policy debates. A model becomes a chameleon when it is built on assumptions with dubious connections to the real world but nevertheless has conclusions that are uncritically (or not critically enough) applied to understanding our economy. I discuss how chameleons are created and nurtured by the mistaken notion that one should not judge a model by its assumptions, by the unfounded argument that models should have equal standing until definitive empirical tests are conducted, and by misplaced appeals to “as-if” arguments, mathematical elegance, subtlety, references to assumptions that are “standard in the literature,” and the need for tractability.

Sunday, March 23, 2014

On Greg Mankiw's 'Do No Harm'

A rebuttal to Greg Mankiw's claim that the government should not interfere in voluntary exchanges. This is from Rakesh Vohra at Theory of the Leisure Class:

Do No Harm & Minimum Wage: In the March 23rd edition of the NY Times Mankiw proposes a 'do no harm' test for policy makers:

…when people have voluntarily agreed upon an economic arrangement to their mutual benefit, that arrangement should be respected.

There is a qualifier for negative externalities, and he goes on to say:

As a result, when a policy is complex, hard to evaluate and disruptive of private transactions, there is good reason to be skeptical of it.

Minimum wage legislation is offered as an example of a policy that fails the do no harm test. ...

There is an immediate 'heart strings' argument against the test, because indentured servitude passes the 'do no harm' test. ... I want to focus instead on two other aspects of the 'do no harm' principle contained in the words 'voluntarily' and 'benefit'. What is voluntary? And benefit compared to what? ...

When parties negotiate to their mutual benefit, it is to their benefit relative to the status quo. When the status quo presents one agent an outside option that is untenable, say starvation, is bargaining voluntary, even if the other agent is not directly threatening starvation? The difficulty with the 'do no harm' principle in policy matters is the assumption that the status quo does less harm than a change in it would. This is not clear to me at all. Let me illustrate this...

Assuming a perfectly competitive market, imposing a minimum wage constraint above the equilibrium wage would reduce total welfare. What if the labor market were not perfectly competitive? In particular, suppose it was a monopsony employer constrained to offer the same wage to everyone employed. Then, imposing a minimum wage above the monopsonist’s optimal wage would increase total welfare.

[There is also an example based upon differences in patience that I left out.]
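To see the monopsony point in symbols, here is a minimal worked example of my own (illustrative functional forms, not from Vohra's post): take linear inverse labor supply w(L) = L and a constant marginal revenue product m.

```latex
% Minimal monopsony example (my illustrative functional forms, not
% Vohra's): inverse labor supply w(L) = L, constant marginal revenue
% product m.
\begin{align*}
\text{Monopsonist:} \quad & \max_{L}\;(m - L)\,L
  \;\Longrightarrow\; L^{m} = \tfrac{m}{2},\;\; w^{m} = \tfrac{m}{2} \\
\text{Competitive benchmark:} \quad & w = m \;\Longrightarrow\; L^{c} = m \\
\text{Minimum wage } \bar{w} \in (\tfrac{m}{2},\, m): \quad &
  \text{labor cost is flat at } \bar{w}, \text{ so the firm hires }
  L = \bar{w} > \tfrac{m}{2}
\end{align*}
% Total surplus m L - L^2/2 is increasing in L for L < m, so both
% employment and welfare rise relative to the unregulated monopsony.
```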

Friday, March 21, 2014

'Labor Markets Don't Clear: Let's Stop Pretending They Do'

Roger Farmer:

Labor Markets Don't Clear: Let's Stop Pretending They Do: Beginning with the work of Robert Lucas and Leonard Rapping in 1969, macroeconomists have modeled the labor market as if the wage always adjusts to equate the demand and supply of labor.

I don't think that's a very good approach. It's time to drop the assumption that the demand equals the supply of labor.
Why would you want to delete the labor market clearing equation from an otherwise standard model? Because setting the demand equal to the supply of labor is a terrible way of understanding business cycles. ...
Why is this a big deal? Because 90% of the macro seminars I attend, at conferences and universities around the world, still assume that the labor market is an auction where anyone can work as many hours as they want at the going wage. Why do we let our students keep doing this?

'The Counter-Factual & the Fed’s QE'

I tried to make this point in a recent column (it was about fiscal rather than monetary policy, but the same point applies), but I think Barry Ritholtz makes the point better and more succinctly:

Understanding Why You Think QE Didn't Work, by Barry Ritholtz: Maybe you have heard a line that goes something like this: The weak recovery is proof that the Federal Reserve’s program of asset purchases, otherwise known as quantitative easing, doesn't work.
If you were the one saying those words, you don't understand the counterfactual. ...
This flawed analytical paradigm has many manifestations, and not just in the investing world. They all rely on the same equation: If you do X, and there is no measurable change, X is therefore ineffective.
The problem with this “non-result result” is that it ignores what would have occurred otherwise. Might “no change” be an improvement from what otherwise would have happened? No change, last time I checked, is better than a free-fall.
If you are testing a new medication to reduce tumors, you want to see what happened to the group that didn't get the test therapy. Maybe this control group experienced rapid tumor growth. Hence, a result where there is no increase in tumor mass in the group receiving the therapy would be considered a very positive outcome.
We run into the same issue with QE. ... Without that control group, we simply don't know. ...
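Ritholtz's control-group logic can be put in a few lines of code. A toy sketch, with entirely hypothetical numbers, just to make the inference problem concrete:

```python
# Illustrative counterfactual sketch (hypothetical numbers only).
# "No measurable change" after a policy is not evidence of no effect
# unless you know the counterfactual path.

baseline_growth = -2.0   # hypothetical: economy in free-fall without policy
policy_effect = 2.0      # hypothetical: policy adds 2 points of growth

observed_with_policy = baseline_growth + policy_effect   # = 0.0
observed_counterfactual = baseline_growth                # = -2.0, never observed

# The naive reading compares the observed outcome to zero:
naive = "policy failed" if observed_with_policy <= 0 else "policy worked"

# The correct reading compares it to the (unobserved) counterfactual:
true_effect = observed_with_policy - observed_counterfactual   # = +2.0

print(f"observed outcome: {observed_with_policy:+.1f}")
print(f"naive conclusion: {naive}")
print(f"true effect vs. counterfactual: {true_effect:+.1f}")
```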

Friday, February 21, 2014

'What Game Theory Means for Economists'

At MoneyWatch:

Explainer: What "game theory" means for economists, by Mark Thoma: Coming upon the term "game theory" this week, your first thought would likely be about the Winter Olympics in Sochi. But here we're going to discuss how game theory applies in economics, where it's widely used in topics far removed from the ski slopes and ice rinks where elite athletes compete. ...

Saturday, February 15, 2014

'Microfoundations and Mephistopheles'

Paul Krugman continues the discussion on "whether New Keynesians made a Faustian bargain":

Microfoundations and Mephistopheles (Wonkish): Simon Wren-Lewis asks whether New Keynesians made a Faustian bargain by accepting the New Classical diktat that models must be grounded in intertemporal optimization — whether they purchased academic respectability at the expense of losing their ability to grapple usefully with the real world.
Wren-Lewis’s answer is no, because New Keynesians were only doing what they would have wanted to do even if there hadn’t been a de facto blockade of the journals against anything without rational-actor microfoundations. He has a point: long before anyone imagined doing anything like real business cycle theory, there had been a steady trend in macro toward grounding ideas in more or less rational behavior. The life-cycle model of consumption, for example, was clearly a step away from the Keynesian ad hoc consumption function toward modeling consumption choices as the result of rational, forward-looking behavior.
But I think we need to be careful about defining what, exactly, the bargain was. I would agree that being willing to use models with hyperrational, forward-looking agents was a natural step even for Keynesians. The Faustian bargain, however, was the willingness to accept the proposition that only models that were microfounded in that particular sense would be considered acceptable. ...
So it was the acceptance of the unique virtue of one concept of microfoundations that constituted the Faustian bargain. And one thing you should always know, when making deals with the devil, is that the devil cheats. New Keynesians thought that they had won some acceptance from the freshwater guys by adopting their methods; but when push came to shove, it turned out that there wasn’t any real dialogue, and never had been.

My view is that micro-founded models are useful for answering some questions, but other types of models are best for other questions. There is no one model that is best in every situation; the model that should be used depends upon the question being asked. I've made this point many times, most recently in this column, and also in this post from September 2011 that repeats arguments from September 2009:

New Old Keynesians?: Tyler Cowen uses the term "New Old Keynesian" to describe "Paul Krugman, Brad DeLong, Justin Wolfers and others." I don't know if I am part of the "and others" or not, but in any case I resist being assigned a particular label.

Why? Because I believe the model we use depends upon the questions we ask (this is a point emphasized by Peter Diamond at the recent Nobel Meetings in Lindau, Germany, and echoed by other speakers who followed him). If I want to know how monetary authorities should respond to relatively mild shocks in the presence of price rigidities, the standard New Keynesian model is a good choice. But if I want to understand the implications of a breakdown in financial intermediation and the possible policy responses to it, those models aren't very informative. They weren't built to answer this question (some variations do get at this, but not in a fully satisfactory way).

Here's a discussion of this point from a post written two years ago:

There is no grand, unifying theoretical structure in economics. We do not have one model that rules them all. Instead, what we have are models that are good at answering some questions - the ones they were built to answer - and not so good at answering others.

If I want to think about inflation in the very long run, the classical model and the quantity theory are a very good guide. But they are not very good at looking at the short run. For questions about how output and other variables move over the business cycle and for advice on what to do about it, I find the Keynesian model in its modern form (i.e. the New Keynesian model) to be much more informative than other models that are presently available.
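[As a reminder, the long-run guide here is the quantity equation; a one-line sketch in growth rates:]

```latex
% Quantity equation: M V = P Y.  In growth rates, with money growth \mu,
% real output growth g, and velocity roughly stable in the long run:
\pi \;\approx\; \mu - g
% Long-run inflation is money growth less real growth, which is the
% sense in which the classical model is a good guide at that horizon.
```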

But the New Keynesian model has its limits. It was built to capture "ordinary" business cycles driven by price sluggishness of the sort that can be captured by the Calvo model of price rigidity. The standard versions of this model do not explain how a financial collapse of the type we just witnessed comes about, hence they have little to say about what to do about one (which makes me suspicious of the results touted by people using multipliers derived from DSGE models based upon ordinary price rigidities). For these types of disturbances, we need some other type of model, but it is not clear what model is needed. There is no generally accepted model of financial catastrophe that captures the variety of financial market failures we have seen in the past.

But what model do we use? Do we go back to old Keynes, to the 1978 model that Robert Gordon likes? Do we take some of the variations of the New Keynesian model that include effects such as financial accelerators and try to enhance those? Is that the right direction to proceed? Are the Austrians right? Do we focus on Minsky? Or do we need a model that we haven't discovered yet?

We don't know, and until we do, I will continue to use the model I think gives the best answer to the question being asked. The reason that many of us looked backward for a model to help us understand the present crisis is that none of the current models were capable of explaining what we were going through. The models were largely constructed to analyze policy in the context of a Great Moderation, i.e. within a fairly stable environment. They had little to say about financial meltdown. My first reaction was to ask if the New Keynesian model had any derivative forms that would allow us to gain insight into the crisis and what to do about it and, while there were some attempts in that direction, the work was somewhat isolated and had not gone through the type of thorough analysis needed to develop robust policy prescriptions. There was something to learn from these models, but they really weren't up to the task of delivering specific answers. That may come, but we aren't there yet.

So, if nothing in the present is adequate, you begin to look to the past. The Keynesian model was constructed to look at exactly the kinds of questions we needed to answer, and as long as you are aware of the limitations of this framework - the ones that modern theory has discovered - it does provide you with a means of thinking about how economies operate when they are running at less than full employment. This model had already worried about fiscal policy at the zero interest rate bound, it had already thought about Say's law, the paradox of thrift, monetary versus fiscal policy, changing interest and investment elasticities in a crisis, etc., etc., etc. We were in the middle of a crisis and didn't have time to wait for new theory to be developed; we needed answers, answers that the elegant models that had been constructed over the last few decades simply could not provide. The Keynesian model did provide answers. We knew the answers had limitations - we were aware of the theoretical developments in modern macro and what they implied about the old Keynesian model - but it also provided guidance at a time when guidance was needed, and it did so within a theoretical structure that was built to be useful at times like the one we were facing. I wish we had better answers, but we didn't, so we did the best we could. And the best we could involved at least asking what the Keynesian model would tell us, and then asking if that advice has any relevance today. Sometimes it didn't, but that was no reason to ignore the answers when it did.

[So, depending on the question being asked, I am a New Keynesian, an Old Keynesian, a Classicist, etc.]

Friday, February 14, 2014

'Are New Keynesian DSGE Models a Faustian Bargain?'

Simon Wren-Lewis:

 Are New Keynesian DSGE models a Faustian bargain?: Some write as if this were true. The story is that after the New Classical counter revolution, Keynesian ideas could only be reintroduced into the academic mainstream by accepting a whole load of New Classical macro within DSGE models. This has turned out to be a Faustian bargain, because it has crippled the ability of New Keynesians to understand subsequent real world events. Is this how it happened? It is true that New Keynesian models are essentially RBC models plus sticky prices. But is this because New Keynesian economists were forced to accept the RBC structure, or did they voluntarily do so because they thought it was a good foundation on which to build? ...

Saturday, January 25, 2014

'Is Macro Giving Economics a Bad Rap?'

Chris House defends macro:

Is Macro Giving Economics a Bad Rap?: Noah Smith really has it in for macroeconomists. He has recently written an article in The Week in which he claims that macro is one of the weaker fields in economics...

I think the opposite is true. Macro is one of the stronger fields, if not the strongest ... Macro is quite productive and overall quite healthy. There are several distinguishing features of macroeconomics which set it apart from many other areas in economics. In my assessment, along most of these dimensions, macro comes out looking quite good.

First, macroeconomists are constantly comparing models to data. ... Holding theories up to the data is a scary and humiliating step but it is a necessary step if economic science is to make progress. Judged on this basis, macro is to be commended...

Second, in macroeconomics, there is a constant push to quantify theories. That is, there is always an effort to attach meaningful parameter values to the models. You can have any theory you want but at the end of the day, you are interested not only in the idea itself, but also in the magnitude of the effects. This is again one of the ways in which macro is quite unlike other fields.

Third, when the models fail (and they always fail eventually), the response of macroeconomists isn’t to simply abandon the model, but rather they highlight the nature of the failure.  ...

Lastly, unlike many other fields, macroeconomists need to have a wide array of skills and familiarity with many sub-fields of economics. As a group, macroeconomists have knowledge of a wide range of analytical techniques, probably better knowledge of history, and greater familiarity and appreciation of economic institutions than the average economist.

In his opening remarks, Noah concedes that macro is “the glamor division of econ”. He’s right. What he doesn’t tell you is that the glamour division is actually doing pretty well. ...

Sunday, January 19, 2014

'Rational Agents: Irrational Markets'

Roger Farmer:

Rational Agents: Irrational Markets: Bob Shiller wrote an interesting piece in today's NY Times on the irrationality of human action. Shiller argues that the economist's conception of human beings as rational is hard to square with the behavior of asset markets.
Although I agree with Shiller that human action is inadequately captured by the assumptions that most economists make about behavior, I am not convinced that we need to go much beyond the rationality assumption to understand what causes financial crises or why they are so devastatingly painful for large numbers of people. The assumption that agents maximize utility can get us a very very long way. ...
In my own work, I have shown that the labor market can go very badly wrong even when everybody is rational. My coauthors and I showed in a recent paper that the same idea holds in financial markets. Even when individuals are assumed to be rational, the financial markets may function very badly. ...
Miles Kimball and I have both been arguing that stock market fluctuations are inefficient and we both think that government should act to stabilize the asset markets. Miles' position is much closer to that of Bob Shiller; he thinks that agents are not always rational in the sense of Edgeworth. Miles and Bob may well be right. But in my view, the argument for stabilizing asset markets is much stronger. Even if we accept that agents are rational, it does not follow that swings in asset prices are Pareto efficient. But whether the motive arises from irrational people or irrational markets, Miles and I agree: We can, and should, design an institution that takes advantage of the government's ability to trade on behalf of the unborn. More on that in a future post. ...

Saturday, January 18, 2014

'The Rationality Debate, Simmering in Stockholm'

Robert Shiller:

The Rationality Debate, Simmering in Stockholm, by Robert Shiller, Commentary, NY Times: Are people really rational in their economic decision making? That question divides the economics profession today, and the divisions were evident at the Nobel Week events in Stockholm last month.
There were related questions, too: Does it make sense to suppose that economic decisions or market prices can be modeled in the precise way that mathematical economists have traditionally favored? Or is there some emotionality in all of us that defies such modeling?
This debate isn’t merely academic. It’s fundamental, and the answers affect nearly everyone. Are speculative market booms and busts — like those that led to the recent financial crisis — examples of rational human reactions to new information, or of crazy fads and bubbles? Is it reasonable to base theories of economic behavior, which surely has a rational, calculating component, on the assumption that only that component matters?
The three of us who shared the Nobel in economic science — Eugene F. Fama, Lars Peter Hansen and I — gave very different answers in our Nobel lectures. ...

'Paul Krugman & the Nature of Economics'

Chris Dillow:

Paul Krugman & the nature of economics: Paul Krugman is being accused of hypocrisy for calling for an extension of unemployment benefits when one of his textbooks says "Generous unemployment benefits can increase both structural and frictional unemployment." I think he can be rescued from this charge, if we recognize that economics is not like (some conceptions of) the natural sciences, in that its theories are not universally applicable but rather of only local and temporal validity.
What I mean is that "textbook Krugman" is right in normal times when aggregate demand is highish. In such circumstances, giving people an incentive to find work through lower unemployment benefits can reduce frictional unemployment (the coexistence of vacancies and joblessness) and so increase output and reduce inflation.
But these might well not be normal times. It could well be that demand for labour is unusually weak; low wage inflation and employment-population ratios suggest as much. In this world, the priority is not so much to reduce frictional unemployment as to reduce "Keynesian unemployment". And increased unemployment benefits - insofar as they are a fiscal expansion - might do this. When "columnist Krugman" says that "enhanced [unemployment insurance] actually creates jobs when the economy is depressed", the emphasis must be upon the last five words.
Indeed, incentivizing people to find work when it is not (so much) available might be worse than pointless. Cutting unemployment benefits might incentivize people to turn to crime rather than legitimate work.
So, it could be that "columnist Krugman" and "textbook Krugman" are both right, but they are describing different states of the world - and different facts require different models...

Friday, January 03, 2014

'Economics as Craft'

Having argued the same thing several times, including recently, I agree with Dani Rodrik:

Economics as craft: I have an article in the IAS’s quarterly publication, the Institute Letter, on the state of Economics.  Despite the evident role of the economics profession in the recent crisis and my critical views on conventional wisdom in globalization and development, my take on the discipline is rather positive.
Where we frequently go wrong as economists is to look for the “one right model” – the single story that provides the best universal explanation. Yet, the strength of economics is that it provides a panoply of context-specific models. The right explanation depends on the situation we find ourselves in. Sometimes the Keynesians are right, sometimes the classicals. Markets work sometimes along the lines of competitive models and sometimes along the lines of monopolistic models. The craft of economics consists in being able to diagnose which of the models applies best in a given historical and geographical context. ...

Thursday, January 02, 2014

The Tale of Joe of Metrika

This is semi-related to this post from earlier today. The tale of Joe of Metrika:

... Metrika began as a small village - little more than a coach-stop and a mandatory tavern at a junction in the highway running from the ancient data mines in the South, to the great city of Enlightenment, far to the North. In Metrika, the transporters of data of all types would pause overnight on their long journey; seek refreshment at the tavern; and swap tales of their experiences on the road.

To be fair, the data transporters were more than just humble freight carriers. The raw material that they took from the data mines was largely unprocessed. The vast mountains of raw numbers usually contained valuable gems and nuggets of truth, but typically these were buried from sight. The data transporters used the insights that they gained from their raucous, beer-fired discussions and arguments (known locally as "seminars") with the Metrika locals at the tavern to help them to sift through the data and extract the valuable jewels. With their loads considerably lightened, these "data-miners" then continued on their journey to the City of Enlightenment in a much improved frame of mind, hangovers notwithstanding!

Over time, the town of Metrika prospered and grew as the talents of its citizens were increasingly recognized and valued by those in the surrounding districts, and by the data transporters.

Young Joe grew up happily, supported by his family of econometricians, and he soon developed the skills that were expected of his societal class. He honed his computing skills; developed a good nose for "dodgy" data; and studiously broadened and deepened his understanding of the various tools wielded by the artisans in the neighbouring town of Statsbourg.

In short, he was a model child!

But - he was torn! By the time that he reached the tender age of thirteen, he felt the need to make an important, life-determining, decision.

Should he align his talents with the burly crew who frequented the gym near his home - the macroeconometricians - or should he throw in his lot with the physically challenged bunch of empirical economists known locally as the microeconometricians? ...

Full story here.

'Tribalism, Biology, and Macroeconomics'

Paul Krugman:

Tribalism, Biology, and Macroeconomics: ...Pew has a new report about changing views on evolution. The big takeaway is that a plurality of self-identified Republicans now believe that no evolution whatsoever has taken place since the day of creation... The move is big: an 11-point decline since 2009. ... Democrats are slightly more likely to believe in evolution than they were four years ago.
So what happened after 2009 that might be driving Republican views? The answer is obvious, of course: the election of a Democratic president.
Wait — is the theory of evolution somehow related to Obama administration policy? Not that I’m aware of... The point, instead, is that Republicans are being driven to identify in all ways with their tribe — and the tribal belief system is dominated by anti-science fundamentalists. For some time now it has been impossible to be a good Republican while believing in the reality of climate change; now it’s impossible to be a good Republican while believing in evolution.
And of course the same thing is happening in economics. As recently as 2004, the Economic Report of the President (pdf) of a Republican administration could espouse a strongly Keynesian view..., the report — presumably written by Greg Mankiw — used the “s-word”, calling for “short-term stimulus”.
Given that intellectual framework, the reemergence of a 30s-type economic situation ... should have made many Republicans more Keynesian than before. Instead, at just the moment that demand-side economics became obviously critical, we saw Republicans — the rank and file, of course, but economists as well — declare their fealty to various forms of supply-side economics, whether Austrian or Lafferian or both. ...
And look, this has to be about tribalism. All the evidence ... has pointed in a Keynesian direction; but Keynes-hatred (and hatred of other economists whose names begin with K) has become a tribal marker, part of what you have to say to be a good Republican.

Before the Great Recession, macroeconomists seemed to be converging to a single intellectual framework. In Olivier Blanchard's famous words:

after the explosion (in both the positive and negative meaning of the word) of the field in the 1970s, there has been enormous progress and substantial convergence. For a while - too long a while - the field looked like a battlefield. Researchers split in different directions, mostly ignoring each other, or else engaging in bitter fights and controversies. Over time however, largely because facts have a way of not going away, a largely shared vision both of fluctuations and of methodology has emerged. Not everything is fine. Like all revolutions, this one has come with the destruction of some knowledge, and suffers from extremism, herding, and fashion. But none of this is deadly. The state of macro is good.

The recession revealed that the "extremism, herding, and fashion" are much worse than many of us realized, and that the rifts that have reemerged are as strong as ever. What it didn't reveal is how to move beyond this problem. I thought evidence would matter more than it does, but somehow we seem to have lost the ability (if we ever had it) to distinguish between competing theoretical structures based upon econometric evidence. The state of macro is not good, and the path to improvement is hard to see, but it must involve a shared agreement over the evidence-based means through which the profession on both sides of these debates can embrace or reject particular theoretical models.

Thursday, December 19, 2013

'Is Finance Guided by Good Science or Convincing Magic?'

Tim Johnson:

Is finance guided by good science or convincing magic?: Noah Smith posted a piece, "Freshwater vs. Saltwater divides macro, but not finance". As a mathematician, the substance of the piece wasn't that interesting to me (a nice explanation is here), but there was a comment from Stephen Williamson that really caught my attention:

Another thought. You've persisted with the view that when the science is crappy - whether because of bad data or some kind of bad equilibrium I guess - there is disagreement. ... What's at stake in finance? The flow of resources to finance people comes from Wall Street. All the Wall Street people care about is making money, so good science gets rewarded. I'm not saying that macroeconomic science is bad, only that there are plenty of opportunities for policymakers to be sold schlock macro pseudo-science.

 What I aim to do in this post is offer an explanation for the 'divide' in economics from the perspective of moral philosophy and on this basis argue that finance is not guided by science but by magic. ...

'More on the Illusion of Superiority'

Simon Wren-Lewis:

More on the illusion of superiority: For economists, and those interested in methodology. Tony Yates responds to my comment on his post on microfoundations, but really just restates the microfoundations purist position. (Others have joined in - see links below.) As Noah Smith confirms, this is the position that many macroeconomists believe in, and many are taught, so it’s really important to see why it is mistaken. There are three elements I want to focus on here: the Lucas critique, what we mean by theory, and time.
My argument can be put as follows: an ad hoc but data inspired modification to a microfounded model (what I call an eclectic model) can produce a better model than a fully microfounded model. Tony responds “If the objective is to describe the data better, perhaps also to forecast the data better, then what is wrong with this is that you can do better still, and estimate a VAR.” This idea of “describing the data better”, or forecasting, is a distraction, so let’s say I want a model that provides a better guide for policy actions. So I do not want to estimate a VAR. My argument still stands.
But what about the Lucas critique? ...[continue]...

[In Maui, will post as I can...]

Tuesday, December 17, 2013

'Four Missing Ingredients in Macroeconomic Models'

Antonio Fatas:

Four missing ingredients in macroeconomic models: It is refreshing to see top academics questioning some of the assumptions that economists have been using in their models. Krugman, Brad DeLong and many others are opening a methodological debate about what constitutes an acceptable economic model and how to validate its predictions. The role of micro foundations, the existence of a natural state towards which the economy gravitates,... are all very interesting debates that tend to be ignored (or assumed away) in academic research.

I would like to go further and add a few items to their list... In random order:

1. The business cycle is not symmetric. ... Interestingly, it was Milton Friedman who put forward the "plucking" model of business cycles as an alternative to the notion that fluctuations are symmetric. In Friedman's model output can only be below potential, the maximum. If we were to rely on asymmetric models of the business cycle, our views on potential output and the natural rate of unemployment would be radically different. We would not be rewriting history to claim that in 2007 GDP was above potential in most OECD economies and we would not be arguing that the natural unemployment rate in Southern Europe is very close to its actual rate.

2. ...most academic research is produced around models where small and frequent shocks drive economic fluctuations, as opposed to large and infrequent events. The disconnect probably comes from the fact that it is so much easier to write models with small and frequent shocks than having to define a (stochastic?) process for large events. It gets even worse if one thinks that recessions are caused by the dynamics generated during expansions. Most economic models rely on unexpected events to generate crises, and not on the internal dynamics that precede the crisis.

[A little bit of self-promotion: my paper with Ilian Mihov on the shape and length of recoveries presents some evidence in favor of these two hypotheses.]

3. There has to be more than price rigidity. ...

4. The notion that co-ordination across economic agents matters to explain the dynamics of business cycles receives very limited attention in academic research. ...

I am aware that there are plenty of papers that deal with these four issues, some of them published in the best academic journals. But most of these papers are not mainstream. Most economists are sympathetic to these assumptions but avoid writing papers using them because they are afraid they will be told that their assumptions are ad hoc and that the model does not have enough micro foundations (for the best criticism of this argument, read the latest post of Simon Wren-Lewis). Time for a change?

On the plucking model, see here and here.
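The plucking asymmetry in Fatas's first point is easy to visualize with a small simulation. A minimal sketch of my own (illustrative parameters, not the model from the Fatas-Mihov paper):

```python
import numpy as np

# Plucking-model sketch (my illustrative parameters, not the
# Fatas-Mihov specification): output moves along a ceiling (potential)
# and is occasionally "plucked" below it, then recovers. The gap is
# never positive, so fluctuations are asymmetric by construction.

rng = np.random.default_rng(0)
T = 200
ceiling = 100 + 0.5 * np.arange(T)          # trending potential output

gap = np.zeros(T)                           # output gap, always <= 0
for t in range(1, T):
    pluck = -rng.exponential(3.0) if rng.random() < 0.05 else 0.0
    gap[t] = 0.8 * gap[t - 1] + pluck       # gradual recovery to the ceiling

output = ceiling + gap
print("deepest gap:", round(gap.min(), 2))  # occasional deep plucks
print("largest gap:", gap.max())            # never above zero
```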

Friday, December 13, 2013

Sticky Ideology

Paul Krugman:

Rudi Dornbusch and the Salvation of International Macroeconomics (Wonkish): ...Ken Rogoff had a very good paper on all this, in which he also says something about the state of affairs within the economics profession at the time:

The Chicago-Minnesota School maintained that sticky prices were nonsense and continued to advance this view for at least another fifteen years. It was the dominant view in academic macroeconomics. Certainly, there was a long period in which the assumption of sticky prices was a recipe for instant rejection at many leading journals. Despite the religious conviction among macroeconomic theorists that prices cannot be sticky, the Dornbusch model remained compelling to most practical international macroeconomists. This divergence of views led to a long rift between macroeconomics and much of mainstream international finance …

There are more than a few of us in my generation of international economists who still bear the scars of not being able to publish sticky-price papers during the years of new neoclassical repression.

Notice that this isn’t the evil Krugman talking; it’s the respectable Rogoff. Yet he too is in effect describing neoclassical macro as a sort of cult, actively suppressing alternative approaches. What he gets wrong is in the part I’ve elided with my “…”, in which he asserts that this is all behind us. As we saw when crisis struck, Chicago/Minnesota had in fact learned nothing and was pretty much unaware of the whole New Keynesian enterprise — and from what I hear about macro hiring, the suppression of ideas at odds with the cult remains in full force. ...

Saturday, December 07, 2013

Econometrics and 'Big Data'

I tweeted this link, and it's getting far, far more retweets than I would have expected, so I thought I'd note it here:

Econometrics and "Big Data", by Dave Giles: In this age of "big data" there's a whole new language that econometricians need to learn. ... What do you know about such things as:

  • Decision trees 
  • Support vector machines
  • Neural nets 
  • Deep learning
  • Classification and regression trees
  • Random forests
  • Penalized regression (e.g., the lasso, lars, and elastic nets)
  • Boosting
  • Bagging
  • Spike and slab regression?

Probably not enough!

If you want some motivation to rectify things, a recent paper by Hal Varian ... titled, "Big Data: New Tricks for Econometrics" ... provides an extremely readable introduction to several of these topics.

He also offers a valuable piece of advice:

"I believe that these methods have a lot to offer and should be more widely known and used by economists. In fact, my standard advice to graduate students these days is 'go to the computer science department and take a class in machine learning'."

Wednesday, December 04, 2013

'Microfoundations': I Do Not Think That Word Means What You Think It Means

Brad DeLong responds to my column on macroeconomic models:

“Microfoundations”: I Do Not Think That Word Means What You Think It Means

The basic point is this:

...New Keynesian models with more or less arbitrary micro foundations are useful for rebutting claims that all is for the best macroeconomically in this best of all possible macroeconomic worlds. But models with micro foundations are not of use in understanding the real economy unless you have the micro foundations right. And if you have the micro foundations wrong, all you have done is impose restrictions on yourself that prevent you from accurately fitting reality.
Thus your standard New Keynesian model will use Calvo pricing and model the current inflation rate as tightly coupled to the present value of expected future output gaps. Is this a requirement anyone really wants to put on the model intended to help us understand the world that actually exists out there? ...
After all, Ptolemy had microfoundations: Mercury moved more rapidly than Saturn because the Angel of Mercury beat his wings more rapidly than the Angel of Saturn and because Mercury was lighter than Saturn…
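The tight coupling DeLong describes can be written out explicitly. Under Calvo pricing the standard New Keynesian Phillips curve is, in textbook notation:

```latex
% New Keynesian Phillips curve under Calvo pricing (textbook form):
%   \pi_t = inflation, x_t = output gap, \beta = discount factor,
%   \kappa = slope (a decreasing function of the degree of price stickiness).
\pi_t \;=\; \beta\,\mathbb{E}_t\,\pi_{t+1} \;+\; \kappa\, x_t
% Iterating forward gives DeLong's "tight coupling": current inflation
% is the expected present discounted value of future output gaps,
\pi_t \;=\; \kappa \sum_{j=0}^{\infty} \beta^{\,j}\, \mathbb{E}_t\, x_{t+j}.
```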

Tuesday, December 03, 2013

One Model to Rule Them All?

Latest column:

Is There One Model to Rule Them All?: The recent shake-up at the research department of the Federal Reserve Bank of Minneapolis has rekindled a discussion about the best macroeconomic model to use as a guide for policymakers. Should we use modern New Keynesian models that performed so poorly prior to and during the Great Recession? Should we return to a modernized version of the IS-LM model that was built to explain the Great Depression and answer the questions we are confronting today? Or do we need a brand new class of models altogether? ...

Sunday, December 01, 2013

God Didn’t Make Little Green Arrows

Paul Krugman notes work by my colleague George Evans relating to the recent debate over the stability of GE models:

God Didn’t Make Little Green Arrows: Actually, they’re little blue arrows here. In any case George Evans reminds me of a paper (pdf) he and co-authors published in 2008 about stability and the liquidity trap, which he later used to explain what was wrong with the Kocherlakota notion (now discarded, but still apparently defended by Williamson) that low rates cause deflation.

The issue is the stability of the deflation steady state ("on the importance of little arrows"). This is precisely the issue George studied in his 2008 European Economic Review paper with E. Guse and S. Honkapohja. The following figure from that paper has the relevant little arrows:

[Figure: phase diagram for inflation and consumption expectations under adaptive learning, from Evans, Guse, and Honkapohja (2008).]

This is the two-dimensional figure showing the phase diagram for inflation and consumption expectations under adaptive learning (in the New Keynesian model both consumption, or output, expectations and inflation expectations are central). The intended steady state (marked by a star) is locally stable under learning, but the deflation steady state (given by the other intersection of the black curves) is not locally stable, and there are nearby divergent paths with falling inflation and falling output. There is also a two-page summary in George's 2009 Annual Review of Economics paper.

The relevant policy issue came up in 2010 in connection with Kocherlakota's comments about interest rates, and I got George to make a video in Sept. 2010 that makes the implied monetary policy point.

I think it would be a step forward if the EER paper helped Williamson and others who have not understood the disequilibrium stability point. The full EER reference is Evans, George; Guse, Eran; and Honkapohja, Seppo, "Liquidity Traps, Learning and Stagnation," European Economic Review, Vol. 52, 2008, 1438-1463.
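For readers who want the two intersections in equations rather than arrows, here is a stripped-down sketch (my simplification, not the exact Evans-Guse-Honkapohja specification), combining the Fisher relation with a Taylor rule truncated at the zero lower bound:

```latex
% Stripped-down two-steady-state sketch (my simplification, not the
% exact Evans-Guse-Honkapohja (2008) specification).
\begin{align*}
\text{Fisher relation:} \quad & 1 + i \;=\; (1 + r)(1 + \pi) \\
\text{Taylor rule with ZLB:} \quad & 1 + i \;=\; \max\Big\{\, 1,\;
  (1 + r)(1 + \pi^{*}) \Big(\tfrac{1+\pi}{1+\pi^{*}}\Big)^{\phi} \Big\},
  \qquad \phi > 1
\end{align*}
% The curves intersect twice: at the intended steady state \pi = \pi^*
% (locally stable under learning) and at a deflation steady state on
% the bound, where 1 + i = 1 and hence \pi = -r/(1+r) (not locally
% stable under learning, as the arrows in the figure show).
```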

Wednesday, November 27, 2013

'Who Made Economics?'

Daniel Little:

Who made economics?, by Daniel Little: The discipline of economics has a high level of intellectual status, even hegemony, in today’s social sciences — especially in universities in the United States. It also has a very specific set of defining models and theories that distinguish between “good” and “bad” economics. This situation suggests two topics for research: how did political economy and its successors ascend to this position of prestige in the social sciences? And how did this particular mix of techniques, problems, mathematical methods, and exemplar theoretical papers come to define the mainstream discipline? How did this governing disciplinary matrix develop and win the field?

One of the most interesting people taking on questions like these is Marion Fourcade. Her Economists and Societies: Discipline and Profession in the United States, Britain, and France, 1890s to 1990s was discussed in an earlier post (link). An early place where she expressed her views on these topics is in her 2001 article, “Politics, Institutional Structures, and the Rise of Economics: A Comparative Study” (link). There she describes the evolution of economics in these terms:

Since the middle of the nineteenth century, the study of the economy has evolved from a loose discursive "field," with no clear and identifiable boundaries, into a fully "professionalized" enterprise, relying on both a coherent and formalized framework, and extensive practical claims in administrative, business, and mass media institutions. (397)

And she argues that this process was contingent, path-dependent, and only loosely guided by a compass of “better” science:

Overall, contrary to the frequent assumption that economics is a universal and universally shared science, there seems to be considerable cross-national variation in (1) the [degree] and nature of the institutionalization of an economic knowledge field, (2) the forms of professional action of economists, and (3) intellectual traditions in the [field] of economics. (398)

Fourcade approaches this subject as a sociologist; so she wants to understand the institutional and structural factors that led to the shaping and stabilization of this field of knowledge.

Understanding the relationship between the institutional and intellectual aspects of knowledge production requires, first and foremost, a historical analysis of the conditions under which a coherent domain of discourse and practice was established in the first place. (398)

A key question in this article (and in Economists and Societies) is how the differences that exist between the disciplines of economics in France, Germany, Great Britain, and the US came to be. The core of the answer that she gives rests on her analysis of the relationships that existed between practitioners and the state: "A comparison of the four cases under investigation suggests that the entrenchment of the economics profession was profoundly shaped by the relationship of its practitioners to the larger political institutions and culture of their country" (432). So differences between economics in, say, France and the United States, are to be traced back to the different ways in which academic practitioners of economic analysis and policy recommendations were situated with regard to the institutions of the state.

It is possible to treat the history of ideas internally ("systems of ideas are driven by rational discussion of their implications") and externally ("systems of ideas are driven by the social needs and institutional arrangements of a certain time"). The best sociology of knowledge avoids this dichotomy, allowing for both the idea that a field of thought advances in part through the scientific and conceptual debates that occur within it and the idea that historically specific structures and institutions have important effects on the shape and direction of the development of a field. Fourcade avoids the dichotomy by treating seriously the economic reasoning that took place at a time and place, while also searching out the institutional and structural factors that favored this approach or that in a particular national setting.

This is sociology of knowledge done at a high level of resolution. Fourcade wants to identify the mechanisms through which "societal institutions" influence the production of knowledge in the four country contexts that she studies (Germany, Great Britain, France, and the US). She does not suggest that economics lacks scientific content or that economic debates do not have a rational structure of argument. But she does argue that the configuration of the field itself was not the product of rational scientific advance and discovery, but instead was shaped by the institutions of the university and the exigencies of the societies within which it developed.

Fourcade's own work suggests a different kind of puzzle -- this time in the development of the field of the sociology of knowledge. Fourcade's topic seems to be one that is tailor-made for treatment within the terms of Bourdieu's theory of a field. And in fact some of Fourcade's analysis of the institutional factors that influenced the success or failure of academic economists in Britain, Germany, or the US fits Bourdieu's theory very well. Bourdieu's book Homo Academicus appeared in 1984 in French and 1988 in English. But Fourcade does not make use of Bourdieu's ideas at all in the 2001 article -- some 17 years after Bourdieu's ideas were published. Reference to elements of Bourdieu's approach appears only in the 2009 book. There she writes:

Bourdieu found that the social sciences occupy a very peculiar position among all scientific fields in that external factors play an especially important part in determining these fields' internal stratification and structure of authority.... Within each disciplinary field, the subjective (i.e., agentic) and objective (i.e., structural) positions of individuals are "homologous": in other words, the polar opposition between "economic" and "cultural" capital is replicated at the field's level, and mirrors the orthodoxy/heterodoxy divide. (23)

So why was Bourdieu not considered in the 2001 article? This shift in orientation may be simply a feature of the author's own intellectual development. But it may also be diagnostic of the rise of Bourdieu's influence on the sociology of knowledge in the 90's and 00's. It would be interesting to see a graph of the frequency of references to the book since 1984.

(Gabriel Abend's treatment of the differences that exist between the paradigms of sociology in the United States and Mexico is of interest here as well; link.)

Tuesday, November 05, 2013

Do People Have Rational Expectations?

New column:

Do People Have Rational Expectations?, by Mark Thoma

Not always, and economic models need to take this into account.

Wednesday, October 30, 2013

'Economics, Good and Bad'

Chris Dillow:

Economics, good and bad: Attacks on mainstream economics such as this by Aditya Chakrabortty leave me hopelessly conflicted. ...[explains why he's conflicted]...
The division that matters is not so much between heterodox and mainstream economics, but between good economics and bad. I'll give just two examples of what I mean.
First, good economics tests itself against the facts. What makes Mankiw's defence of the 1% so risible is that it ducks out of the empirical question of whether neoclassical explanations for rising inequality are actually empirically valid. Just because something could be consistent with a theory does not mean that it is.
Secondly, good economics asks: which model (or better, which mechanism or which theory) fits the problem at hand? For example, if your question is "should I invest in this high-charging actively managed fund?" you must at least take the efficient market hypothesis as your starting point. But if you're asking "are markets prone to bubbles?" you might not. As Noah says, the EMH is a great guide for investors, but not so much for policy-makers.
It's in this sense that I don't like pieces like Aditya's. Ordinary everyday economics - of the sort that's useful for real people - isn't about bigthink and meta-theorizing, but about careful consideration of the facts.

Tuesday, October 29, 2013

'Big-Data Men Rewrite Government’s Tired Economic Models'

Via Wired:

The Next Big Thing You Missed: Big-Data Men Rewrite Government’s Tired Economic Models, by Marcus Wohlsen, Wired: The Consumer Price Index is one of the country’s most closely watched economic statistics... The trouble is that it’s compiled by the U.S. government, which is still stuck in the technological dark ages. This month, the index didn’t even arrive on time, thanks to the government shutdown.
David Soloff, co-founder of a San Francisco startup called Premise, believes the country needs something better. He believes we shouldn’t have to rely on the creaky wheels of a government bureaucracy for our vital economic data.
“It’s a half-a-billion dollars of budget allocated toward this in the U.S., and they’re closed,” Soloff said when I met him earlier this month during the depths of the shutdown, before questioning the effectiveness of the system even when it’s up and running. “The U.S…has got a pretty highly evolved stats-gathering infrastructure [compared to other countries], but it’s still kind of post-World War II old-school.”
In Soloff’s view, the government’s highly centralized approach to analyzing the health of the economy isn’t just technologically antiquated. It doesn’t take into account how much the rest of the world has been changed by technology. At Premise, the big idea is to measure economic trends around the world on a real-time, granular level, combining the best of machine learning with a small army of on-the-ground human data collectors that can gather new information about our economy as quickly as possible. ...
This is a company of the Big Data Age. ... While its current product offerings are focused on inflation and food security data, Soloff could see the platform expand to answer questions that government bureaucracies don’t touch...

But the article doesn't address the key question of costs of the service and who will have access to these data (e.g. FRED is free to all).

Tuesday, October 15, 2013

Are Rationality and the Efficient Markets Hypothesis Useful?

Just a quick note on the efficient markets hypothesis, rationality, and all that. I view these as important contributions not because they are accurate descriptions of the world (though they may come close in some cases), but rather because they give us an important benchmark to measure departures from an ideal world. It's somewhat like studying the effects of gravity in an idealized system with no wind, etc. -- in a vacuum -- as a first step. If people say, yes, but it's always windy here, then we can account for those effects (though if we are dropping 100 pound weights from 10 feet, accounting for wind may not matter much, but if we are dropping something light from a much greater height then we would need to incorporate these forces). Same for the efficient markets hypothesis and rationality. If people say, in effect, but it's always windy here -- those models miss important behavioral effects, e.g. -- then the models need to be amended appropriately (though, like dropping heavy weights short distances in the wind, some markets may act close enough to idealized conditions to allow these models to be used). We have not done enough to amend models to account for departures from the ideal, but that doesn't mean the ideal models aren't useful benchmarks.
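To put rough numbers on the falling-weights analogy, a quick simulation sketch (my own illustrative parameters):

```python
import math

# Putting rough numbers on the falling-weights analogy (my own
# illustrative parameters). The "vacuum" model ignores air resistance;
# the alternative adds quadratic drag. For a heavy weight dropped a
# short distance the friction barely matters; for a light object
# dropped from height it dominates.

G = 9.81  # m/s^2

def fall_time(height, mass, drag_coef=0.0, dt=1e-3):
    """Seconds to fall `height` meters, with optional quadratic drag."""
    v = h = t = 0.0
    while h < height:
        a = G - (drag_coef / mass) * v * v   # net acceleration
        v += a * dt
        h += v * dt
        t += dt
    return t

cases = [("45 kg weight, 3 m drop", 3.0, 45.0),     # ~100 lb from ~10 ft
         ("50 g object, 300 m drop", 300.0, 0.05)]
for label, height, mass in cases:
    vacuum = math.sqrt(2 * height / G)               # idealized benchmark
    windy = fall_time(height, mass, drag_coef=0.01)  # with drag
    print(f"{label}: vacuum model {vacuum:.2f}s, with drag {windy:.2f}s")
```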

Anyway, just a quick thought...

Saturday, October 12, 2013

'Nominal Wage Rigidity in Macro: An Example of Methodological Failure'

Simon Wren-Lewis:

Nominal wage rigidity in macro: an example of methodological failure: This post develops a point made by Bryan Caplan (HT MT). I have two stock complaints about the dominance of the microfoundations approach in macro. Neither imply that the microfoundations approach is ‘fundamentally flawed’ or should be abandoned: I still learn useful things from building DSGE models. My first complaint is that too many economists follow what I call the microfoundations purist position: if it cannot be microfounded, it should not be in your model. Perhaps a better way of putting it is that they only model what they can microfound, not what they see. This corresponds to a standard method of rejecting an innovative macro paper: the innovation is ‘ad hoc’.

My second complaint is that the microfoundations used by macroeconomists is so out of date. Behavioural economics just does not get a look in. A good and very important example comes from the reluctance of firms to cut nominal wages. There is overwhelming empirical evidence for this phenomenon (see for example here (HT Timothy Taylor) or the work of Jennifer Smith at Warwick). The behavioral reasons for this are explored in detail in this book by Truman Bewley, which Bryan Caplan discusses here. Both money illusion and the importance of workforce morale are now well accepted ideas in behavioral economics.

Yet debates among macroeconomists about whether and why wages are sticky go on. ...

While we can debate why this is at the level of general methodology, the importance of this particular example to current policy is huge. Many have argued that the failure of inflation to fall further in the recession is evidence that the output gap is not that large. As Paul Krugman in particular has repeatedly suggested, the reluctance of workers or firms to cut nominal wages may mean that inflation could be much more sticky at very low levels, so the current behavior of inflation is not inconsistent with a large output gap. ... Yet this is hardly a new discovery, so why is macro having to rediscover these basic empirical truths? ...

He goes on to give an example of why this matters (failure to incorporate downward nominal wage rigidity caused policymakers to underestimate the size of the output gap by a large margin, and that led to a suboptimal policy response).
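One compact way to see the mechanism: with downward nominal wage rigidity the wage Phillips curve is kinked at zero, so (in a stylized sketch of my own, not Wren-Lewis's or Krugman's formulation) inflation stops falling even when slack is large:

```latex
% Stylized wage Phillips curve with downward nominal rigidity
% (my illustrative sketch, not a specification from the post):
\pi^{w}_{t} \;=\; \max\bigl\{\, 0,\; \pi^{e}_{t} + a\,(u^{*} - u_{t}) \,\bigr\},
\qquad a > 0
% Once the max binds, wage inflation is stuck at zero however large
% unemployment gets, so low-but-stable inflation is consistent with a
% large output gap -- and inferring a small gap from stable inflation
% (the mistake described above) goes wrong.
```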

Time for me to catch a plane ...

Wednesday, September 25, 2013

How Bad Data Warped the Picture of the Jobs Recovery

Matt O'Brien:

How Bad Data Warped Everything We Thought We Knew About the Jobs Recovery: You know something is really boring when economists say it is.
That's what I thought to myself when the economists at the Brookings Institution's Panel on Economic Activity said only the "serious" ones would stick around for the last paper on seasonal adjustmentzzzzzzz...
... but a funny thing happened on the way to catching up on sleep. It turns out seasonal adjustments are really interesting! They explain why, ever since Lehmangeddon, the economy has looked like it's speeding up in the winter and slowing down in the summer.
In other words, everything you've read about "Recovery Winter" the past few winters has just been a statistical artifact of naïve seasonal adjustments. Oops. ...
The BLS only looks at the past 3 years to figure out what a "typical" September (or October or November, etc.) looks like. So, if there's, say, a once-in-three-generations financial crisis in the fall, it could throw off the seasonal adjustments for quite a while. Which is, of course, exactly what happened. ...
And that messed things up for years. Because the BLS's model thought the job losses from the financial crisis were just from winter, it thought those kinds of job losses would happen every winter. And, like any good seasonal model, it tried to smooth them out. So it added jobs it shouldn't have to future winters to make up for what it expected would be big seasonal job losses. And it subtracted jobs it shouldn't have from the summer to do so. ...
Now, the one bit of good news here is this effect has already faded away for the most part. Remember, the BLS only looks back at the past 3 years of data when it comes up with its seasonal adjustments -- so the Lehman panic has fallen out of the sample.
Here are two words we should retire: Recovery Winter. It was never a thing. The economy wasn't actually accelerating when the days got shorter, nor was it decelerating when the days got longer. ... The BLS can, and should, do better.
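
To make the mechanics concrete, here is a minimal sketch of the problem (illustrative only -- the BLS's actual X-13 seasonal-adjustment procedure is far more sophisticated than a three-year average, and all the numbers below are made up):

```python
# Minimal sketch: how a short seasonal-adjustment window mistakes a one-time
# crisis for recurring seasonality. Illustrative only -- not the BLS procedure.
import numpy as np

rng = np.random.default_rng(0)
years, months = 8, 12
seasonal = 50 * np.sin(2 * np.pi * np.arange(months) / months)   # normal pattern
raw = np.tile(seasonal, years) + rng.normal(0, 10, years * months)

# One-time crisis: huge job losses in the winter of year 3.
raw[3 * 12 + 10 : 4 * 12 + 2] -= 500

# Naive adjustment: a month's "seasonal factor" is the average of that same
# calendar month over the previous three years.
adjusted = raw.copy()
for t in range(3 * 12, years * months):
    factor = np.mean([raw[t - 12 * k] for k in (1, 2, 3)])
    adjusted[t] = raw[t] - factor

# Because the window "remembers" the crisis, post-crisis winters get a large
# positive adjustment -- a purely statistical "Recovery Winter".
for y in (4, 5, 6):
    winter = adjusted[y * 12 + 10 : y * 12 + 12].mean()
    summer = adjusted[y * 12 + 4 : y * 12 + 8].mean()
    print(f"year {y}: adjusted winter {winter:8.1f} vs summer {summer:8.1f}")
```

Once the crisis months drop out of the three-year window, the distortion disappears, which is the "good news" O'Brien mentions.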

Wednesday, September 18, 2013

Against ‘Blackboard Economics’

This is from Vox EU:

Finding his own way: Ronald Coase (1910-2013), by Steven Medema, Vox EU: Ronald Coase’s contributions to economics were much broader than most economists recognize. His work was characterized by a rejection of ‘blackboard economics’ in favor of detailed case studies and a comparative analysis of real-world institutions. This column argues that the ‘Coase theorem’ as commonly understood is in fact antithetical to Coase’s approach to economics.
...
Against ‘blackboard economics’
Coase’s criticisms of the theory of economic policy were part of a larger critique of what he often referred to as ‘blackboard economics’ – an economics where curves are shifted and equations are manipulated, with little attention to the correspondence between the theory and the real world, or to the institutions that might bear on the analysis. A similar set of concerns led to his skepticism about the application of economic analysis beyond its traditional boundaries. Contrary to popular misperception, Coase had precious little interest in the economic analysis of law. Instead, Coase’s ‘law and economics’ was concerned with how law affected the functioning of the economic system.
It is ironic, then, that the idea most closely associated with Coase, the ‘Coase theorem’, is in many respects the height of ‘blackboard economics’ and a cornerstone of the economic analysis of law. Being misunderstood was something of a hallmark of Coase’s career, as he pointed out on any number of occasions. We should all be so fortunate.

Tuesday, August 27, 2013

The Real Trouble With Economics: Sociology

Paul Krugman:

The Real Trouble With Economics: I’m a bit behind the curve in commenting on the Rosenberg-Curtain piece on economics as a non-science. What do I think of their thesis?

Well, I’m sorry to say that they’ve gotten it almost all wrong. Only “almost”: they’re entirely right that economics isn’t behaving like a science, and economists – macroeconomists, anyway – definitely aren’t behaving like scientists. But they misunderstand the nature of the failure, and for that matter the nature of such successes as we’re having....

It’s true that few economists predicted the onset of crisis. Once crisis struck, however, basic macroeconomic models did a very good job in key respects — in particular, they did much better than people who relied on their intuitive feelings. The intuitionists — remember, Alan Greenspan was supposed to be famously able to sense the economy’s pulse — insisted that budget deficits would send interest rates soaring, that the expansion of the Fed’s balance sheet would be inflationary, that fiscal austerity would strengthen economies through “confidence”. Meanwhile, wonks who relied on suitably interpreted IS-LM confidently declared that all this intuition, based on experiences in a different environment, would prove wrong — and they were right. From my point of view, these past 5 years have been a triumph for and vindication of economic modeling.

Oh, and it would be a real tragedy if the takeaway from recent events becomes that you should listen to impressive-looking guys with good tailors who stroke their chins and sound wise, and ignore the nerds; the nerds have been mostly right, while the Very Serious People have been wrong every step of the way.

Yet obviously something is deeply wrong with economics. While economists using textbook macro models got things mostly and impressively right, many famous economists refused to use those models — in fact, they made it clear in discussion that they didn’t understand points that had been worked out generations ago. Moreover, it’s hard to find any economists who changed their minds when their predictions, say of sharply higher inflation, turned out wrong. ...

So, let’s grant that economics as practiced doesn’t look like a science. But that’s not because the subject is inherently unsuited to the scientific method. Sure, it’s highly imperfect — it’s a complex area, and our understanding is in its early stages. And sure, the economy itself changes over time, so that what was true 75 years ago may not be true today — although what really impresses you if you study macro, in particular, is the continuity, so that Bagehot and Wicksell and Irving Fisher and, of course, Keynes remain quite relevant today.

No, the problem lies not in the inherent unsuitability of economics for scientific thinking but in the sociology of the economics profession — a profession that somehow, at least in macro, has ceased rewarding research that produces successful predictions and instead rewards research that fits preconceptions and uses hard math.

Why has the sociology of economics gone so wrong? I’m not completely sure — and I’ll reserve my random thoughts for another occasion.

I talked about the problem with the sociology of economics a while back -- this is from a post in August 2009:

In The Economist, Robert Lucas responds to recent criticism of macroeconomics ("In Defense of the Dismal Science"). Here's my entry at Free Exchange's Robert Lucas Roundtable in response to his essay:

Lucas roundtable: Ask the right questions, by Mark Thoma: In his essay, Robert Lucas defends macroeconomics against the charge that it is "valueless, even harmful", and that the tools economists use are "spectacularly useless".

I agree that the analytical tools economists use are not the problem. We cannot fully understand how the economy works without employing models of some sort, and we cannot build coherent models without using analytic tools such as mathematics. Some of these tools are very complex, but there is nothing wrong with sophistication so long as sophistication itself does not become the main goal, or a barrier to entry into the theorist's club rather than an analytical device for understanding the world.

But all the tools in the world are useless if we lack the imagination needed to build the right models. Models are built to answer specific questions. When a theorist builds a model, it is an attempt to highlight the features of the world the theorist believes are the most important for the question at hand. For example, a map is a model of the real world, and sometimes I want a road map to help me find my way to my destination, but other times I might need a map showing crop production, or a map showing underground pipes and electrical lines. It all depends on the question I want to answer. If we try to make one map that answers every possible question we could ever ask of maps, it would be so cluttered with detail it would be useless, so we necessarily abstract from real world detail in order to highlight the essential elements needed to answer the question we have posed. The same is true for macroeconomic models.

But we have to ask the right questions before we can build the right models.

The problem wasn't the tools that macroeconomists use; it was the questions that we asked. The major debates in macroeconomics had nothing to do with the possibility of bubbles causing a financial system meltdown. That's not to say that there weren't models here and there that touched upon these questions, but the main focus of macroeconomic research was elsewhere. ...

The interesting question to me, then, is why we failed to ask the right questions. For example,... why policymakers didn't take the possibility of a major meltdown seriously. Why didn't they deliver forecasts conditional on a crisis occurring? Why didn't they ask this question of the model? Why did we only get forecasts conditional on no crisis? And also, why was the main factor that allowed the crisis to spread, the interconnectedness of financial markets, missed?

It was because policymakers couldn't and didn't take seriously the possibility that a crisis and meltdown could occur. And even if they had seriously considered the possibility of a meltdown, the models most people were using were not built to be informative on this question. It simply wasn't a question that was taken seriously by the mainstream.

Why did we, for the most part, fail to ask the right questions? Was it lack of imagination, was it the sociology within the profession, the concentration of power over what research gets highlighted, the inadequacy of the tools we brought to the problem, the fact that nobody will ever be able to predict these types of events, or something else?

It wasn't the tools, and it wasn't lack of imagination. As Brad DeLong points out, the voices were there—he points to Michael Mussa for one—but those voices were not heard. Nobody listened even though some people did see it coming. So I am more inclined to cite the sociology within the profession or the concentration of power as the main factors that caused us to dismiss these voices.

And here I think that thought leaders such as Robert Lucas and others who openly ridiculed models they disagreed with have questions they should ask themselves (e.g. Mr Lucas saying "At research seminars, people don’t take Keynesian theorizing seriously anymore; the audience starts to whisper and giggle to one another", or more recently "These are kind of schlock economics"). When someone as notable and respected as Robert Lucas makes fun of an entire line of inquiry, it influences whole generations of economists away from asking certain types of questions, some of which turned out to be important. Why was it necessary for the major leaders in macroeconomics to shut down alternative lines of inquiry through ridicule and other means rather than simply citing evidence in support of their positions? What were they afraid of? The goal is to find the truth, not win fame and fortune by dominating the debate.

We need to take a close look at how the sociology of our profession led to an outcome where people were made to feel embarrassed for even asking certain types of questions. People will always be passionate in defense of their life's work, so it's not the rhetoric itself that is of concern; the problem comes when factors such as ideology or control of journals and other outlets for the dissemination of research stand in the way of promising alternative lines of inquiry.

I don't know for sure the extent to which the ability of a small number of people in the field to control the academic discourse led to a concentration of power that stood in the way of alternative lines of investigation, or the extent to which the ideology that market prices always tend to move toward their long-run equilibrium values caused us to ignore voices that foresaw the developing bubble and coming crisis. But something caused most of us to ask the wrong questions, and to dismiss the people who got it right, and I think one of our first orders of business is to understand how and why that happened.

I think the structure of journals, which concentrates power within the profession, also influences the sociology of the profession (and not in a good way).

Wednesday, August 07, 2013

(1) Numerical Methods, (2) Pigou, Samuelson, Solow, & Friedman vs Keynes and Krugman

Robert Waldmann:

...Another thing, what about numerical methods?  Macro was totally taken over by computer simulations. This liberated it (so that anything could happen) but also ruined the fun. When computers were new and scary, simulation based macro was scary and high status. When everyone can do it, setting up a model and simulating just doesn't demonstrate brains as effectively as finding one of the two or three special cases with closed form solutions and then presenting them. Also simulating unrealistic models is really pointless. People end up staring at the computer output and trying to think up stories which explain what went on in the computer. If one is reduced to that, one might as well look at real data. Models which can't be solved don't clarify thought. Since they also don't fit the data, they are really truly madly useless.

And one more:

Pigou, Samuelson, Solow, & Friedman vs Keynes and Krugman: Thoma Bait
I might as well be honest. I am posting this here rather than at rjwaldmann.blogspot.com, because I think it is the sort of thing to which Mark Thoma links and my standing among the bears is based entirely on the fact that Thoma occasionally links to me.
I think that Pigou, Samuelson, Solow and Friedman all assumed that the marginal propensity to consume out of wealth must, on average, be higher for nominal creditors than for nominal debtors. I think this is a gross error which shows how the representative consumer (invented by Samuelson) had done devastating damage already by 1960.
The topic is the Pigou effect versus the liquidity trap. ...

Guess I should send you there to read it.
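
Waldmann's point about marginal propensities to consume is easy to see with a line of arithmetic. A minimal sketch, with purely hypothetical numbers:

```python
# Back-of-the-envelope version of the creditor/debtor MPC point
# (all numbers hypothetical). A falling price level raises the real value
# of nominal debt, transferring wealth from debtors to creditors.
transfer = 100.0       # real wealth shifted from debtors to creditors
mpc_debtor = 0.8       # debtors assumed liquidity-constrained, high MPC
mpc_creditor = 0.3     # creditors assumed to smooth consumption, low MPC

change_in_consumption = transfer * mpc_creditor - transfer * mpc_debtor
print(change_in_consumption)   # -50.0: aggregate spending falls
```

If the assumption runs the other way -- creditors with the higher MPC, which is what Waldmann argues Pigou, Samuelson, Solow, and Friedman implicitly required -- the sign flips, and that sign is exactly what is at stake in the Pigou-effect debate.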

Friday, July 12, 2013

'Revolutionizing Economics by Evolutionizing it'

Apparently, economics is about to be revolutionized:

Revolutionizing Economics by Evolutionizing it, by Jag Bhalla, Scientific American: Economics will soon be revolutionized, by being evolutionized, again. This time with fewer unnaturally selective ideas. Scholars, like those working with the Evolution Institute, are adapting the assumptions, methods, and goals of economics to better fit empirically observed humans. Our survival foreseeably requires it. ...

I see these stories periodically, sometimes it's physicists, this time it's evolutionary theorists, but somehow the revolution never comes. Perhaps it's because many of them don't actually understand economics. For example:

The appetites and capacities of everything in biology are physiologically limited. Beyond some satiety level, more isn’t better—it’s often unhealthy and counterproductive. By contrast, self-interest in economics is considered limitless. Every extra dollar gained is better. But that’s a numerical illusion. An extra dollar’s benefit varies with circumstances (an idea used only peripherally, e.g. in diminishing returns). Unlimitedness is deeply unnatural. It ignores the foreseeable capacities of systems we depend on.

Diminishing marginal utility is only a peripheral idea? Microeconomists have never heard of or considered satiety and bliss points? And apparently we've never heard of market failure either:

...economic self-interest has become equally unintelligent. It’s often self-undermining, as in Prisoner’s Dilemma games, and in the global climate “tragedy of the commons.”

Yes, we just ignore the tragedy of the commons and have done no work at all to model it theoretically, and then figure out how best to regulate these situations.

I am open to new ideas, and I don't have problems with statements like this:

 Limits, intelligent foresight, self-deficiency, interdependent coordination, and relational rationality are in our nature. They should all be in our economics. And in our rational self-interest, rightly understood. Economics needn’t be as dumb as trees or as self-harming as over-hunters.

But acting as if economists have learned nothing from behavioral economics, or have simply ignored it, is wrong. Critics should at least understand the basics.

We cannot build one grand model to fit all situations. It would be intractable. Instead, we build models to answer specific questions. The trick is knowing when to use a particular model. Often -- most of the time -- the standard microeconomic model, with rational agents, pure competition, no bliss points, and so on, does pretty well at predicting behavior. That's why it's still around. But we know very well that this model doesn't always work (and I am distinguishing between micro and macro here -- the focus in the article seems to be on micro issues). Other times we may need to use a model with a bliss point, a departure from textbook rationality, or a market failure of some type. We have a pretty good idea about how to do that (though I think we fall down a bit in knowing precisely when to alter the rationality assumption to incorporate what we have learned from behavioral economics).
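
For readers wondering what the standard toolkit already contains: diminishing marginal utility and satiation are both textbook objects. With CRRA utility, marginal utility falls continuously, and a quadratic utility function has an outright bliss point:

$$u(c) = \frac{c^{1-\gamma}}{1-\gamma}, \quad \gamma > 0, \qquad u'(c) = c^{-\gamma} \ \text{(strictly decreasing in } c\text{)};$$

$$u(c) = -\tfrac{1}{2}(c - c^{*})^{2}, \qquad u'(c) = c^{*} - c < 0 \ \text{for } c > c^{*} \ \text{(a bliss point at } c^{*}\text{)}.$$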

Finally, contrary to the impression the article gives, we don't ignore evolution, as this 1996 article from Paul Krugman, and many more by many others, will attest:

What Economists Can Learn from Evolutionary Theorists, European Association for Evolutionary Political Economy, Paul Krugman, Nov. 1996: Good morning. I am both honored and a bit nervous to be speaking to a group devoted to the idea of evolutionary political economy. As you probably know, I am not exactly an evolutionary economist. I like to think that I am more open-minded about alternative approaches to economics than most, but I am basically a maximization-and-equilibrium kind of guy. Indeed, I am quite fanatical about defending the relevance of standard economic models in many situations.
Why, then, am I here? Well, partly because my research work has taken me to some of the edges of the neoclassical paradigm. When you are concerned, as I have been, with situations in which increasing returns are crucial, you must drop the assumption of perfect competition; you are also forced to abandon the belief that market outcomes are necessarily optimal, or indeed that the market can be said to maximize anything. You can still believe in maximizing individuals and some kind of equilibrium, but the complexity of the situations in which your imaginary agents find themselves often obliges you - and presumably them - to represent their behavior by some kind of ad hoc rule rather than as the outcome of a carefully specified maximum problem. And you are often driven by sheer force of modeling necessity to think of the economy as having at least vaguely "evolutionary" dynamics, in which initial conditions and accidents along the way may determine where you end up. Some of you may have read my work on economic geography; I only found out after I had worked on the models for some time that I was using "replicator dynamics" to discuss the problem of economic change.
But there is another reason I am here. I am an economist, but I am also what we might call an evolution groupie. That is, I spend a great deal of time reading what evolutionary biologists write - not only the more popular volumes but the textbooks and, most recently, some of the professional articles. I have even tried to talk to some of the biologists, which in this age of narrow specialization is a major effort. My interest in evolution is partly a recreation; but it is also true that I find in evolutionary biology a useful vantage point from which to view my own specialty in a new perspective. In a way, the point is that both the parallels and the differences between economics and evolutionary biology help me at least to understand what I am doing when I do economics - to get, to be pompous about it, a new perspective on the epistemology of the two fields.
I am sure that I am not unique either in my interest in biology or in my feeling that we economists have something to learn from it. Indeed, I am sure that many people in this room know far more about evolutionary theory than I do. But I may have one special distinction. Most economists who try to apply evolutionary concepts start from some deep dissatisfaction with economics as it is. I won't say that I am entirely happy with the state of economics. But let us be honest: I have done very well within the world of conventional economics. I have pushed the envelope, but not broken it, and have received very widespread acceptance for my ideas. What this means is that I may have more sympathy for standard economics than most of you. My criticisms are those of someone who loves the field and has seen that affection repaid. I don't know if that makes me morally better or worse than someone who criticizes from outside, but anyway it makes me different.
Anyway, enough preliminaries. ...

Continue reading "'Revolutionizing Economics by Evolutionizing it'" »
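
For reference, the "replicator dynamics" Krugman mentions have a standard form: the population share x_i of strategy (or technique, or location) i grows exactly when its payoff f_i exceeds the population average,

$$\dot{x}_i = x_i \left[ f_i(x) - \bar{f}(x) \right], \qquad \bar{f}(x) = \sum_j x_j f_j(x).$$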

Tuesday, July 02, 2013

Economics Is Better (In Some Ways) Than It Used To Be

Frances Woolley (there is also a discussion of each point and counterpoint):

Economics is Better (in some ways) Than it Used to Be: The discipline of economics is in better shape today than it was in the 1970s, 80s and 90s. Here are five reasons why:
1. Now, economists test their theories. ...
2. Now, economists are better at establishing causality. ...
3. Now, economics is (somewhat) more open to a range of ideologies and methodologies ...
4. Now, economics is engaging ...
5. Now, economic research is (in some ways) more democratic. ...
Still, every silver lining has a cloud. Each one of these positive trends has a not-so-positive flip side.
1*. Economists test their theories. But only some test results get published. ...
2*. Causality isn't everything. Correlations are interesting too. ...
3*. Economics is not that open to methodological diversity. ...
4*. Public engagement makes the senior administration happy, but no one ever got tenure by blogging. ...
5*. Old barriers have been replaced by new ones. ...
So the profession is far from perfect. But it is better than it used to be. ...

Saturday, June 29, 2013

'DSGE Models and Their Use in Monetary Policy'

Mike Dotsey at the Philadelphia Fed:

DSGE Models and Their Use in Monetary Policy: The past 10 years or so have witnessed the development of a new class of models that are proving useful for monetary policy: dynamic stochastic general equilibrium (DSGE) models. The pioneering central bank, in terms of using these models in the formulation of monetary policy, is the Sveriges Riksbank, the central bank of Sweden. Following in the Riksbank’s footsteps, a number of other central banks have incorporated DSGE models into the monetary policy process, among them the European Central Bank, Norges Bank (the Norwegian central bank), and the Federal Reserve.
This article will discuss the major features of DSGE models and why these models are useful to monetary policymakers. It will indicate the general way in which they are used in conjunction with other tools commonly employed by monetary policymakers. ...

Sunday, June 02, 2013

The Aggregate Supply—Aggregate Demand Model in the Econ Blogosphere

Peter Dorman would like to know if he's wrong:

Why You Don’t See the Aggregate Supply—Aggregate Demand Model in the Econ Blogosphere: Introductory textbooks are supposed to give you simplified versions of the models that professionals use in their own work. The blogosphere is a realm where people from a range of backgrounds discuss current issues often using simplified concepts so everyone can be on the same page.
But while the dominant framework used in introductory macro textbooks is aggregate supply—aggregate demand (AS-AD), it is almost never mentioned in the econ blogs. My guess is that anyone who tried to make an argument about current macropolicy using an AS-AD diagram would just invite snickers. This is not true on the micro side, where it’s perfectly normal to make an argument with a standard issue, partial equilibrium supply and demand diagram. What’s going on here?
I’ve been writing the part of my textbook where I describe what happened in macro during the period from the mid 70s to the mid 00s, and part of the story is the rise of textbook AS-AD. Here’s the line I take:
The dominant macro model, now crystallized in DSGE, is much too complex for intro students. It is based on intertemporal optimization and general equilibrium theory. There is no possible way to explain it to students in their first exposure to economics. But the mainstream has rejected the old income-expenditure models that graced intro texts in the 1970s and were, in skeleton form, the basis for the forecasting models used back in those days. So what to do?
The solution has been to use AS-AD as a placeholder. It allows instructors to talk about both prices and quantities in a rough market context. By putting Y on one axis and P on another, you can locate any macroeconomic outcome in the upper-right quadrant. It gets students “thinking like economists”.
Unfortunately the model is unsound. If you dig into it you find contradictions that can’t be papered over. One example is that the AS curve depends on the idea that input prices for firms systematically lag output prices, but do you really want to argue the theoretical and empirical case for this? Or try the AD assumption that, even as the price level and real output in the economy go up or down, the money supply remains fixed.
That’s why AS-AD is simply a placeholder. It has no intrinsic value as an economic model. No one uses it for policy purposes. It can’t be found in the econ blogs. It’s not a stripped down version of DSGE. Its only role is to occupy student brain cells until the real work of macroeconomic instruction can begin in a more advanced course.
If I’m wrong I’d like to know before I cut off all lines of retreat.

This won't fully answer the question (many DSGE adherents deny the existence of something called an AD curve), but here are a few counterexamples. One from today (here), and two from the past (via Tim Duy here and here).
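
Dorman's complaint about the AD curve is easy to state formally. In the quantity-theory version found in many intro texts, the curve comes from holding the money supply M and velocity V fixed:

$$MV = PY \quad \Longrightarrow \quad P = \frac{MV}{Y},$$

a rectangular hyperbola that slopes downward in (Y, P) space only because M is assumed not to respond as prices and output move -- precisely the assumption he is questioning.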

Update: Paul Krugman comments here.

Wednesday, May 29, 2013

'DSGE + Financial Frictions = Macro that Works?'

This is a brief follow-up to this post from Noah Smith (see this post for the abstract to the Marco Del Negro, Marc P. Giannoni, and Frank Schorfheide paper he discusses):

DSGE + financial frictions = macro that works?: In my last post, I wrote:

So far, we don't seem to have gotten a heck of a lot of a return from the massive amount of intellectual capital that we have invested in making, exploring, and applying [DSGE] models. In principle, though, there's no reason why they can't be useful.

One of the areas I cited was forecasting. In addition to the studies I cited by Refet Gurkaynak, many people have criticized macro models for missing the big recession of 2008Q4-2009. For example, in this blog post, Volker Wieland and Maik Wolters demonstrate how DSGE models failed to forecast the big recession, even after the financial crisis itself had happened...

This would seem to be a problem.

But it's worth noting that, since the 2008 crisis, the macro profession does not seem to have dropped DSGE like a dirty dishrag. ... Why are they not abandoning DSGE? Many "sociological" explanations are possible, of course - herd behavior, sunk cost fallacy, hysteresis and heterogeneous human capital (i.e. DSGE may be all they know how to do), and so on. But there's also another possibility, which is that maybe DSGE models, augmented by financial frictions, really do have promise as a technology.

This is the position taken by Marco Del Negro, Marc P. Giannoni, and Frank Schorfheide of the New York Fed. In a 2013 working paper, they demonstrate that a certain DSGE model was able to forecast the big post-crisis recession.

The model they use is a combination of two existing models: 1) the famous and popular Smets-Wouters (2007) New Keynesian model that I discussed in my last post, and 2) the "financial accelerator" model of Bernanke, Gertler, and Gilchrist (1999). They find that this hybrid financial New Keynesian model is able to predict the recession pretty well as of 2008Q3! Check out these graphs (red lines are 2008Q3 forecasts, dotted black lines are real events):

I don't know about you, but to me that looks pretty darn good!
I don't want to downplay or pooh-pooh this result. I want to see this checked carefully, of course, with some tables that quantify the model's forecasting performance, including its long-term forecasting performance. I will need more convincing, as will the macroeconomics profession and the world at large. And forecasting is, of course, not the only purpose of macro models. But this does look really good, and I think it supports my statement that "in principle, there is no reason why [DSGEs] can't be useful." ...
However, I do have an observation to make. The Bernanke et al. (1999) financial-accelerator model has been around for quite a while. It was certainly around well before the 2008 crisis. And we had certainly had financial crises before, as had many other countries. Why was the Bernanke model not widely used to warn of the economic dangers of a financial crisis? Why was it not universally used for forecasting? Why are we only looking carefully at financial frictions after they blew a giant gaping hole in the world economy?
It seems to me that it must have to do with the scientific culture of macroeconomics. If macro as a whole had demanded good quantitative results from its models, then people would not have been satisfied with the pre-crisis finance-less New Keynesian models, or with the RBC models before them. They would have said "This approach might work, but it's not working yet, let's keep changing things to see what does work." Of course, some people said this, but apparently not enough.
Instead, my guess is that many people in the macro field were probably content to use DSGE models for storytelling purposes, and had little hope that the models could ever really forecast the actual economy. With low expectations, people didn't push to improve the existing models as hard as they might have. But that is just my guess; I wasn't really around.
So to people who want to throw DSGE in the dustbin of history, I say: You might want to rethink that. But to people who view the del Negro paper as a vindication of modern macro theory, I say: Why didn't we do this back in 2007? And are we condemned to "always fight the last war"?

My take on why these models weren't used is a bit different.

My argument all along has been that we had the tools and models to explain what happened, but we didn't understand that this particular combination of models -- standard DSGE augmented by financial frictions -- was the important model to use. As I'll note below, part of the reason was empirical -- the evidence did matter (though it was not interpreted correctly) -- but the bigger problem was that our arrogance caused us to overlook the important questions.

There are many, many "modules" we can plug into a model to make it do various things. Need to propagate a shock, i.e. make it persist over time? Toss in an adjustment cost of some sort (there are other ways to do this as well). Do you need changes in monetary policy to affect real output? Insert a price, wage, or information friction. And so on.
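
As a toy illustration of the "module" idea (a sketch, not any particular published model): the same one-off shock dies out immediately without a propagation mechanism, and persists once a partial-adjustment module is switched on.

```python
# Toy illustration of plugging a "module" into a model. Purely illustrative,
# not a calibrated model: a one-time shock with and without a persistence
# (adjustment-cost-style) mechanism.
import numpy as np

def impulse_response(periods=12, persistence=0.0, shock=1.0):
    """y_t = persistence * y_{t-1} + e_t, with a one-time shock e_0."""
    y = np.zeros(periods)
    y[0] = shock
    for t in range(1, periods):
        y[t] = persistence * y[t - 1]
    return y

print(np.round(impulse_response(persistence=0.0), 2))  # no module: shock vanishes
print(np.round(impulse_response(persistence=0.8), 2))  # with module: shock persists
```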

Unfortunately, adding every possible complication to make one grand model that explains everything is way too hard and complex. That's not possible. Instead, depending upon the questions we ask, we put these pieces together in particular ways to isolate the important relationships, and ignore the more trivial ones. This is the art of model building, to isolate what is important and provide insight into the question of interest.

We could have put the model described above together before the crisis; all of the pieces were there, and some people did things along these lines. But this was not the model most people used. Why? Because we didn't think the question was important. We didn't think that financial frictions were an important feature of modern business cycles because technology and deregulation had mostly solved this problem. If the banking system couldn't collapse, why build and emphasize models that say it will? (The empirical evidence for the financial frictions channel was a bit wobbly, and that was also part of the reason these models were not emphasized. But that evidence was based upon normal times, not deep recessions, and it didn't tell us as much as we thought about the usefulness of models that incorporate financial frictions.)

Ex-post, it's easy to look back and say aha -- this was the model that would have worked. Ex-ante, the problem is much harder. Will the next big recession be driven by a financial collapse? If so, then a model like this might be useful. But what if the shock comes from some other source? Is that shock in the model? When the time comes, will we be asking the right questions, and hence building models that can help to answer them, or will we be focused on the wrong thing -- fighting the last war? We have the tools and techniques to build all sorts of models, but they won't do us much good if we aren't asking the right questions.

How do we do that? We must, I think, have a strong sense of history: at a minimum, we should be able to look back and understand how various economic downturns happened, and be sure those "modules" are in the baseline model. And we also need to have the humility to understand that we probably haven't progressed so much that it (e.g. a financial collapse) can't happen again. History alone is not enough, of course; new things can always happen -- things where history provides little guidance -- but we should at least incorporate things we know can be problematic.

It wasn't our tools and techniques that failed us prior to the Great Recession. It was our arrogance, our belief that we had solved the problem of financial meltdowns through financial innovation, deregulation, and the like that closed our eyes to the important questions we should have been asking. We are asking them now, but is that enough? What else should we be asking?

Thursday, May 16, 2013

New Research in Economics: Robust Stability of Monetary Policy Rules under Adaptive Learning

I have had several responses to my offer to post write-ups of new research that I'll be posting over the next few days (thanks!), but I thought I'd start with a forthcoming paper from a former graduate student here at the University of Oregon, Eric Gaus:

Robust Stability of Monetary Policy Rules under Adaptive Learning, by Eric Gaus, forthcoming, Southern Economics Journal: Adaptive learning has been used to assess the viability of a variety of monetary policy rules. If agents using simple econometric forecasts "learn" the rational expectations solution of a theoretical model, then researchers conclude the monetary policy rule is a viable alternative. For example, Duffy and Xiao (2007) find that if monetary policy makers minimize a loss function of inflation, interest rates, and the output gap, then agents in a simple three equation model of the macroeconomy learn the rational expectations solution. On the other hand, Evans and Honkapohja (2009) demonstrate that this may not always be the case. The key difference between the two papers is an assumption over what information the agents of the model have access to. Duffy and Xiao (2007) assume that monetary policy makers have access to contemporaneous variables, that is, they adjust interest rates to current inflation and output. Evans and Honkapohja (2009) instead assume that agents only can form expectations of contemporaneous variables. Another difference between these two papers is that in Duffy and Xiao (2007) agents use all the past data they have access to, whereas in Evans and Honkapohja (2009) agents use a fixed window of data.
This paper examines several different monetary policy rules under a learning mechanism that changes how much data agents are using. It turns out that as long as the monetary policy makers are able to see contemporaneous endogenous variables (output and inflation) then the Duffy and Xiao (2007) results hold. However, if agents and policy makers use expectations of current variables then many of the policy rules are not "robustly stable" in the terminology of Evans and Honkapohja (2009).
A final result in the paper is that the switching learning mechanism can create unpredictable temporary deviations from rational expectations. This is a rather startling result, since the source of the deviations is completely endogenous. The deviations appear in a model with no structural breaks, no multiple equilibria, and no intention of generating such deviations. This result suggests that policymakers should be concerned with the potential that expectations, and expectations alone, can create exotic behavior that temporarily strays from the rational expectations equilibrium (REE).
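
For readers who want a feel for the setup, here is a stripped-down illustration (not Gaus's actual model): in a simple self-referential economy where outcomes depend on expectations, agents who forecast with the average of a fixed window of past data can still settle near the rational expectations equilibrium.

```python
# Stripped-down adaptive-learning illustration (not Gaus's model):
# outcomes depend on expectations, x_t = mu + alpha * E_t + e_t, and agents
# set E_t to the average of the last N observations. The rational
# expectations equilibrium (REE) is x = mu / (1 - alpha).
import numpy as np

rng = np.random.default_rng(1)
mu, alpha, N, T = 2.0, 0.5, 20, 2000
x = np.zeros(T)
x[0] = mu
for t in range(1, T):
    window = x[max(0, t - N):t]        # fixed window of past data
    expectation = window.mean()        # agents' simple forecast
    x[t] = mu + alpha * expectation + rng.normal(0, 0.5)

print(f"REE: {mu / (1 - alpha):.2f}, learned average: {x[-200:].mean():.2f}")
```

With alpha below one the learning dynamics are stable here; the interesting cases in the paper involve switching between learning rules, which this sketch does not attempt.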

Wednesday, May 08, 2013

What is Wrong (and Right) in Economics?

Dani Rodrik:

What is wrong (and right) in economics?, by Dani Rodrik: The World Economics Association recently interviewed me on the state of economics, inquiring about my views on pluralism in the profession. You can find the result on the WEA's newsletter here (the interview starts on page 9). I reproduce it below. ...

Saturday, May 04, 2013

'Microfounded Social Welfare Functions'

This is very wonkish, but it's also very important. The issue is whether DSGE models used for policy analysis can properly capture the relative costs of deviations of inflation and output from target. Simon Wren-Lewis argues -- and I very much agree -- that the standard models are not a very good guide to policy because they vastly overstate the cost of inflation relative to the cost of output (and employment) fluctuations (see the original for the full argument and links to source material):

Microfounded Social Welfare Functions, by Simon Wren-Lewis: More on Beauty and Truth for economists

... Woodford’s derivation of social welfare functions from representative agent’s utility ... can tell us some things that are interesting. But can it provide us with a realistic (as opposed to model consistent) social welfare function that should guide many monetary and fiscal policy decisions? Absolutely not. As I noted in that recent post, these derived social welfare functions typically tell you that deviations of inflation from target are much more important than output gaps - ten or twenty times more important. If this was really the case, and given the uncertainties surrounding measurement of the output gap, it would be tempting to make central banks pure (not flexible) inflation targeters - what Mervyn King calls inflation nutters.

Where does this result come from? ... Many DSGE models use sticky prices and not sticky wages, so labour markets clear. They tend, partly as a result, to assume labour supply is elastic. Gaps between the marginal product of labor and the marginal rate of substitution between consumption and leisure become small. Canzoneri and coauthors show here how sticky wages and more inelastic labour supply will increase the cost of output fluctuations... Canzoneri et al argue that labour supply inelasticity is more consistent with micro evidence.

Just as important, I would suggest, is heterogeneity. The labour supply of many agents is largely unaffected by recessions, while others lose their jobs and become unemployed. Now this will matter in ways that models in principle can quantify. Large losses for a few are more costly than the same aggregate loss equally spread. Yet I believe even this would not come near to describing the unhappiness the unemployed actually feel (see Chris Dillow here). For many there is a psychological/social cost to unemployment that our standard models just do not capture. Other evidence tends to corroborate this happiness data.

So there are two general points here. First, simplifications made to ensure DSGE analysis remains tractable tend to diminish the importance of output gap fluctuations. Second, the simple microfoundations we use are not very good at capturing how people feel about being unemployed. What this implies is that conclusions about inflation/output trade-offs, or the cost of business cycles, derived from microfounded social welfare functions in DSGE models will be highly suspect, and almost certainly biased.

Now I do not want to use this as a stick to beat up DSGE models, because often there is a simple and straightforward solution. Just recalculate any results using an alternative social welfare function where the cost of output gaps is equal to the cost of inflation. For many questions addressed by these models results will be robust, which is worth knowing. If they are not, that is worth knowing too. So it's a virtually costless thing to do, with clear benefits.
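
Concretely, the derived objectives Wren-Lewis is describing are typically approximations to a quadratic loss in inflation and the output gap,

$$L_t = (\pi_t - \pi^{*})^{2} + \lambda\, y_t^{2},$$

where the microfounded derivations deliver a very small relative weight on the output gap (his "ten or twenty times" corresponds to something like lambda between 0.05 and 0.1), and the robustness check he proposes amounts to re-running the policy exercise with lambda equal to one.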

Yet it is rarely done. I suspect the reason why is that a referee would say ‘but that ad hoc (aka more realistic) social welfare function is inconsistent with the rest of your model. Your complete model becomes internally inconsistent, and therefore no longer properly microfounded.’ This is so wrong. It is modelling what we can microfound, rather than modelling what we can see. Let me quote Caballero...

“[This suggests a discipline that] has become so mesmerized with its own internal logic that it has begun to confuse the precision it has achieved about its own world with the precision that it has about the real one.”

As I have argued before (post here, article here), those using microfoundations should be pragmatic about the need to sometimes depart from those microfoundations when there are clear reasons for doing so. (For an example of this pragmatic approach to social welfare functions in the context of US monetary policy, see this paper by Chen, Kirsanova and Leith.) The microfoundation purist position is a snake charmer, and has to be faced down.



Friday, May 03, 2013

Romer and Stiglitz on the State of Macroeconomics

Two essays on the state of macroeconomics:

First, David Romer argues our recent troubles are an extreme version of an ongoing problem:

... As I will describe, my reading of the evidence is that the events of the past few years are not an aberration, but just the most extreme manifestation of a broader pattern. And the relatively modest changes of the type discussed at the conference, and that in some cases policymakers are putting into place, are helpful but unlikely to be enough to prevent future financial shocks from inflicting large economic harms.
Thus, I believe we should be asking whether there are deeper reforms that might have a large effect on the size of the shocks emanating from the financial sector, or on the ability of the economy to withstand those shocks. But there has been relatively little serious consideration of ideas for such reforms, not just at this conference but in the broader academic and policy communities. ...

He goes on to describe some changes he'd like to see, for example:

I was disappointed to see little consideration of much larger financial reforms. Let me give four examples of possible types of larger reforms:

  • There were occasional mentions of very large capital requirements. For example, Allan Meltzer noted that at one time 25 percent capital was common for banks. Should we be moving to such a system?
  • Amir Sufi and Adair Turner talked about the features of debt contracts that make them inherently prone to instability. Should we be working aggressively to promote more indexation of debt contracts, more equity-like contracts, and so on?
  • We can see the costs that the modern financial system has imposed on the real economy. It is not immediately clear that the benefits of the financial innovations of recent decades have been on a scale that warrants those costs. Thus, might a much simpler, 1960s- or 1970s-style financial system be better than what we have now?
  • The fact that shocks emanating from the financial system sometimes impose large costs on the rest of the economy implies that there are negative externalities to some types of financial activities or financial structures, which suggests the possibility of Pigovian taxes.

So, should there be substantial taxes on certain aspects of the financial system? If so, what should be taxed – debt, leverage, size, other indicators of systemic risk, a combination, or something else altogether?

Larger-scale solutions on the macroeconomic side ...

After a long discussion, he concludes with:

After five years of catastrophic macroeconomic performance, “first steps and early lessons” – to quote the conference title – is not what we should be aiming for. Rather, we should be looking for solutions to the ongoing current crisis and strong measures to minimize the chances of anything similar happening again. I worry that the reforms we are focusing on are too small to do that, and that what is needed is a more fundamental rethinking of the design of our financial system and of our frameworks for macroeconomic policy.

Second, Joe Stiglitz:

In analyzing the most recent financial crisis, we can benefit somewhat from the misfortune of recent decades. The approximately 100 crises that have occurred during the last 30 years—as liberalization policies became dominant—have given us a wealth of experience and mountains of data. If we look over a 150 year period, we have an even richer data set.
With a century and a half of clear, detailed information on crisis after crisis, the burning question is not "How did this happen?" but "How did we ignore that long history, and think that we had solved the problems of the business cycle?" Believing that we had made big economic fluctuations a thing of the past took a remarkable amount of hubris....

In his lengthy essay, he goes on to discuss:

Markets are not stable, efficient, or self-correcting

  • The models that focused on exogenous shocks simply misled us—the majority of the really big shocks come from within the economy.
  • Economies are not self-correcting.

More than deleveraging, more than a balance sheet crisis: the need for structural transformation

  • The fact that things have often gone badly in the aftermath of a financial crisis doesn’t mean they must go badly.

Reforms that are, at best, half-way measures

  • The reforms undertaken so far have only tinkered at the edges.
  • The crisis has brought home the importance of financial regulation for macroeconomic stability.

Deficiencies in reforms and in modeling

  • The importance of credit
    • A focus on the provision of credit has been at the center of neither policy discourse nor the standard macro-models.
    • There is also a lack of understanding of different kinds of finance.
  • Stability
  • Distribution

Policy Frameworks

  • Flawed models not only lead to flawed policies, but also to flawed policy frameworks.
  • Should monetary policy focus just on short term interest rates?
  • Price versus quantitative interventions
  • Tinbergen

Stiglitz ends with:

Take this chance to revolutionize flawed models
It should be clear that we could have done much more to prevent this crisis and to mitigate its effects. It should be clear too that we can do much more to prevent the next one. Still, through this conference and others like it, we are at least beginning to clearly identify the really big market failures, the big macroeconomic externalities, and the best policy interventions for achieving high growth, greater stability, and a better distribution of income.
To succeed, we must constantly remind ourselves that markets on their own are not going to solve these problems, and neither will a single intervention like short-term interest rates. Those facts have been proven time and again over the last century and a half.
And as daunting as the economic problems we now face are, acknowledging this will allow us to take advantage of the one big opportunity this period of economic trauma has afforded: namely, the chance to revolutionize our flawed models, and perhaps even exit from an interminable cycle of crises.

Thursday, May 02, 2013

'Economics Needs Replication'

Via Eric Weiner at INET, and continuing a recent discussion, this is Jan Höffler and Thomas Kneib on replication in economics:

Economics Needs Replication, by INET Grantees Jan Höffler and Thomas Kneib: The recent debate about the reproducibility of the results published by Carmen Reinhart and Kenneth Rogoff offers a showcase for the importance of replication in empirical economics.
Replication is basically the independent repetition of scientific analyses by other scientists... The principle is well accepted in the natural sciences. However, it is far less common in empirical economics, even though non-reproducible research can barely be considered a contribution to the consolidated body of scientific knowledge. ...
In the narrow sense, replicability means that the raw data for an analysis can be accessed, that the transformation from the raw data to the final data set is well documented, and that software code is available for producing the final data set and the empirical results. Basically, this comes down to a question of data and code availability, but nonetheless it is a necessary prerequisite for replication. A successful replication would then indicate that all of the material has been provided and that the same results were obtained when redoing the analysis.
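
In code terms, the narrow-sense standard is something a short, self-contained script can meet. A minimal sketch, with hypothetical file and column names:

```python
# Minimal sketch of narrow-sense replicability (hypothetical names and
# numbers): archived raw data in, documented transformations, final data
# set and headline result out -- every step is code, not hand-editing.
import pandas as pd

# Stand-in for an archived raw-data file distributed with the paper.
raw = pd.DataFrame({
    "year":       [1950, 1960, 1970, 1980, 1990, 2000],
    "debt_ratio": [0.40, 0.95, 0.55, 1.10, 0.60, 0.30],
    "gdp_growth": [4.1, 1.0, 3.2, None, 2.8, 3.9],
})
raw.to_csv("raw_data.csv", index=False)

# The transformation from raw to final data is fully documented in code.
df = pd.read_csv("raw_data.csv").dropna(subset=["gdp_growth"])
df.to_csv("final_dataset.csv", index=False)

# The headline result is produced by the same script, so anyone with the
# raw file can re-derive both the final data set and the estimate.
print(df.groupby(df["debt_ratio"] > 0.9)["gdp_growth"].mean())
```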
In the wider sense, a replication could go much further by challenging previous analyses via changing the data sources (such as changing countries, switching time periods, or using different surveys), altering the statistical or econometric model, or questioning the interpretational conclusions drawn from the analysis. Here, the scientific debate really starts, since this type of replication isn’t concerned with simply redoing exactly the same analysis as the original study. Rather, the goal is to rethink the entire analysis, from data collection and operationalization to the interpretation of results and robustness checks.
Unfortunately, very few journals in economics have mandatory online archives for data and code and/or strict rules for ensuring replicability. Moreover, the incentive for making your own research reproducible, and for reproducing research done by others, is low.
In this respect, there are several interesting lessons to be learned by the Reinhart/Rogoff case.
One is that the impact of replication can actually be quite high, especially when replicating papers that have been influential... Still, it is important to remember that replications that question earlier results are not the only ones that are of value. It is also helpful to know if a specific study could be replicated...
Another important lesson is that involving students in replications can significantly change attitudes towards replication. For students, a replication is a perfect opportunity to perform their own analyses based on an already available paper. They get to learn how experienced scientists tackle applied-research questions and they also learn that the consolidated body of scientific knowledge is constantly changing as it is questioned and transferred to new contexts.
Finally, it is very important for the raw data to be made available so that every step up to the final results of a study can be replicated. ...
In recent years, we have been teaching replication to students at all levels (ranging from undergraduates to Ph.D. candidates) and have set up a large global network to support the idea of replication by students. ...
As a part of our INET project on empirical replication, we are therefore collecting and sharing a large dataset of empirical studies. These studies are all potential candidates for replication that meet the minimal requirements for replicability. Information on these studies, as well as additional information about already published replications, is available in a wiki shared with collaborators who join our teaching initiative. Moreover, we will soon provide additional resources via the same wiki website to support the teaching of replication seminars. We have also started a working paper series on replication, so that replication papers can be published as reports, and the wiki provides a forum for discussing replicability.
We welcome you to join our efforts... You can find more information and contact us here and here.

One place where replication occurs regularly is assignments in graduate classes. I routinely ask students to replicate papers as part of their coursework. Even if they don't find explicit errors (and most of the time they don't), it almost always raises good questions about the research (why this choice, this model, what if you relax this assumption, there's a better way to do this, here's the next question to ask, etc.). So replication does occur routinely in economics, and it is very valuable, but it is not a formal part of the profession the way it should be, and much of the replication is done by people (students) who generally assume that if they can't replicate something, it is probably their error. We have a lot of work to do on the replication front, and I want to encourage efforts like this.

Tuesday, April 23, 2013

A New and Improved Macroeconomics

New column:

A New and Improved Macroeconomics, by Mark Thoma: Macroeconomics has not fared well in recent years. The failure of standard macroeconomic models during the financial crisis provided the first big blow to the profession, and the recent discovery of the errors and questionable assumptions in the work of Reinhart and Rogoff further undermined the faith that people have in our models and empirical methods.
What will it take for the macroeconomics profession to do better? ...

Wednesday, April 17, 2013

Empirical Methods and Progress in Macroeconomics

The blow-up over the Reinhart-Rogoff results reminds me of a point I’ve been meaning to make about our ability to use empirical methods to make progress in macroeconomics. This isn't about the computational mistakes that Reinhart and Rogoff made, though those are certainly important, especially in small samples; it's about the quantity and quality of the data we use to draw important conclusions in macroeconomics.

Everybody has been highly critical of theoretical macroeconomic models, DSGE models in particular, and for good reason. But the imaginative construction of theoretical models is not the biggest problem in macro – we can build reasonable models to explain just about anything. The biggest problem in macroeconomics is the inability of econometricians of all flavors (classical, Bayesian) to definitively choose one model over another, i.e. to sort between these imaginative constructions. We like to think of ourselves as scientists, but if data cannot settle our theoretical disputes – and it doesn’t appear that they can – then our claim to scientific validity has little or no merit.

There are many reasons for this. For example, the use of historical rather than “all else equal” laboratory/experimental data makes it difficult to figure out whether a particular relationship we find in the data reveals an important truth rather than a chance run that mimics a causal relationship. If we could do repeated experiments, or compare data across countries (or other jurisdictions) without worrying about the all-else-equal assumption, we could perhaps sort this out. But, unfortunately, there are too many institutional differences and common shocks across countries to reliably treat each country as an independent, all-else-equal experiment. Without repeated experiments – with just one set of historical data for the US to rely upon – it is extraordinarily difficult to tell the difference between a spurious correlation and a true, noteworthy relationship in the data.
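
The spurious-correlation problem is easy to demonstrate by simulation, in the spirit of Granger and Newbold's classic result: two random walks that are independent by construction will routinely look strongly "related" in any single sample.

```python
# Two *independent* random walks frequently show large sample correlations
# in any single draw -- the classic spurious-regression problem with
# trending historical data (Granger and Newbold, 1974).
import numpy as np

rng = np.random.default_rng(42)
T, reps = 200, 1000
big_corr = 0
for _ in range(reps):
    x = np.cumsum(rng.normal(size=T))
    y = np.cumsum(rng.normal(size=T))   # independent of x by construction
    if abs(np.corrcoef(x, y)[0, 1]) > 0.5:
        big_corr += 1

print(f"|correlation| > 0.5 in {big_corr / reps:.0%} of samples")
```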

Even so, if we had a very, very long time-series for a single country, and if certain regularity conditions persisted over time (e.g. no structural change), we might be able to answer important theoretical and policy questions (if the same policy is tried again and again over time within a country, we can sort out the random and the systematic effects). Unfortunately, the time period covered by a typical data set in macroeconomics is relatively short (so that very few useful policy experiments are contained in the available data, e.g. there are very few data points telling us how the economy reacts to fiscal policy in deep recessions).

There is another problem with using historical as opposed to experimental data: testing theoretical models against data the researcher knows about when the model is built. In this regard, when I was a new assistant professor Milton Friedman presented some work at a conference that impressed me quite a bit. He resurrected a theoretical paper he had written 25 years earlier (it was his plucking model of aggregate fluctuations), and tested it against the data that had accumulated in the time since he had published his work. It’s not really fair to test a theory against historical macroeconomic data; we all know what the data say, and it would be foolish to build a model that is inconsistent with the historical data it was built to explain – of course the model will fit the data, who would be impressed by that? But a test against data that the investigator could not have known about when the theory was formulated is a different story – those tests are meaningful (Friedman’s model passed the test using only the newer data).

As a young time-series econometrician struggling with data/degrees of freedom issues I found this encouraging. So what if in 1986 – when I finished graduate school – there were only 28 years of quarterly observations for macro variables (112 observations in total; reliable data on money, which I almost always needed, don’t begin until 1959). By, say, the end of 2012 there would be almost double that amount (216 versus 112!!!). Asymptotic (plim-type) results here we come! (Switching to monthly data doesn’t help much, since it’s the span of the data – the distance between the beginning and the end of the sample – rather than the frequency at which the data are sampled that determines many of the “large-sample” results.)
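The span-versus-frequency point can be checked directly. Here is a small Monte Carlo, my own sketch with made-up parameters: when estimating the drift of a random walk, tripling the sampling frequency barely moves the standard error, while doubling the span cuts it by roughly a factor of 1/sqrt(2).

```python
import numpy as np

rng = np.random.default_rng(2)

def drift_se(years, obs_per_year, reps=20000, mu=0.5, sigma=2.0):
    """Monte Carlo standard error of the estimated drift of a random
    walk observed obs_per_year times a year for `years` years."""
    n = years * obs_per_year
    dt = 1.0 / obs_per_year
    steps = mu * dt + sigma * np.sqrt(dt) * rng.normal(size=(reps, n))
    estimates = steps.sum(axis=1) / years  # (y_end - y_start) / span
    return estimates.std()

print("28 years, quarterly:", round(drift_se(28, 4), 3))
print("28 years, monthly:  ", round(drift_se(28, 12), 3))  # more frequency: ~no gain
print("56 years, quarterly:", round(drift_se(56, 4), 3))   # more span: clear gain
```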

By today, I thought, I would have almost double the data I had back then and that would improve the precision of tests quite a bit. I could also do what Friedman did, take really important older papers that give us results “everyone knows” and see if they hold up when tested against newer data.

It didn’t work out that way. There was a big change in the Fed’s operating procedure in the early 1980s, and because of this structural break 1984 is now a common starting point for empirical investigations (start dates can fall anywhere in the 1979-84 range, though later dates are more common). Data before this period are simply discarded.

So, here we are 25 years or so later, and macroeconomists have no more data at our disposal than we did when I was in graduate school. And if the structure of the economy keeps changing – as it will – the same will probably be true 25 years from now. We will either have to model the structural change explicitly (which isn’t easy, and attempts to model structural breaks often induce as much uncertainty as clarity), or continually discard historical data as time goes on (maybe big data, digital technology, theoretical advances, etc. will help?).
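A toy example (again mine, not from the post) shows why the break forces the data to be split: a regression pooled across a simulated 1984-style break just averages two different regimes, while sub-sample estimates recover each one.

```python
import numpy as np

rng = np.random.default_rng(3)
T, brk = 200, 100               # 50 "years" of quarterly data, break midway
x = rng.normal(size=T)
beta = np.where(np.arange(T) < brk, 1.5, 0.3)  # the relationship changes
y = beta * x + rng.normal(size=T)

def slope(xs, ys):              # OLS slope through the origin
    return (xs @ ys) / (xs @ xs)

print("pooled:    ", round(slope(x, y), 2))             # a misleading average
print("pre-break: ", round(slope(x[:brk], y[:brk]), 2))
print("post-break:", round(slope(x[brk:], y[brk:]), 2))
```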

The point is that for a variety of reasons – the lack of experimental data, small data sets, and important structural change foremost among them – empirical macroeconomics is not able to definitively say which competing model of the economy best explains the data. There are some questions we’ve been able to address successfully with empirical methods, e.g., there has been a big change in views about the effectiveness of monetary policy over the last few decades driven by empirical work. But for the most part empirical macro has not been able to settle important policy questions. The debate over government spending multipliers is a good example. Theoretically the multiplier can take a range of values from small to large, and even though most theoretical models in use today say that the multiplier is large in deep recessions, ultimately this is an empirical issue. I think the preponderance of the empirical evidence shows that multipliers are, in fact, relatively large in deep recessions – but you can find whatever result you like and none of the results are sufficiently definitive to make this a fully settled issue.

I used to think that the accumulation of data along with ever improving empirical techniques would eventually allow us to answer important theoretical and policy questions. I haven’t completely lost faith, but it’s hard to be satisfied with our progress to date. It’s even more disappointing to see researchers overlooking these well-known, obvious problems – for example the lack of precision and the sensitivity to data errors that come with relying on just a few observations – to oversell their results.

Friday, April 05, 2013

What is the Role of Psychological Considerations in Economics? - INET

I went to a different breakout session, but heard very good things about this one on "psychological considerations in economics":

Many of our global problems – from climate change to financial crises – arise from people’s failure to cooperate adequately to achieve socially desirable outcomes. There is a widespread recognition that we need a deeper understanding of human nature in order to discern new opportunities for human cooperation. The Kiel Institute and the Max Planck Institute in Leipzig are developing an interdisciplinary program with INET to examine new avenues of how psychological and neuroscientific knowledge about human motivation, emotion and social cognition can inform models of economic decision making. How can a profounder understanding of human motivation and preferences lead to a broader appreciation of our prospects for pro-social and sustainable economic behaviors?

  • David Tuckett - Training and Supervising Analyst in the British Psychoanalytical Society 
  • Inske Pirschel - Research Assistant, Christian-Albrechts University of Kiel 
  • Gert Pönitzsch - Research Assistant, Kiel Institute for the World Economy 
  • Cars Hommes - Professor of Economics Universiteit van Amsterdam
  • Moderator: Dennis Snower - President, Kiel Institute for the World Economy

Friday, March 22, 2013

'Inequality, Evolution, & Complexity'

Chris Dillow:

Inequality, Evolution, & Complexity: Why has mainstream neoclassical economics traditionally had little to say about the causes and effects of inequality? This is the question raised in an interesting new paper by Brendan Markey-Towler and John Foster.

They suggest that the blindness is inherent in the very structure of the discipline. If you think of representative agents maximizing utility in a competitive environment, inequality has nowhere to come from unless you impose it ad hoc, say in the form of "skilled" and "unskilled" workers.

But there's an alternative, they say. If we think of the economy as a complex (pdf) adaptive system - as writers such as Eric Beinhocker, Cars Hommes and Brian Arthur suggest - then inequality becomes a central feature. This is partly because such evolutionary processes inherently generate winners and losers, and partly because they ditch representative agents and so introduce lumpy granularity. ...

This ... new paper by Pablo Torija ... shows how, since the 1980s, western politicians have stopped maximizing the well-being of the median voter, and have instead served the richest few per cent. If the economy is an adaptive ecosystem, it is one in which a few predators are winning at the expense of the prey.

Saturday, March 16, 2013

'A Profession With an Egalitarian Core'

Tyler Cowen:

A Profession With an Egalitarian Core: ...A distressingly large portion of the debate in many countries analyzes the effects of higher immigration on domestic citizens alone and seeks to restrict immigration to protect a national culture or existing economic interests. The obvious but too-often-underemphasized reality is that ... immigration could create tens of trillions of dollars in economic value, as captured by the migrants themselves in the form of higher wages in their new countries and by those who hire the migrants or consume the products of their labor. ...

In any case, there is an overriding moral issue. Imagine that it is your professional duty to report a cost-benefit analysis of liberalizing immigration policy. You wouldn’t dream of producing a study that counted “men only” or “whites only,” at least not without specific, clearly stated reasons for dividing the data.

So why report cost-benefit results only for United States citizens or residents, as is sometimes done in analyses of both international trade and migration? The nation-state is a good practical institution, but it does not provide the final moral delineation of which people count and which do not. ...

Economics evolved as a more moral and more egalitarian approach to policy than prevailed in its surrounding milieu. Let’s cherish and extend that heritage. The real contributions of economics to human welfare might turn out to be very different from what most people — even most economists — expect.

I can understand why it might have been advantageous from an evolutionary perspective for nature to make us care most about those who are closest to us.

It seems like there are two ways to get beyond this tribalism. The first is to expand the definition of the tribe to include everyone. According to the column, and to economic theory and evidence more generally, open borders don't just benefit immigrants; they help everyone. Since immigrants benefit us all, the definition of the tribe should be expanded to include them. The second, which is also in the column, is to argue that it's a moral issue. We can and should grow beyond the tribal instincts that lead to war and other problems, forget about borders of all types as a distinction for measuring the costs and benefits of immigration, and treat everyone the same.

Under the first approach, we care about people based upon what they can do to help us. Under the second, which I prefer, we care about people simply because they are people.

Tuesday, March 05, 2013

'Are Sticky Prices Costly? Evidence From The Stock Market'

There has been a debate in macroeconomics over whether sticky prices -- the key feature of New Keynesian models -- are actually as sticky as assumed, and how large the costs associated with price stickiness actually are. This paper finds "evidence that sticky prices are indeed costly":

Are Sticky Prices Costly? Evidence From The Stock Market, by Yuriy Gorodnichenko and Michael Weber, NBER Working Paper No. 18860, February 2013 [open link]: We propose a simple framework to assess the costs of nominal price adjustment using stock market returns. We document that, after monetary policy announcements, the conditional volatility rises more for firms with stickier prices than for firms with more flexible prices. This differential reaction is economically large as well as strikingly robust to a broad array of checks. These results suggest that menu costs---broadly defined to include physical costs of price adjustment, informational frictions, etc.---are an important factor for nominal price rigidity. We also show that our empirical results are qualitatively and, under plausible calibrations, quantitatively consistent with New Keynesian macroeconomic models where firms have heterogeneous price stickiness. Since our approach is valid for a wide variety of theoretical models and frictions preventing firms from price adjustment, we provide "model-free" evidence that sticky prices are indeed costly.
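To fix ideas, here is a deliberately stripped-down sketch of the comparison the abstract describes, using stand-in simulated returns rather than the authors' data or methods; the group volatilities below are assumptions built in purely so the comparison has something to find.

```python
import numpy as np

rng = np.random.default_rng(4)
n_firms, n_days = 200, 120  # hypothetical firms and announcement days

# Stand-in announcement-day returns (the 0.5 and 0.9 are assumptions)
flexible = 0.5 * rng.normal(size=(n_firms, n_days))
sticky = 0.9 * rng.normal(size=(n_firms, n_days))

# Average, across firms, of each firm's announcement-day volatility
print(f"flexible-price firms: {flexible.std(axis=1).mean():.3f}")
print(f"sticky-price firms:   {sticky.std(axis=1).mean():.3f}")
```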

Tuesday, February 19, 2013

Ageing and Productivity in Economics

Daniel Hamermesh argues that innovation in economics is slowing, which allows older economists to stay in the game longer than in the past:

Ageing and productivity: Economists and others, by Daniel S. Hamermesh, Vox EU: Is economics still a young person’s game? If not, what is changing? This column argues that although top-level economic research in the 1990s was very much a young person’s game, the last 15 years has been kinder to older economists. More and more economists over 50 are being published in the top journals. Why? Because technological change in economic research is slowing, giving young researchers less competitive edge. ...

There's an alternative explanation. Older economists have more power over journals and other key research outlets than they used to, and they have kept the topics they work on alive much longer than in the past.

As for the thesis about technological change, it may be true of micro, though even there I'm not sure -- Varian does differ from Mas-Colell -- but what students in macro learn today is very different, both in technique and content, from what they learned 50 years ago. The same goes for some aspects of econometrics, e.g. the rise of the Bayesians. But if you take a shorter horizon in macro, dating from the rise of DSGE as a technique, and consider the forces for change that ought to exist in macro at present, it's harder to disagree with the stagnation thesis Hamermesh puts forward (which mainly concerns the last 15 years). I think it's the hold that some of the "older" macroeconomists still have over the journals, NBER meetings, and the like that allows them to steer the theoretical agenda. But that's mostly just a hypothesis; I don't have any hard evidence to back it up, and I'm not all that confident it's correct. Maybe it's just that nothing better has come along.

In any case, here's a hint he's mostly thinking about micro:

In no way should the implied slowdown in methodological advance be viewed as negative for the profession as a whole. For the role of economics in society, the question is whether the profession is keeping up with the problems of an evolving complex society, not how it solves them. While one might despair of our progress in understanding issues and offering solutions for macroeconomic difficulties, the remarkable advances in the application of microeconomic ideas to real-world problems should be reassuring.

The "implied slowdown in methodological advance" might be okay for microeconomists, and for the "profession as a whole" if micro carries the most weight, but innovation -- perhaps aided by a change in the power structure within the profession -- is surely needed in macro.

Big Data?

Paul Krugman:

Data, Stimulus, and Human Nature, by Paul Krugman: David Brooks writes about the limitations of Big Data, and makes some good points. But he goes astray, I think, when he touches on a subject near and dear to my own data-driven heart:

For example, we’ve had huge debates over the best economic stimulus, with mountains of data, and as far as I know not a single major player in this debate has been persuaded by data to switch sides.

Actually, he’s not quite right there, as I’ll explain in a minute. But it’s certainly true that neither stimulus advocates nor hard-line stimulus opponents have changed their positions. The question is, does this say something about the limits of data — or is it just a commentary on human nature, especially in a highly politicized environment?

For the truth is that there were some clear and very different predictions from each side of the debate... On these predictions, the data have spoken clearly; the problem is that people don’t want to hear..., and the fact that they don’t happen has nothing to do with the limitations of data. ...

That said, if you look at players in the macro debate who would not face huge personal and/or political penalties for admitting that they were wrong, you actually do see data having a considerable impact. Most notably, the IMF has responded to the actual experience of austerity by conceding that it was probably underestimating fiscal multipliers by a factor of about 3.

So yes, it has been disappointing to see so many people sticking to their positions on fiscal policy despite overwhelming evidence that those positions are wrong. But the fault lies not in our data, but in ourselves.

I'll just add that when it comes to the debate over the multiplier and the macroeconomic data used to try to settle the question, the term "Big Data" doesn't really apply. If we actually had "Big Data," we might be able to get somewhere but as it stands -- with so little data and so few relevant historical episodes with similar policies -- precise answers are difficult to ascertain. And it's even worse than that. Let me point to something David Card said in an interview I posted yesterday:

I think many people are concerned that much of the research they see is biased and has a specific agenda in mind. Some of that concern arises because of the open-ended nature of economic research. To get results, people often have to make assumptions or tweak the data a little bit here or there, and if somebody has an agenda, they can inevitably push the results in one direction or another. Given that, I think that people have a legitimate concern about researchers who are essentially conducting advocacy work.

If we had the "Big Data" we need to answer these questions, this would be less of a problem. But with quarterly data from 1960 (when the money data start; you can go back to 1947 otherwise), or from 1982 (to avoid big structural changes and the change in Fed operating procedures), or even monthly data (if you don't need variables like GDP), there isn't the precision needed to resolve these questions: 50 years of quarterly data is only 200 observations. There is also a lot of freedom to steer the results in a particular direction, so we have to rely upon the integrity of researchers not to push a particular agenda. Most play it straight up and report the answers however they come out, but there are enough voices with agendas -- particularly, though not exclusively, from think tanks and the like -- to cloud the issues and make it difficult for the public to separate the honest work from agenda-based, one-sided, sometimes dishonest presentations. And there are also the issues noted above about people sticking to their positions, honestly in their own view even when those positions rest on data-mining, on changing assumptions until the results come out "right," and so on, because the data don't provide enough clarity to force them to give up beliefs in which they've invested considerable effort.
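Card's point about researcher degrees of freedom is easy to demonstrate. In the toy example below (mine, with two unrelated simulated series), merely shopping over the sample start date moves the reported t-statistic around considerably:

```python
import numpy as np

rng = np.random.default_rng(6)
T = 200
x = rng.normal(size=T)
y = rng.normal(size=T)  # the true effect of x on y is exactly zero

def t_stat(xs, ys):
    """t-statistic for the OLS slope (no intercept)."""
    b = (xs @ ys) / (xs @ xs)
    sigma2 = ((ys - b * xs) ** 2).mean()
    return b / np.sqrt(sigma2 / (xs @ xs))

# "Tweak" the sample start date and keep whichever result you prefer
ts = [t_stat(x[s:], y[s:]) for s in range(120)]
print(f"t-statistics range from {min(ts):.2f} to {max(ts):.2f}")
```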

So I wish we had "Big Data," and not just a longer time series of macro data; it would also be useful to re-run the economy hundreds or thousands of times and evaluate monetary and fiscal policies across those experiments. With just one run of the economy, you can't always be sure whether the uptick you see in historical data after, say, a tax cut comes from the treatment, from randomness, or from something else entirely. With many, many runs of the economy that can be sorted out (cross-country comparisons can help, but the all-else-equal condition is never satisfied, which makes the comparisons suspect).
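Here is a toy version of that thought experiment, a sketch with made-up numbers: a spending multiplier estimated from one short "history" is noisy, but averaging across thousands of simulated histories pins it down.

```python
import numpy as np

rng = np.random.default_rng(7)
TRUE_MULTIPLIER = 1.5  # an assumption of the simulation

def one_history(n_episodes=10):
    """Estimate the multiplier from a single 'history' containing
    only a handful of noisy stimulus episodes."""
    g = rng.uniform(1.0, 3.0, size=n_episodes)  # spending changes
    dy = TRUE_MULTIPLIER * g + rng.normal(0.0, 3.0, n_episodes)
    return (g @ dy) / (g @ g)

one_run = one_history()
many_runs = np.array([one_history() for _ in range(5000)])
print(f"one history:     {one_run:.2f}")
print(f"5,000 histories: mean {many_runs.mean():.2f}, std {many_runs.std():.2f}")
```

With many runs, the systematic effect separates cleanly from the noise; with one run, it need not.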

Despite a few research efforts such as the Billion Prices Project, "Little Data," with all the problems that come with it, remains a better description of empirical macroeconomics.