Monday, September 19, 2016
How to Build a Better Macroeconomics: Methodology Specifics. I want to follow up on my comments about Paul Romer’s interesting recent piece by being more precise about how I believe macroeconomic research could be improved.
Macro papers typically proceed as follows:
- Question stated.
- Some reduced form analysis to "motivate" the question/answer.
- Question inputted into model. Model is a close variant of prior models grounded in four or five 1980s frameworks. The variant is generally based on introspection combined with some calibration of relevant parameters.
- Answer reported.
The problem is that the prior models have a host of key behavioral assumptions that have little or no empirical grounding. In this pdf, I describe one such behavioral assumption in some detail: the response of current consumption to persistent interest rate changes.
But there are many other such assumptions embedded in our models. For example, most macroeconomists study questions that depend crucially on how agents form expectations about the future. However, relatively few papers use evidence of any kind to inform their modeling of expectations formation. (And no, it’s not enough to say that Tom Sargent studied the consequences of one particular kind of learning in the late 1980s!) The point is that if your paper poses a question that depends on how agents form expectations, you should provide evidence from experimental or micro-econometric sources to justify your approach to expectation formation in your particular context.
So, I suggest the following would be a better approach:
1. Question stated.
2. Thorough theoretical analysis of the key mechanisms/responses that are likely to inform the answer to the question (perhaps via "toy" models?).
3. Find evidence on the ranges of magnitudes of the relevant mechanisms/responses.
4. Build and evaluate a range of models informed by this evidence. (Identification limitations are likely to mean that, given available data, there is a range of models that will be relevant in addressing most questions.)
5. Range of answers to (1), given (4).
Should all this be done in one paper? Probably not. I suspect that we need a more collaborative approach to our questions - a team works on (2), another team works on (3), a third team works on (4), and we arrive at (5). I could readily see each step being viewed as a valuable contribution to economic science.
In terms of (3) - evidence - our micro colleagues can be a great source on this dimension. In my view, the most useful labor supply paper for macroeconomists in the past thirty years is this one - and it’s not written by a macroeconomist.
(If people know of existing papers that follow this approach, feel free to email me a reference at firstname.lastname@example.org.)
None of these ideas are original to me. They were actually exposited nearly forty years ago. The central idea is that individual responses can be documented relatively cheaply, occasionally by direct experimentation, but more commonly by means of the vast number of well-documented instances of individual reactions to well-specified environmental changes made available "naturally" via censuses, panels, other surveys, and the (inappropriately maligned as "casual empiricism") method of keeping one's eyes open.
I’m not totally on board with the author in what he says here. I'm a lot less enthralled by the value of “casual empiricism” in a world in which most macroeconomists mainly spend their time with other economists, but otherwise agree wholeheartedly with these words. And I probably see more of a role for direct experimentation than the author does. But those are both quibbles.
And these words from the same article seem even more apt:

Researchers … will appreciate the extent to which … [this agenda] describes hopes for the future, not past accomplishments. These hopes might, without strain, be described as hopes for a kind of unification, not dissimilar in spirit from the hope for unification which informed the neoclassical synthesis. What I have tried to do above is to stress the empirical (as opposed to the aesthetic) character of these hopes, to try to understand how such quantitative evidence about behavior as we may reasonably expect to obtain in society as it now exists might conceivably be transformed into quantitative information about the behavior of imagined societies, different in important ways from any which have ever existed. This may seem an intimidatingly ambitious way to state the goal of an applied subfield of a marginally respectable science, but is there a less ambitious way of describing the goal of business cycle theory?
Somehow, macroeconomists have gotten derailed from this vision of a micro-founded unification and retreated into a hermetically sealed world, where past papers rely on prior papers' flawed foundations. We need to get back to the ambitious agenda that Robert Lucas put before us so many years ago.
(I admit that I'm cherry-picking like crazy from Lucas' 1980 classic JMCB piece. For example, one of Lucas' main points in that article was that he distrusted disequilibrium modeling approaches because they gave rise to too many free parameters. I don't find that argument all that relevant in 2016 - I think that we know more now than in 1980 about how micro-data can be fruitfully used to discipline our modeling of firm behavior. And I would suspect that Lucas would be less than fully supportive of what I write about expectations - but I still think I'm right!)
Thursday, September 15, 2016
Dave Elder-Vass at Understanding Society:
Guest post by Dave Elder-Vass: [Dave Elder-Vass accepted my invitation to write a response to my discussion of his recent book, Profit and Gift in the Digital Economy (link). Elder-Vass is Reader in sociology at Loughborough University and author as well of The Causal Power of Social Structures: Emergence, Structure and Agency and The Reality of Social Construction, discussed here and here. Dave has emerged as a leading voice in the philosophy of social science, especially in the context of continuing developments in the theory of critical realism. Thanks, Dave!]
We need to move on from existing theories of the economy
Let me begin by thanking Dan Little for his very perceptive review of my book Profit and Gift in the Digital Economy. As he rightly says, it’s more ambitious than the title might suggest, proposing that we should see our economy not simply as a capitalist market system but as a collection of “many distinct but interconnected practices”. Neither the traditional economist’s focus on firms in markets nor the Marxist political economist’s focus on exploitation of wage labour by capital is a viable way of understanding the real economy, and the book takes some steps towards an alternative view.
Both of those perspectives have come to narrow our view of the economy in multiple dimensions. Our very concept of the economy has been derived from the tradition that began as political economy with Ricardo and Smith then divided into the Marxist and neoclassical traditions (of course there are also others, but they are less influential). Although these conflict radically in some respects they also share some problematic assumptions, and in particular the assumption that the contemporary economy is essentially a capitalist market economy, characterised by the production of commodities for sale by businesses employing labour and capital. As Gibson-Graham argued brilliantly in their book The End Of Capitalism (As We Knew It): A Feminist Critique of Political Economy, ideas seep into the ways in which we frame the world, and when the dominant ideas and the main challengers agree on a particular framing of the world it is particularly difficult for us to think outside of the resulting box. In this case, the consequence is that even critics find it difficult to avoid thinking of the economy in market-saturated terms.
The most striking problem that results from this (and one that Gibson-Graham also identified) is that we come to think that only this form of economy is really viable in our present circumstances. Alternatives are pie in the sky, utopian fantasies, which could never work, and so we must be content with some version of capitalism – until we become so disillusioned that we call for its complete overthrow, and assume that some vague label for a better system can be made real and worthwhile by whoever leads the charge on the Bastille. But we need not go down either of these paths once we recognise that the dominant discourses are wrong about the economy we already have.
To see that, we need to start defining the economy in functional terms: economic practices are those that produce and transfer things that people need, whether or not they are bought and sold. As soon as we do that, it becomes apparent that we are surrounded by non-market economic practices already. The book highlights digital gifts – all those web pages that we load without payment, Wikipedia’s free encyclopaedia pages, and open source software, for example. But in some respects these pale into insignificance next to the household and family economy, in which we constantly produce things for each other and transfer them without payment. Charities, volunteering and in many jurisdictions the donation of blood and organs are other examples.
If we are already surrounded by such practices, and if they are proliferating in the most dynamic new areas of our economy, the idea that they are unworkably utopian becomes rather ridiculous. We can then start to ask questions about what forms of organising are more desirable ethically. Here the dominant traditions are equally warped. Each has a standard argument that is trotted out at every opportunity to answer ethical questions, but in reality both standard arguments operate as means of suppressing ethical discussions about economic questions. And both are derived from an extraordinarily narrow theory of how the economy works.
For the mainstream tradition, there is one central mechanism in the economy: price equilibration in the markets, a process in which prices rise and fall to bring demand and supply into balance. If we add on an enormous list of tenuous assumptions (which economists generally admit are unjustified, and then continue to use anyway), this leads to the theory of Pareto optimality of market outcomes: the argument that if we used some other system for allocating economic benefits some people would necessarily be worse off. This in turn becomes the central justification for leaving allocation to the market (and eliminating ‘interference’ with the market).
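The price-equilibration mechanism described above can be made concrete in a few lines. This is my own toy sketch, not something from the post: the linear demand and supply curves and the adjustment speed are invented purely for illustration.

```python
# Toy tatonnement sketch: the price rises when demand exceeds supply and
# falls otherwise, converging to the market-clearing level. The functional
# forms are hypothetical, chosen only to make the mechanism visible.
def demand(p):
    return 10 - p       # quantity demanded falls with price

def supply(p):
    return 2 * p        # quantity supplied rises with price

p = 1.0
for _ in range(200):
    # adjust the price in proportion to excess demand
    p += 0.1 * (demand(p) - supply(p))

print(round(p, 3))  # converges to the clearing price where 10 - p = 2p, i.e. 10/3
```

The Pareto-optimality claim criticized in the text is, of course, about what holds at such a clearing price under many further assumptions; the sketch only shows the equilibration step itself.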
There are many reasons why this argument is flawed. Let me mention just one. If even one market is not perfectly competitive, but instead is dominated by a monopolist or partial monopolist, then even by the standards of economists a market system does not deliver Pareto optimality, and an alternative system might be more efficient. And in practice capitalists constantly strive to create monopolies, and frequently succeed! Even the Financial Times recognises this: in today’s issue (Sep 15 2016) Philip Stephens argues, “Once in a while capitalism has to be rescued from the depredations of, well, capitalists. Unconstrained, enterprise curdles into monopoly, innovation into rent-seeking. Today’s swashbuckling “disrupters” set up tomorrow’s cosy cartels. Capitalism works when someone enforces competition; and successful capitalists do not much like competition”.
So the argument for Pareto optimality of real market systems is patently false, but it continues to be trotted out constantly. It is presented as if it provides an ethical justification for the market economy, but its real function is to suppress discussion of economic ethics: if the market is inherently good for everyone then, it seems, we don’t need to worry about the ethics of who gets what any more.
The Marxist tradition likewise sees one central mechanism in the economy: the extraction of surplus from wage labour by capitalists. Their analysis of this mechanism depends on the labour theory of value, which is no more tenable than mainstream theories of Pareto optimality (for reasons I discuss in the book). Marxists consistently argue as if any such extraction is ethically reprehensible. Marx himself never provides an ethical justification for such a view. On the contrary, he claims that this is a scientific argument and disowns any ethical intent. Yet it functions in just the same way as the argument for Pareto optimality: instead of encouraging ethical debate about who should get what in the economy, Marxists reduce economic ethics to the single question of the need to prevent exploitation (narrowly conceived) of productive workers.
We need to sweep away both of these apologetics, and recognise that questions of who gets what are ethical issues that are fundamental to justice, legitimacy, and political progress in contemporary societies. And that they are questions that don’t have easy ‘one argument fits all’ answers. To make progress on them we will have to make arguments about what people need and deserve that recognise the complexity of their social situations. But it doesn’t take a great deal of ethical sophistication to recognise that the 1% have too much when many in the lower deciles are seriously impoverished, and that the forms of impoverishment extend well beyond underpaying for productive labour.
I’m afraid that I have written much more than I intended to, and still said very little about the steps I’ve taken in the book towards a more open and plausible way of theorising how the economy works. I hope that I’ve at least added some more depth to the reasons Dan picked out for attempting that task.
Monday, September 05, 2016
Sociologists on economics. This is by Daniel Little:
Capitalism as a heterogeneous set of practices: A key part of understanding society is making sense of the "economy" in which we live. But what is an economy? Existing economic theories attempt to answer this question with simple unified theories. The economy is a profit-driven market system of firms, workers, and consumers. The economy is a property system dependent upon the expropriation of surplus labor. The economy is a system of expropriation more or less designed to create great inequalities of income, wealth, and well-being. The economy is a system of exploitation and domination.
In Profit and Gift in the Digital Economy Dave Elder-Vass argues that these simple theories, largely the product of the nineteenth century, are flawed in several fundamental ways. First, they are all simple and unitary in a heterogeneous world. Economic transactions take a very wide variety of forms in the modern world. But more fundamentally, these existing theories fail completely to provide a theoretical vocabulary for describing what are now enormously important parts of our economic lives. One well-known blind spot is the domestic economy -- work and consumption within the household. But even more striking is the inadequacy of existing economic theories to make sense of the new digital world -- Google, Apple, Wikipedia, blogging, or YouTube. Elder-Vass's current book offers a new way of thinking about our economic lives and institutions. And he believes that this new way lays a basis for more productive thinking about a more humane future for all of us than is offered by either neoliberalism or Marxism.
What E-V offers is the idea of economic life as a jumble of "appropriative" practices -- practices that get organized and deployed in different combinations, and that have better and worse implications for human well-being.

From this perspective it becomes possible to see our economy as a complex ecosystem of competing and interacting economic forms, each with their own strengths and weaknesses, and to develop a progressive politics that seeks to reshape that ecosystem rather than pursuing the imaginary perfection of one single universal economic form. (5)

The argument here is that we can understand the economy better by seeing it as a diverse collection of economic forms, each of which can be characterised as a particular complex of appropriative practices -- social practices that influence the allocation of benefits from the process of production. (9)

Economies are not monoliths but diverse mixtures of varying economic forms. To understand and evaluate economic phenomena, then, we need to be able to describe and analyse these varying forms in conjunction with each other. (96)
Capitalism is not a single, unitary "mode of production," but rather a concatenation of multiple forms and practices. E-V believes that the positions offered here align well with the theories of critical realism that he has helped to elaborate in earlier books (19-20) (link, link). We can be realist in our investigations of the causal properties of the economic practices he identifies.
This way of thinking about economic life is very consistent with several streams of thought in Understanding Society -- the idea of social heterogeneity (link), the idea of assemblage (link), and a background mistrust of comprehensive social theories (link). (Here is an earlier post on "Capitalism 2.0" that is also relevant to the perspective and issues Elder-Vass brings forward; link.)
The central new element in contemporary economic life that needs treatment by an adequate political economy is the role that large digital enterprises play in the contemporary world. These enterprises deal in intangible products; they often involve a vast proportion of algorithmic transformation rather than human labor; and to a degree unprecedented in economic history, they depend on "gift" transactions at every level. Internet companies like Google give free search and maps, and bloggers and videographers give free content. And yet these gifts have none of the attributes of traditional gift communities -- there is no community, no explicit reciprocity, and little face-to-face interaction. E-V goes into substantial detail on several of these new types of enterprises, and does the work of identifying the "economic practices" upon which they depend.
In particular, E-V considers whether the gift relation familiar from anthropologists like Marcel Mauss and economic sociologists like Karl Polanyi can shed useful light on the digital economy. But the lack of reciprocity and face-to-face community leads him to conclude that the theory is unpersuasive as a way of understanding the digital economy (86).
It is noteworthy that E-V's description of appropriative practices is primarily allocative; it pays little attention to the organization of production. It is about "who receives the benefits" (10) but not so much about "how activity and labor are coordinated, managed, and deployed to produce the stuff". Marx gained the greatest insights in Capital, not from the simple mathematics of the labor theory of value, but from his investigations of the conditions of work and the schemes of management to which labor was subject in the nineteenth-century factory. The ideas of alienation, domination, and exploitation are very easy to understand in that context. But it would seem that there are similar questions to ask about the digital economy shops of today. The New York Times' reportage of working conditions within the Amazon organization seems to reveal a very similar logic (link). And how about the high-tech sweat shops described in a 2009 Bloomberg investigation (link)?
Elder-Vass believes that a better understanding of our existing economic practices can give rise to a more effective set of strategies for creating a better future. E-V's vision for creating a better future depends on a selective pruning of the more destructive practices and cultivation of the more positive practices. He is appreciative of the "real utopias" project (36) (link) and also of the World Social Forum.

This means growing some progressive alternatives but also cutting back some regressive ones. It entails being open to a wide range of alternatives, including the possibility that there might be some valuable continuing role for some forms of capitalism in a more adequate mixed economy of practices. (15)
Or in other words, E-V advocates for innovative social change -- recognizing the potential in new forms and cultivating existing forms of economic activity. Marxism has been the impetus of much thinking about progressive change in the past century; but E-V argues that this perspective too is limited:

Marxism itself has become an obstacle to thinking creatively about the economy, not least because it is complicit in the discourse of the monolithic capitalist market economy that we must now move beyond.... Marx's labour theory of value ... tends to support the obsessive identification of capitalism with wage labour. As a consequence Marxists have failed to recognise that capitalism has developed new forms of making profit that do not fit with the classic Marxist model, including many that have emerged and prospered in the new digital economy. (45)
This is not a wholesale rejection of Marx's thought; but it is a well-justified critique of the lingering dogmatism of this tradition. Though E-V does not make reference to current British politics in the book, these comments seem very appropriate in appraisal of the approach to change championed by Labour leader Jeremy Corbyn.
E-V shows a remarkable range of expertise in this work. His command of recent Marxian thinking about contemporary capitalism is deep. But he has also gone deeply into the actual practices of the digital economy -- the ways Google makes profits, the incentives and regulations that sustain Wikipedia, the handful of distinctive business practices that have made Apple one of the world's largest companies. The book is a work of theory and a work of empirical investigation as well.
Profit and Gift in the Digital Economy is a book with a big and important idea -- bigger really than the title implies. The book demands a substantial shift in the way that economists think about the institutions and practices through which the global economy works. More fundamentally, it asks that we reconsider the idea of "economy" altogether, and abandon the notion that there is a single unitary economic practice or institution that defines modern capitalism -- whether market, wage labor, or trading system. Instead, we should focus on the many distinct but interconnected practices that have been invented and stitched together in the many parts of society to solve particular problems of production, consumption, and appropriation, and that as an aggregate make up "the economy". The economy is an assemblage, not a designed system, and reforming this agglomeration requires shifting the "ecosystem" of practices in a direction more favorable to human flourishing.
Sunday, September 04, 2016
Telling macro stories with micro: Don't let the equations, data, or jargon fool you, economists are avid storytellers. Our "stories" may not fit neatly in the seven universal plots but after a while it's easy to spot some patterns. A good story paper in economics, according to David Romer, has three characteristics: a viewpoint, a lever, and a result.
Most blog or media coverage of an economics paper focuses on the result. Makes sense given the audience but buyer beware. Economists dissecting a paper spend more time on the lever, the how-did-they-get-the-result part. And coming up with new levers is a big chunk of research. The viewpoint--the underlying assumptions, the what's-central-to-the-story--tends to get short shrift. Of course, the viewpoint matters (often that's what defines a story as economics), but it usually holds across many papers. Best to focus on the new stuff.
Except when the viewpoint comes under scrutiny, then the stories can really change. ...
How much does micro matter for macro?
One long-standing viewpoint in economics is that changes in the macro-economy can largely be understood by studying changes in macro aggregates. Ironically, this viewpoint even survived macro's push to micro foundations with a "representative agent" stepping in as the missing link between aggregate data and micro theory. As a macro forecaster, I understand the value of the aggregates-only simplification. As an applied micro researcher, I am pretty sure it fails us from time to time. Thankfully, an ever-growing body of research and commentary is helping to identify times when differences at the micro level are relevant for macro outcomes. This is not new--issues of aggregation in macro go waaay back--but our levers, with rich, timely micro data and high-powered computation, are improving rapidly.
I focus in this post on differences in household behavior, particularly related to consumer spending, since that's the area I know best. And I want to discuss results from an ambitious new paper: "Macroeconomics and Household Heterogeneity" by Krueger, Mitman, and Perri. tldr: I am skeptical of their results, above all the empirics, but I really like what they are trying to do: shift the macro viewpoint. More on this paper below, but I also want to set it in the context of macro storytelling. ...
There's quite a bit more.
Monday, August 29, 2016
Complexity and Economic Policy, OECD Insights: ...Economic theory has... developed increasingly sophisticated models to justify the contention that individuals left to their own devices will self organise into a socially desirable state. However, in so doing, it has led us to a view of the economic system that is at odds with what has been happening in many other disciplines.
Although in fields such as statistical physics, ecology and social psychology it is now widely accepted that systems of interacting individuals will not have the sort of behaviour that corresponds to that of one average or typical particle or individual, this has not had much effect on economics. Whilst those disciplines moved on to study the emergence of non-linear dynamics as a result of the complex interaction between individuals, economists relentlessly insisted on basing their analysis on that of rational optimising individuals behaving as if they were acting in isolation. ...
Yet this paradigm is neither validated by empirical evidence nor does it have sound theoretical foundations. It has become an assumption. ...
As soon as one considers the economy as a complex adaptive system in which the aggregate behaviour emerges from the interaction between its components, no simple relation between the individual participant and the aggregate can be established. Because of all the interactions and the complicated feedbacks between the actions of the individuals and the behaviour of the system there will inevitably be “unforeseen consequences” of the actions taken by individuals, firms and governments. Not only the individuals themselves but the network that links them changes over time. The evolution of such systems is intrinsically difficult to predict, and for policymakers this means that assertions such as “this measure will cause that outcome” have to be replaced with “a number of outcomes are possible and our best estimates of the probabilities of those outcomes at the current point are…”. ...
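The claim that "no simple relation between the individual participant and the aggregate can be established" has a minimal numerical illustration. This sketch is mine, not the author's, and the concave consumption rule and income figures are invented: whenever individual responses are nonlinear, the average behavior of heterogeneous agents differs from the behavior of one "average" agent (Jensen's inequality).

```python
import math

def consumption(income):
    # hypothetical concave individual response: spending rises with income,
    # but less than proportionally
    return math.sqrt(income)

incomes = [1.0, 4.0, 9.0, 16.0]              # four heterogeneous households
mean_income = sum(incomes) / len(incomes)    # the "representative" income, 7.5

rep_agent = consumption(mean_income)         # behavior of one average agent
aggregate = sum(consumption(y) for y in incomes) / len(incomes)  # true average behavior

print(rep_agent > aggregate)  # True: the representative agent is not the average of the agents
```

With interactions and changing networks added on top, as the excerpt argues, the gap between the individual rule and the aggregate outcome only widens.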
...in trying to stabilise such systems it is an error to focus on one variable either to control the system or to inform us about its evolution. Single variables such as the interest rate do not permit sufficient flexibility for policy actions and single performance measures such as the unemployment rate or GDP convey too little information about the state of the economy.
Tuesday, August 16, 2016
A relatively long article by Raphaële Chappe at INET:
General Equilibrium Theory: Sound and Fury, Signifying Nothing?: Does general equilibrium theory sufficiently enhance our understanding of the economic process to make the entire exercise worthwhile, if we consider that other forms of thinking may have been ‘crowded out’ as a result of its being the ‘dominant discourse’? What, in the end, have we really learned from it? ...
Monday, August 15, 2016
Here is what I like and have found most useful about Dynamic Stochastic General Equilibrium (DSGE) models, also known as New Keynesian (NK) models. The original NK models were low dimensional – the simplest version reduces to a 3-equation model, while DSGE models are now typically much more elaborate. What I find attractive about these models can be stated in terms of the basic NK/DSGE model.
First, because it is a carefully developed, micro-founded model incorporating price frictions, the NK model makes it possible to incorporate in a disciplined way the various additional sectors, distortions, adjustment costs, and parametric detail found in many NK/DSGE models. Theoretically this is much more attractive than starting with a reduced form IS-LM model and adding features in an ad hoc way. (At the same time I still find ad hoc models useful, especially for teaching and informal policy analysis, and the IS-LM model is part of the macroeconomics canon).
Second, and this is particularly important for my own research, the NK model makes explicit and gives a central role to expectations about future economic variables. The standard linearized three-equation NK model in output, inflation and interest rates has current output and inflation depending in a specified way on expected future output and inflation. The dependence of output on expected future output and future inflation comes through the household dynamic optimization condition, and the dependence of inflation on expected future inflation arises from the firm’s optimal pricing equation. The NK model thus places expectations of future economic variables front and center, and does so in a disciplined way.
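For concreteness, the linearized three-equation system described here is usually written as follows (standard textbook notation; timing and shock conventions vary across papers):

```latex
% x_t: output gap, \pi_t: inflation, i_t: nominal interest rate, r_t^n: natural rate
\begin{aligned}
x_t   &= \mathbb{E}_t x_{t+1} - \tfrac{1}{\sigma}\,(i_t - \mathbb{E}_t \pi_{t+1} - r_t^n)
        && \text{(IS curve, from household optimization)} \\
\pi_t &= \beta\,\mathbb{E}_t \pi_{t+1} + \kappa\, x_t
        && \text{(Phillips curve, from firm pricing)} \\
i_t   &= \phi_\pi \pi_t + \phi_x x_t
        && \text{(interest rate rule)}
\end{aligned}
```

The first two equations display exactly the dependence of current output and inflation on expected future output and inflation that is being emphasized.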
Third, while the NK model is typically solved under rational expectations (RE), it can also be viewed as providing the temporary equilibrium framework for studying the system under relaxations of the RE hypothesis. I particularly favor replacing RE with boundedly rational adaptive learning and decision-making (AL). Incorporating AL is especially fruitful in cases where there are multiple RE solutions, and AL brings out many Keynesian features of the NK model that extend IS-LM. In general I have found micro-founded macro models of all types to be ideal for incorporating bounded rationality, which is most naturally formulated at the agent level.
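What replacing RE with adaptive learning looks like can be sketched in a toy scalar model (my own illustration, not from the post; the model pi_t = mu + alpha*(expected pi), the constant-gain rule, and all parameter values are invented for the example):

```python
# Constant-gain adaptive learning in the scalar expectational model
#   pi_t = mu + alpha * (forecast of pi).
# The rational-expectations fixed point is pi* = mu / (1 - alpha); under
# learning, agents revise a forecast b toward realized outcomes and, for
# alpha < 1, the forecast converges to (a neighborhood of) pi*.

def simulate_learning(mu=2.0, alpha=0.5, gain=0.05, periods=2000, b0=0.0):
    """Return the path of the agents' forecast b_t."""
    b = b0
    path = []
    for _ in range(periods):
        pi = mu + alpha * b        # outcome generated by today's forecast
        b += gain * (pi - b)       # constant-gain revision toward the outcome
        path.append(b)
    return path

path = simulate_learning()
print(round(path[-1], 3))  # approaches the RE fixed point 2.0 / (1 - 0.5) = 4.0
```

In richer models with multiple RE solutions, the same revision step is what selects among them, which is why AL is especially fruitful in those cases.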
Fourth, while the profession as a whole seemed to many of us slow to appreciate the implications of the NK model for policy during and following the financial crisis, this was not because the NK model was intrinsically defective (the neglect of financial frictions by most, though not all, DSGE modelers was also a deficiency in most textbook IS-LM models). This was really, I think, because many macroeconomists using NK models in 2007-8 did not fully appreciate the Keynesian mechanisms present in the model.
However, many of us were alert to the NK model fiscal policy implications during the crisis. For example, in Evans, Guse and Honkapohja (“Liquidity traps, learning and stagnation,” 2008, European Economic Review), using an NK model with multiple RE solutions because of the liquidity trap, we showed, using the AL approach to expectations, that when there is a very large negative expectations shock, fiscal as well as monetary stimulus may be needed, and indeed a temporary fiscal stimulus that is large enough and early enough may be critical for avoiding a severe recession or depression. Of course such an argument could have been made using extensions of the ad hoc IS-LM model, but my point is that this policy implication was ready to be found in the NK model, and the key results center on the primacy of expectations.
Finally, it should go without saying that NK/DSGE modeling should not be the one and only style. Most graduate-level core macro courses teach a wide range of macro models, and I see a diversity of innovations at the research frontier that will continue to keep macroeconomics vibrant and relevant.
Sunday, June 19, 2016
The New Keynesian model is fairly pliable, and adding bells and whistles can help it to explain most of the data we see, at least after the fact. Does that mean we should be more confident in its ability to "embody any useful principle," or less?:
... A famous example of different pictures of reality is the model introduced around AD 150 by Ptolemy (ca. 85—ca. 165) to describe the motion of the celestial bodies. ... In Ptolemy’s model the earth stood still at the center and the planets and the stars moved around it in complicated orbits involving epicycles, like wheels on wheels. ...
It was not until 1543 that an alternative model was put forward by Copernicus... Copernicus, like Aristarchus some seventeen centuries earlier, described a world in which the sun was at rest and the planets revolved around it in circular orbits. ...
So which is real, the Ptolemaic or Copernican system? Although it is not uncommon for people to say that Copernicus proved Ptolemy wrong, that is not true..., our observations of the heavens can be explained by assuming either the earth or the sun to be at rest. Despite its role in philosophical debates over the nature of our universe, the real advantage of the Copernican system is simply that the equations of motion are much simpler in the frame of reference in which the sun is at rest.... Elegance ... is not something easily measured, but it is highly prized among scientists because laws of nature are meant to economically compress a number of particular cases into one simple formula. Elegance refers to the form of a theory, but it is closely related to a lack of adjustable elements, since a theory jammed with fudge factors is not very elegant. To paraphrase Einstein, a theory should be as simple as possible, but not simpler. Ptolemy added epicycles to the circular orbits of the heavenly bodies in order that his model might accurately describe their motion. The model could have been made more accurate by adding epicycles to the epicycles, or even epicycles to those. Though added complexity could make the model more accurate, scientists view a model that is contorted to match a specific set of observations as unsatisfying, more of a catalog of data than a theory likely to embody any useful principle. ...
[S]cientists are always impressed when new and stunning predictions prove correct. On the other hand, when a model is found lacking, a common reaction is to say the experiment was wrong. If that doesn’t prove to be the case, people still often don’t abandon the model but instead attempt to save it through modifications. Although physicists are indeed tenacious in their attempts to rescue theories they admire, the tendency to modify a theory fades to the degree that the alterations become artificial or cumbersome, and therefore “inelegant.” If the modifications needed to accommodate new observations become too baroque, it signals the need for a new model. ...
[Hawking, Stephen; Mlodinow, Leonard (2010-09-07). The Grand Design. Random House, Inc.. Kindle Edition.]
Wednesday, January 13, 2016
Is mainstream academic macroeconomics eclectic?: For economists, and those interested in macroeconomics as a discipline
Eric Lonergan has a short little post that is well worth reading... It makes an important point in a clear and simple way that cuts through a lot of the nonsense written on macroeconomics nowadays. The big models/schools of thought are not right or wrong, they are just more or less applicable to different situations. You need New Keynesian models in recessions, but Real Business Cycle models may describe some inflation free booms. You need Minsky in a financial crisis, and in order to prevent the next one. As Dani Rodrik says, there are many models, and the key questions are about their applicability.
If we take that as given, the question I want to ask is whether current mainstream academic macroeconomics is also eclectic. ... My answer is yes and no.
Let’s take the five ‘schools’ that Eric talks about. ... Indeed the variety of models that academic macro currently uses is far wider than this.
Does this mean academic macroeconomics is fragmented into lots of cliques, some big and some small? Not really... This is because these models (unlike those of 40+ years ago) use a common language. ...
It means that the range of assumptions that models (DSGE models if you like) can make is huge. There is nothing formally that says every model must contain perfectly competitive labour markets where the simple marginal product theory of distribution holds, or even where there is no involuntary unemployment, as some heterodox economists sometimes assert. Most of the time individuals in these models are optimising, but I know of papers in the top journals that incorporate some non-optimising agents into DSGE models. So there is no reason in principle why behavioural economics could not be incorporated. If too many academic models do appear otherwise, I think this reflects the sociology of macroeconomics and the history of macroeconomic thought more than anything (see below).
It also means that the range of issues that models (DSGE models) can address is also huge. ...
The common theme of the work I have talked about so far is that it is microfounded. Models are built up from individual behaviour.
You may have noted that I have so far missed out one of Eric’s schools: Marxian theory. What Eric wants to point out here is clear in his first sentence. “Although economists are notorious for modelling individuals as self-interested, most macroeconomists ignore the likelihood that groups also act in their self-interest.” Here I think we do have to say that mainstream macro is not eclectic. Microfoundations is all about grounding macro behaviour in the aggregate of individual behaviour.
I have many posts where I argue that this non-eclecticism in terms of excluding non-microfounded work is deeply problematic. Not so much for an inability to handle Marxian theory (I plead agnosticism on that), but in excluding the investigation of other parts of the real macroeconomic world. ...
The confusion goes right back, as I will argue in a forthcoming paper, to the New Classical Counter Revolution of the 1970s and 1980s. That revolution, like most revolutions, was not eclectic! It was primarily a revolution about methodology, about arguing that all models should be microfounded, and in terms of mainstream macro it was completely successful. It also tried to link this to a revolution about policy, about overthrowing Keynesian economics, and this ultimately failed. But perhaps as a result, methodology and policy get confused. Mainstream academic macro is very eclectic in the range of policy questions it can address, and conclusions it can arrive at, but in terms of methodology it is quite the opposite.
Friday, January 08, 2016
Sunday, January 03, 2016
Musings on Whether We Consciously Know More or Less than What Is in Our Models…: Larry Summers presents as an example of his contention that we know more than is in our models–that our models are more a filing system, and more a way of efficiently conveying part of what we know, than they are an idea-generating mechanism–Paul Krugman’s Mundell-Fleming lecture, and its contention that floating exchange-rate countries that can borrow in their own currency should not fear capital flight in a liquidity trap. He points to Olivier Blanchard et al.’s empirical finding that capital outflows do indeed appear to be not expansionary but contractionary ...
[There's quite a bit more in Brad's post.]
Wednesday, December 30, 2015
I asked my colleagues George Evans and Bruce McGough if they would like to respond to a recent post by Simon Wren-Lewis on "Woodford’s reflexive equilibrium" approach to learning:
The neo-Fisherian view and the macro learning approach
George W. Evans and Bruce McGough
Economics Department, University of Oregon
December 30, 2015
Cochrane (2015) argues that low interest rates are deflationary — a view that is sometimes called neo-Fisherian. In this paper John Cochrane argues that raising the interest rate and pegging it at a higher level will raise the inflation rate in accordance with the Fisher equation, and works through the details of this in a New Keynesian model.
Garcia-Schmidt and Woodford (2015) argue that the neo-Fisherian claim is incorrect and that low interest rates are both expansionary and inflationary. In making this argument Mariana Garcia-Schmidt and Michael Woodford use an approach that has a lot of common ground with the macro learning literature, which focuses on how economic agents might come to form expectations, and in particular whether coordination on a particular rational expectations equilibrium (REE) is plausible. This literature examines the stability of an REE under learning and has found that interest-rate pegs of the type discussed by Cochrane lead to REE that are not stable under learning. Garcia-Schmidt and Woodford (2015) obtain an analogous instability result using a new bounded-rationality approach that provides specific predictions for monetary policy. There are novel methodological and policy results in the Garcia-Schmidt and Woodford (2015) paper. However, we will here focus on the common ground with other papers in the learning literature that also argue against the neo-Fisherian claim.
The macro learning literature posits that agents start with boundedly rational expectations e.g. based on possibly non-RE forecasting rules. These expectations are incorporated into a “temporary equilibrium” (TE) environment that yields the model’s endogenous outcomes. The TE environment has two essential components: a decision-theoretic framework which specifies the decisions made by agents (households, firms etc.) given their states (values of exogenous and pre-determined endogenous state variables) and expectations;1 and a market-clearing framework that coordinates the agents’ decisions and determines the values of the model’s endogenous variables. It is useful to observe that, taken together, the two components of the TE environment yield the “TE-map” that takes expectations and (aggregate and idiosyncratic) states to outcomes.
The adaptive learning framework, which is the most popular formulation of learning in macro, proceeds recursively. Agents revise their forecast rules in light of the data realized in the previous period, e.g. by updating their forecast rules econometrically. The exogenous shocks are then realized, expectations are formed, and a new temporary equilibrium results. The equilibrium path under learning is defined recursively. One can then study whether the economy under adaptive learning converges over time to the REE of interest.2
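As a concrete illustration of this recursion, here is a minimal sketch of adaptive learning in a simple self-referential model. All names and parameter values are my own illustrative assumptions, not taken from the papers under discussion: the temporary-equilibrium (TE) map takes the current forecast to an outcome, and agents then revise the forecast by a decreasing-gain (sample-mean) rule.

```python
import numpy as np

# Stylized self-referential model (assumed parameters):
#   outcome_t = mu + alpha * forecast_t + shock_t      (the TE map)
# Agents update their forecast of the outcome recursively each period.
rng = np.random.default_rng(0)
mu, alpha, sigma = 2.0, 0.5, 0.1   # REE mean outcome = mu / (1 - alpha) = 4.0
forecast = 0.0                     # initial boundedly rational (non-RE) forecast

for t in range(1, 50001):
    shock = rng.normal(0.0, sigma)
    outcome = mu + alpha * forecast + shock     # temporary equilibrium given expectations
    forecast += (outcome - forecast) / t        # decreasing-gain recursive update

print(forecast)  # approaches the REE value 4.0 as t grows
```

Because alpha < 1 here, the E-stability condition holds and the forecast converges to the rational expectations value; the recursion itself is the "equilibrium path under learning" described above.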
The essential point of the learning literature is that an REE, to be credible, needs an explanation for how economic agents come to coordinate on it. This point is acute in models in which there are multiple RE solutions, as can arise in a wide range of dynamic macro models. This has been an issue in particular in the New Keynesian model, but it also arises, for example, in overlapping generations models and in RBC models with distortions. The macro learning literature provides a theory for how agents might learn over time to forecast rationally, i.e. to come to have rational expectations (RE). The adaptive learning approach finds that agents will over time come to have RE by updating their econometric forecasting models, provided the REE satisfies “expectational stability” (E-stability) conditions. If these conditions are not satisfied then convergence to the REE will not occur, and hence it is implausible that agents would be able to coordinate on the REE. E-stability then also acts as a selection device in cases in which there are multiple REE.
The adaptive learning approach has the attractive feature that the degree of rationality of the agents is natural: though agents are boundedly rational, they are still fairly sophisticated, estimating and updating their forecasting models using statistical learning schemes. For a wide range of models this gives plausible results. For example, in the basic Muth cobweb model, the REE is learnable if supply and demand have their usual slopes; however, the REE, though still unique, is not learnable if the demand curve is upward sloping and steeper than the supply curve. In an overlapping generations model, Lucas (1986) used an adaptive learning scheme to show that though the overlapping generations model of money has multiple REE, learning dynamics converge to the monetary steady state, not to the autarky solution. Early analytical adaptive learning results were obtained in Bray and Savin (1986) and the formal framework was greatly extended in Marcet and Sargent (1989). The book by Evans and Honkapohja (2001) develops the E-stability principle and includes many applications. Many more applications of adaptive learning have been published over the last fifteen years.
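The cobweb result can be seen at work in a small simulation. In the reduced form p_t = mu + alpha * E_{t-1}[p_t] + eta_t (with alpha derived from the demand and supply slopes), the E-stability condition is alpha < 1: the usual slopes give alpha < 0 and learnability, while an upward-sloping demand curve steeper than supply gives alpha > 1 and divergence. The parameter values below are my own illustrative assumptions.

```python
import numpy as np

# Least-squares learning in the Muth cobweb reduced form (assumed parameters):
#   p_t = mu + alpha * E_{t-1}[p_t] + eta_t,  E-stability condition: alpha < 1.
def learn(alpha, mu=2.0, sigma=0.1, T=5000, seed=1):
    rng = np.random.default_rng(seed)
    exp_p = 0.0                       # initial price forecast
    for t in range(1, T + 1):
        price = mu + alpha * exp_p + rng.normal(0.0, sigma)
        exp_p += (price - exp_p) / t  # decreasing-gain (sample-mean) update
    return exp_p

stable = learn(alpha=-0.5)   # usual slopes: forecast -> REE price mu/(1-alpha) = 4/3
unstable = learn(alpha=1.5)  # steep upward-sloping demand: forecast diverges
print(stable, unstable)
```

The first forecast settles near the unique REE price; the second wanders ever further from it, even though the REE is still unique, which is exactly the sense in which E-stability selects learnable equilibria.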
There are other approaches to learning in macro that have a related theoretical motivation, e.g. the “eductive” approach of Guesnerie asks whether mental reasoning by hyper-rational agents, with common knowledge of the structure and of the rationality of other agents, will lead to coordination on an REE. A fair amount is known about the connections between the stability conditions of the alternative adaptive and eductive learning approaches.3 The Garcia-Schmidt and Woodford (2015) “reflective equilibrium” concept provides a new approach that draws on both the adaptive and eductive strands as well as on the “calculation equilibrium” learning model of Evans and Ramey (1992, 1995, 1998). These connections are outlined in Section 2 of Garcia-Schmidt and Woodford (2015).4
The key insight of these various learning approaches is that one cannot simply take RE (which in the nonstochastic case reduces to PF, i.e. perfect foresight) as given. An REE is an equilibrium that begs an explanation for how it can be attained. The various learning approaches rely on a temporary equilibrium framework, outlined above, which goes back to Hicks (1946). A big advantage of the TE framework, when developed at the agent level and aggregated, is that in conjunction with the learning model an explicit causal story can be developed for how the economy evolves over time.
The lack of a TE or learning framework in Cochrane (2011, 2015) is a critical omission. Cochrane (2009) criticized the Taylor principle in NK models as requiring implausible assumptions on what the Fed would do to enforce its desired equilibrium path; however, this view simply reflects the lack of a learning perspective. McCallum (2009) argued that for a monetary rule satisfying the Taylor principle the usual RE solution used by NK modelers is stable under adaptive learning, while the non-fundamental bubble solution is not. Cochrane (2009, 2011) claimed that these results hinged on the observability of shocks. In our paper “Observability and Equilibrium Selection,” Evans and McGough (2015b), we develop the theory of adaptive learning when fundamental shocks are unobservable, and then, as a central application, we consider the flexible-price NK model used by Cochrane and McCallum in their debate. We carefully develop this application using an agent-level temporary equilibrium approach and closing the model under adaptive learning. We find that if the Taylor principle is satisfied, then the usual solution is robustly stable under learning, while the non-fundamental price-level bubble solution is not. Adaptive learning thus operates as a selection criterion and it singles out the usual RE solution adopted by proponents of the NK model. Furthermore, when monetary policy does not obey the Taylor principle then neither of the solutions is robustly stable under learning; an interest-rate peg is an extreme form of such a policy, and the adaptive learning perspective cautions that this will lead to instability. We discuss this further below.
The agent-level/adaptive learning approach used in Evans and McGough (2015b) allows us to specifically address several points raised by Cochrane. He is concerned that there is no causal mechanism that pins down prices. The TE map provides this, in the usual way, through market clearing given expectations of future variables. Cochrane also states that the lack of a mechanism means that the NK paradigm requires that the policymakers be interpreted as threatening to “blow up” the economy if the standard solution is not selected by agents.5 This is not the case. As we say in our paper (p. 24-5), “inflation is determined in temporary equilibrium, based on expectations that are revised over time in response to observed data. Threats by the Fed are neither made nor needed ... [agents simply] make forecasts the same way that time-series econometricians typically forecast: by estimating least-squares projections of the variables being forecasted on the relevant observables.”
Let us now return to the issue of interest rate pegs and the impact of changing the level of an interest rate peg. The central adaptive learning result is that interest rate pegs give REE that are unstable under learning. This result was first given in Howitt (1992). A complementary result was given in Evans and Honkapohja (2003) for time-varying interest rate pegs designed to optimally respond to fundamental shocks. As discussed above, Evans and McGough (2015b) show that the instability result also obtains when the fundamental shocks are not observable and the Taylor principle is not satisfied. The economic intuition in the NK model is very strong and is essentially as follows. Suppose we are at an REE (or PFE) at a fixed interest rate and with expected inflation at the level dictated by the Fisher equation. Suppose that there is a small increase in expected inflation. With a fixed nominal interest rate this leads to a lower real interest rate, which increases aggregate demand and output. This in turn leads to higher inflation, which under adaptive learning leads to higher expected inflation, destabilizing the system. (The details of the evolution of expectations and the model dynamics depend, of course, on the precise decision rules and econometric forecasting model used by agents). In an analogous way, expected inflation slightly lower than the REE/PFE level leads to cumulatively lower levels of inflation, output and expected inflation.
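The economic intuition above can be illustrated with a deliberately stylized linear sketch. The reduced form and all parameter values below are my own simplifying assumptions, not the models in the papers cited: combining a Phillips curve pi = beta*pi_e + kappa*x with an IS relation x = -sigma*(i - pi_e - r_nat) at a fixed nominal rate i gives temporary-equilibrium inflation responding to expected inflation with a slope greater than one, so adaptive expectations move cumulatively away from the steady state consistent with the peg.

```python
# Stylized NK reduced form at a fixed interest-rate peg (assumed parameters):
#   pi_t = (beta + kappa*sigma) * pi_e_t - kappa*sigma * (i - r_nat)
beta, kappa_sigma, gamma = 0.99, 0.10, 0.05
i_minus_r = 0.02                       # nominal rate minus natural real rate
slope = beta + kappa_sigma             # 1.09 > 1: instability under learning
pi_star = kappa_sigma * i_minus_r / (slope - 1.0)  # steady-state inflation at the peg

def expected_inflation_path(pi_e, T=1000):
    """Constant-gain adaptive learning: pi_e' = pi_e + gamma*(pi - pi_e)."""
    for _ in range(T):
        pi = slope * pi_e - kappa_sigma * i_minus_r   # temporary equilibrium
        pi_e += gamma * (pi - pi_e)                   # expectations revision
    return pi_e

up = expected_inflation_path(pi_star + 0.001)    # slight optimism: cumulative rise
down = expected_inflation_path(pi_star - 0.001)  # slight pessimism: cumulative fall
print(up > pi_star, down < pi_star)              # prints: True True
```

Each period the deviation from the steady state is multiplied by 1 + gamma*(slope - 1) > 1, which is the destabilizing feedback loop described in the text: higher expected inflation lowers the real rate, raises demand and inflation, and so raises expected inflation further.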
Returning to the NK model, additional insight is obtained by considering a nonlinear NK model with a global Taylor rule that leads to two steady states. This model was studied by Benhabib, Schmitt-Grohe and Uribe in a series of papers, e.g. Benhabib, Schmitt-Grohe, and Uribe (2001), which show that with an interest-rate rule obeying the Taylor principle at the target inflation rate, the zero lower bound (ZLB) on interest rates implies the existence of an unintended PFE low-inflation or deflation steady state (and indeed a continuum of PFE paths to it) at which the Taylor principle does not hold (a special case of which is a local interest rate peg at the ZLB). From a PF/RE viewpoint these are all valid solutions. From the adaptive learning perspective, however, they differ in terms of stability. Evans, Guse, and Honkapohja (2008) and Benhabib, Evans, and Honkapohja (2014) show that the targeted steady state is locally stable under learning with a large basin of attraction, while the unintended low inflation/deflation steady state is not locally stable under learning: small deviations from it lead either back to the targeted steady state or into a deflation trap, in which inflation and output fall over time. From a learning viewpoint this deflation trap should be a major concern for policy.6,7
Finally, let us return to Cochrane (2015). Cochrane points out that at the ZLB peg there has been low but relatively steady (or gently declining) inflation in the US, rather than a serious deflationary spiral. This point echoes Jim Bullard’s concern in Bullard (2010) about the adaptive learning instability result: we effectively have an interest rate peg at the ZLB but we seem to have a fairly stable inflation rate, so does this indicate that the learning literature may here be on the wrong track?
This issue is addressed by Evans, Honkapohja, and Mitra (2015) (EHM2015). They first point out that from a policy viewpoint the major concern at the ZLB has not been low inflation or deflation per se. Instead it is its association with low levels of aggregate output, high levels of unemployment and a more general stagnation. However, the deflation steady state at the ZLB in the NK model has virtually the same level of aggregate output as the targeted steady state. The PFE at the ZLB interest rate peg is not a low level output equilibrium, and if we were in that equilibrium there would not be the concern that policy-makers have shown. (Temporary discount rate or credit market shocks of course can lead to recession at the ZLB but their low output effects vanish as soon as the shocks vanish).
In EHM2015 steady mild deflation is consistent with low output and stagnation at the ZLB.8 They note that many commentators have remarked that the behavior of the NK Phillips relation is different from standard theory at very low output levels. EHM2015 therefore imposes lower bounds on inflation and consumption, which can become relevant when agents become sufficiently pessimistic. If the inflation lower bound is below the unintended low steady state inflation rate, a third “stagnation” steady state is created at the ZLB. The stagnation steady state, like the targeted steady state, is locally stable under learning, and arises under learning if output and inflation expectations are too pessimistic. A large temporary fiscal stimulus can dislodge the economy from the stagnation trap, and a smaller stimulus can be sufficient if applied earlier. Raising interest rates does not help in the stagnation state and at an early stage it can push the economy into the stagnation trap.
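The multiple-steady-state structure just described can be conveyed with a purely illustrative one-dimensional learning map. The toy cubic below is my own construction for intuition only, not the EHM2015 model: three fixed points, with the outer two (the "stagnation" and targeted states) locally stable under learning and the middle (unintended) one unstable, so that sufficiently pessimistic expectations converge to stagnation.

```python
# Toy expectations dynamics with three steady states at -2 (stagnation),
# 0 (unintended, unstable) and 2 (targeted). Units and the functional
# form are purely illustrative assumptions.
gamma = 0.5

def h(x):
    # Expectation-revision term: zero at the three steady states; its sign
    # pattern makes -2 and 2 attracting and 0 repelling under learning.
    return -0.05 * x * (x**2 - 4.0)

def limit(x0, T=500):
    x = x0
    for _ in range(T):
        x += gamma * h(x)   # adaptive revision of expectations
    return x

print(round(limit(0.5), 3), round(limit(-0.5), 3))  # prints: 2.0 -2.0
```

Expectations starting even slightly above the unstable middle state are drawn to the targeted state, while those starting below it sink to the stagnation state, which is the sense in which pessimism is self-confirming and early stimulus matters.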
In summary, the learning approach argues forcefully against the neo-Fisherian view.
1With infinitely-lived agents there are several natural implementations of optimizing decision rules, including short-horizon Euler-equation or shadow-price learning approaches (see, e.g., Evans and Honkapohja (2006) and Evans and McGough (2015a)) and the anticipated utility or infinite-horizon approaches of Preston (2005) and Eusepi and Preston (2010).
2An additional advantage of using learning is that learning dynamics give expanded scope for fitting the data as well as explaining experimental findings.
3The TE map is the basis for the map at the core of any specified learning scheme, which in turn determines the associated stability conditions.
4There are also connections to both the infinite-horizon learning approach to anticipated policy developed in Evans, Honkapohja, and Mitra (2009) and the eductive stability framework in Evans, Guesnerie, and McGough (2015).
5This point is repeated in Section 6.4 of Cochrane (2015): “The main point: such models presume that the Fed induces instability in an otherwise stable economy, a non-credible off-equilibrium threat to hyperinflate the economy for all but one chosen equilibrium.”
6And the risk of sinking into deflation clearly has been a major concern for policymakers in the US, during and following both the 2001 recession and the 2007 - 2009 recession. It has remained a concern in Europe and Japan, as it was in Japan during the 1990s.
7Experimental work with stylized NK economies has found that entering deflation traps is a real possibility. See Hommes and Salle (2015).
8See also Evans (2013) for a partial and less general version of this argument.
Benhabib, J., G. W. Evans, and S. Honkapohja (2014): “Liquidity Traps and Expectation Dynamics: Fiscal Stimulus or Fiscal Austerity?,” Journal of Economic Dynamics and Control, 45, 220—238.
Benhabib, J., S. Schmitt-Grohe, and M. Uribe (2001): “The Perils of Taylor Rules,” Journal of Economic Theory, 96, 40—69.
Bray, M., and N. Savin (1986): “Rational Expectations Equilibria, Learning, and Model Specification,” Econometrica, 54, 1129—1160.
Bullard, J. (2010): “Seven Faces of The Peril,” Federal Reserve Bank of St. Louis Review, 92, 339—352.
Cochrane, J. H. (2009): “Can Learnability Save New Keynesian Models?,” Journal of Monetary Economics, 56, 1109—1113.
_______ (2011): “Determinacy and Identification with Taylor Rules,” Journal of Political Economy, 119, 565—615.
_______ (2015): “Do Higher Interest Rates Raise or Lower Inflation?,” Working paper, University of Chicago Booth School of Business.
Dixon, H., and N. Rankin (eds.) (1995): The New Macroeconomics: Imperfect Markets and Policy Effectiveness. Cambridge University Press, Cambridge UK.
Eusepi, S., and B. Preston (2010): “Central Bank Communication and Expectations Stabilization,” American Economic Journal: Macroeconomics, 2, 235—271.
Evans, G. W. (2013): “The Stagnation Regime of the New Keynesian Model and Recent US Policy,” in Sargent and Vilmunen (2013), chap. 4.
Evans, G. W., R. Guesnerie, and B. McGough (2015): “Eductive Stability in Real Business Cycle Models,” mimeo.
Evans, G. W., E. Guse, and S. Honkapohja (2008): “Liquidity Traps, Learning and Stagnation,” European Economic Review, 52, 1438—1463.
Evans, G. W., and S. Honkapohja (2001): Learning and Expectations in Macroeconomics. Princeton University Press, Princeton, New Jersey.
_______ (2003): “Expectations and the Stability Problem for Optimal Monetary Policies,” Review of Economic Studies, 70, 807—824.
_______ (2006): “Monetary Policy, Expectations and Commitment,” Scandinavian Journal of Economics, 108, 15—38.
Evans, G. W., S. Honkapohja, and K. Mitra (2009): “Anticipated Fiscal Policy and Learning,” Journal of Monetary Economics, 56, 930—953.
_______ (2015): “Expectations, Stagnation and Fiscal Policy,” Working paper, University of Oregon.
Evans, G. W., and B. McGough (2015a): “Learning to Optimize,” mimeo, University of Oregon.
_______ (2015b): “Observability and Equilibrium Selection,” mimeo, University of Oregon.
Evans, G. W., and G. Ramey (1992): “Expectation Calculation and Macroeconomic Dynamics,” American Economic Review, 82, 207—224.
_______ (1995): “Expectation Calculation, Hyperinflation and Currency Collapse,” in Dixon and Rankin (1995), chap. 15, pp. 307—336.
_______ (1998): “Calculation, Adaptation and Rational Expectations,” Macroeconomic Dynamics, 2, 156—182.
Garcia-Schmidt, M., and M. Woodford (2015): “Are Low Interest Rates Deflationary? A Paradox of Perfect Foresight Analysis,” Working paper, Columbia University.
Hicks, J. R. (1946): Value and Capital, Second edition. Oxford University Press, Oxford UK.
Hommes, C. H., and I. Salle (2015): “Monetary and Fiscal Policy Design at the Zero Lower Bound: Evidence from the Lab,” mimeo, CeNDEF, University of Amsterdam.
Howitt, P. (1992): “Interest Rate Control and Nonconvergence to Rational Expectations,” Journal of Political Economy, 100, 776—800.
Lucas, Jr., R. E. (1986): “Adaptive Behavior and Economic Theory,” Journal of Business, Supplement, 59, S401—S426.
Marcet, A., and T. J. Sargent (1989): “Convergence of Least-Squares Learning Mechanisms in Self-Referential Linear Stochastic Models,” Journal of Economic Theory, 48, 337—368.
McCallum, B. T. (2009): “Inflation Determination with Taylor Rules: Is New-Keynesian Analysis Critically Flawed?,” Journal of Monetary Economics, 56, 1101—1108.
Preston, B. (2005): “Learning about Monetary Policy Rules when Long- Horizon Expectations Matter,” International Journal of Central Banking, 1, 81—126.
Sargent, T. J., and J. Vilmunen (eds.) (2013): Macroeconomics at the Service of Public Policy. Oxford University Press.
Monday, December 28, 2015
Karla Hoff and Joe Stiglitz:
Striving for Balance in Economics: Towards a Theory of the Social Determination of Behavior, by Karla Hoff and Joseph E. Stiglitz, NBER Working Paper No. 21823, issued December 2015: Abstract This paper is an attempt to broaden the standard economic discourse by importing insights into human behavior not just from psychology, but also from sociology and anthropology. Whereas the concept of the decision-maker is the rational actor in standard economics and, in early work in behavioral economics, the quasi-rational actor influenced by the context of the moment of decision-making, in some recent work in behavioral economics the decision-maker could be called the enculturated actor. This actor's preferences and cognition are subject to two deep social influences: (a) the social contexts to which he has become exposed and, especially, accustomed; and (b) the cultural mental models—including categories, identities, narratives, and worldviews—that he uses to process information. We trace how these factors shape individual behavior through the endogenous determination of both preferences and the lenses through which individuals see the world—their perception, categorization, and interpretation of situations. We offer a tentative taxonomy of the social determinants of behavior and describe results of controlled and natural experiments that only a broader view of the social determinants of behavior can plausibly explain. The perspective suggests new tools to promote well-being and economic development. [Open Link]
A short recent article in the Journal of Artificial Societies and Social Simulation by Venturini, Jensen, and Latour lays out a critique of the explanatory strategy associated with agent-based modeling of complex social phenomena (link). (Thanks to Mark Carrigan for the reference via Twitter; @mark_carrigan.) Tommaso Venturini is an expert on digital media networks at Sciences Po (link), Pablo Jensen is a physicist who works on social simulations, and Bruno Latour is -- Bruno Latour. Readers who recall recent posts here on the strengths and weaknesses of ABM models as a basis for explaining social conflict will find the article interesting (link). VJ&L argue that agent-based models -- really, all simulations that proceed from the micro to the macro -- are both flawed and unnecessary. They are flawed because they unavoidably resort to assumptions about agents and their environments that reduce the complexity of social interaction to an unacceptable denominator; and they are unnecessary because it is now possible to trace directly the kinds of processes of social interaction that simulations are designed to model. The "big data" available concerning individual-to-individual interactions permits direct observation of most large social processes, or so they hold.
Here are the key criticisms of ABM methodology that the authors advance:
- Most of them, however, partake of the same conceptual approach in which individuals are taken as discrete and interchangeable 'social atoms' (Buchanan 2007) out of which social structures emerge as macroscopic characteristics (viscosity, solidity...) emerge from atomic interactions in statistical physics (Bandini et al. 2009). (1.2)
- most simulations work only at the price of simplifying the properties of micro-agents, the rules of interaction and the nature of macro-structures so that they conveniently fit each other. (1.4)
- micro-macro models assume by construction that agents at the local level are incapable to understand and control the phenomena at the global level. (1.5)
And here is their key claim:
- Empirical studies show that, contrarily to what most social simulations assume, collective action does not originate at the micro level of individual atoms and does not end up in a macro level of stable structures. Instead, actions distribute in intricate and heterogeneous networks that fold and deploy creating differences but not discontinuities. (1.11)
This final statement could serve as a high-level paraphrase of actor-network theory, as presented by Latour in Reassembling the Social: An Introduction to Actor-Network-Theory. (Here is a brief description of actor-network theory and its minimalist social ontology; link.)
These criticisms parallel some of my own misgivings about simulation models, though I am somewhat more sympathetic to their use than VJ&L. Here are some of the concerns raised in earlier posts about the validity of various ABM approaches to social conflict (link, link):
- Simulations often produce results that appear to be artifacts rather than genuine social tendencies.
- Simulations leave out important features of the social world that are prima facie important to outcomes: for example, quality of leadership, quality and intensity of organization, content of appeals, differential pathways of appeals, and variety of political psychologies across agents.
- The influence of organizations is particularly important, and it is non-local.
- Simulations need to incorporate actors at a range of levels, from individual to club to organization.
And here is the conclusion I drew in that post:
- But it is very important to recognize the limitations of these models as predictors of outcomes in specific periods and locations of unrest. These simulation models probably don't shed much light on particular episodes of contention in Egypt or Tunisia during the Arab Spring. The "qualitative" theories of contention that have been developed probably shed more light on the dynamics of contention than the simulations do at this point in their development.
But the confidence expressed by VJ&L in the new observability of social processes through digital tracing seems excessive to me. They offer a few good examples that support their case -- opinion change, for example (1.9). Here they argue that it is possible to map or track opinion change directly through digital footprints of interaction (Twitter, Facebook, blogging), and this is superior to abstract modeling of opinion change through social networks. No doubt we can learn something important about the dynamics of opinion change through this means.
But this is a very special case. Can we similarly "map" the spread of new political ideas and slogans during the Arab Spring? No, because the vast majority of those present in Tahrir Square were not tweeting and texting their experiences. Can we map the spread of anti-Muslim attitudes in Gujarat in 2002 leading to massive killings of Muslims in a short period of time? No, for the same reason: activists and nationalist gangs did not do us the historical courtesy of posting their thought processes in their Twitter feeds either. Can we study the institutional realities of the fiscal system of the Indonesian state through its digital traces? No. Can we study the prevalence and causes of official corruption in China through digital traces? Again, no.
In other words, there is a huge methodological problem with the idea of digital traceability, deriving from the fact that most social activity leaves no digital traces. There are problem areas where the traces are more accessible and more indicative of the underlying social processes; but this is a far cry from the utopia of total social legibility that appears to underlie the viewpoint expressed here.
So I'm not persuaded that the tools of digital tracing provide the full alternative to social simulation that these authors assert. And this implies that social simulation tools remain an important component of the social scientist's toolbox.
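To make the object of the VJ&L critique concrete, here is a minimal sketch of the kind of micro-to-macro simulation under discussion: a Granovetter-style threshold cascade of collective action. The 5% "zealot" seed and the uniform threshold distribution are illustrative assumptions of mine, not features of any particular published model.

```python
import random

random.seed(0)

N = 1000
# 5% "zealots" always participate; every other agent joins once the
# current participation share reaches a personal threshold drawn
# uniformly at random (a Granovetter-style cascade). Both choices are
# illustrative assumptions, not estimates -- exactly the kind of
# convenient simplification of micro-agents that VJ&L criticize.
thresholds = [0.0 if i < N // 20 else random.random() for i in range(N)]

share = 0.0  # fraction of agents currently participating
while True:
    new_share = sum(t <= share for t in thresholds) / N
    if new_share == share:  # fixed point: nobody else will join
        break
    share = new_share

print(f"equilibrium participation share: {share:.2f}")
```

Whether the cascade stalls near the 5% seed or sweeps the whole population depends entirely on the assumed threshold distribution, which is precisely the critics' point: the macro outcome is baked into micro assumptions that are rarely measured.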
Wednesday, December 16, 2015
Must-Read: Kevin Hoover: The Methodology of Empirical Macroeconomics: The combination of representative-agent modeling and utility-based “microfoundations” was always a game of intellectual Three-Card Monte. Why do you ask? Why don’t we fund sociologists to investigate for what reasons–other than being almost guaranteed to produce conclusions ideologically-pleasing to some–it has flourished for a generation in spite of having no empirical support and no theoretical coherence?
Kevin Hoover: The Methodology of Empirical Macroeconomics: “Given what we know about representative-agent models…
…there is not the slightest reason for us to think that the conditions under which they should work are fulfilled. The claim that representative-agent models provide microfundations succeeds only when we steadfastly avoid the fact that representative-agent models are just as aggregative as old-fashioned Keynesian macroeconometric models. They do not solve the problem of aggregation; rather they assume that it can be ignored. ...
Saturday, November 28, 2015
Paul Krugman on macroeconomic models:
Demand, Supply, and Macroeconomic Models: I’m supposed to do a presentation next week about “shifts in economic models,” which has me trying to systematize my thought about what the crisis and aftermath have and haven’t changed my understanding of macroeconomics. And it seems to me that there is an important theme here: it’s the supply side, stupid. ...
Friday, November 20, 2015
Some Big Changes in Macroeconomic Thinking from Lawrence Summers: ...At a truly fascinating and intense conference on the global productivity slowdown we hosted earlier this week, Lawrence Summers put forward some newly and forcefully formulated challenges to the macroeconomic status quo in his keynote speech. [pdf] ...
The first point Summers raised ... pointed out that a major global trend over the last few decades has been the substantial disemployment—or withdrawal from the workforce—of relatively unskilled workers. ... In other words, it is a real puzzle to observe simultaneously multi-year trends of rising non-employment of low-skilled workers and declining measured productivity growth. ...
Another related major challenge to standard macroeconomics Summers put forward ... came in response to a question about whether he exaggerated the displacement of workers by technology. ... Summers bravely noted that if we suppose the “simple” non-economists who thought technology could destroy jobs without creating replacements in fact were right after all, then the world in some aspects would look a lot like it actually does today...
The third challenge ... Summers raised is perhaps the most profound... In a working paper the Institute just released, Olivier Blanchard, Eugenio Cerutti, and Summers examine essentially all of the recessions in the OECD economies since the 1960s, and find strong evidence that in most cases the level of GDP is lower five to ten years afterward than any prerecession forecast or trend would have predicted. In other words, to quote Summers’ speech..., “the classic model of cyclical fluctuations, that assume that they take place around the given trend is not the right model to begin the study of the business cycle. And [therefore]…the preoccupation of macroeconomics should be on lower frequency fluctuations that have consequences over long periods of time [that is, recessions and their aftermath].”
I have a lot of sympathy for this view. ... The very language we use to speak of business cycles, of trend growth rates, of recoveries to those perhaps non-stationary trends, and so on—which reflects the underlying mental framework of most macroeconomists—would have to be rethought.
Productivity-based growth requires disruption in economic thinking just as it does in the real world.
The full text explains these points in more detail (I left out one point on the measurement of productivity).
Friday, November 13, 2015
Part of an interview of Dani Rodrik:
Q. You give a couple of examples in the book of the way theoretical errors can lead to policy errors. The first example you give concerns the “efficient markets hypothesis”. What role did an overestimation of the scope and explanatory power of that hypothesis play in the run-up to the global financial crisis of 2007-08?
A. If we take as our central model one under which the efficient markets hypothesis is correct—and that’s a model where there are a number of critical assumptions: one is rationality (we rule out behavioural aspects like bandwagons, excessive optimism and so on); second, we rule out externalities and agency problems—there’s a natural tendency in the policy world to liberalise as many markets as possible and to make regulation as light as possible. In the run-up to the financial crisis, if you’d looked at the steady increase in house prices or the growth of the shadow banking system from the perspective of the efficient markets hypothesis, they wouldn’t have bothered you at all. You’d tell a story about how wonderful financial liberalisation and innovation are—so many people, who didn’t have access before to mortgages, were now able to afford houses; here was a supreme example of free markets providing social benefits.
But if you took the same [set of] facts, and applied the kind of models that people who had been looking at sovereign debt crises in emerging markets had been developing—boom and bust cycles, behavioural biases, agency problems, externalities, too-big-to-fail problems—if you applied those tools to the same facts, you’d get a very different kind of story. I wish we’d put greater weight on stories of the second kind rather than the first. We’d have been better off if we’d done so.
Tuesday, November 03, 2015
Advanced economies are so sick we need a new way to think about them: ...

Hysteresis Effects: Blanchard, Cerutti, and I look at a sample of over 100 recessions from industrial countries over the last 50 years and examine their impact on long run output levels in an effort to understand what Blanchard and I had earlier called hysteresis effects. We find that in the vast majority of cases output never returns to previous trends. Indeed there appear to be more cases where recessions reduce the subsequent growth of output than where output returns to trend. In other words, "super hysteresis," to use Larry Ball's term, is more frequent than "no hysteresis." ...
In subsequent work Antonio Fatas and I have looked at the impact of fiscal policy surprises on long run output and long run output forecasts using a methodology pioneered by Blanchard and Leigh. ... We find that fiscal policy changes have large continuing effects on levels of output suggesting the importance of hysteresis. ...
Towards a New Macroeconomics: My separate comments in the volume develop an idea I have pushed with little success for a long time. Standard new Keynesian macroeconomics essentially abstracts away from most of what is important in macroeconomics. To an even greater extent this is true of the DSGE (dynamic stochastic general equilibrium) models that are the workhorse of central bank staffs and much practically oriented academic work.
Why? New Keynesian models imply that stabilization policies cannot affect the average level of output over time and that the only effect policy can have is on the amplitude of economic fluctuations not on the level of output. This assumption is problematic...
As macroeconomics was transformed in response to the Depression of the 1930s and the inflation of the 1970s, another 40 years later it should again be transformed in response to stagnation in the industrial world. Maybe we can call it the Keynesian New Economics.
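A toy simulation can make the hysteresis point concrete: in a trend-stationary model a recession shock dies out and output returns to its pre-recession trend, while under hysteresis (a unit root in the level of output) the loss is permanent and any pre-recession forecast stays too high forever. The trend growth rate, shock size, and persistence parameter below are illustrative assumptions of mine, not estimates from the Blanchard-Cerutti-Summers paper.

```python
# Toy comparison of a trend-stationary economy and a hysteresis
# (unit-root) economy hit by the same one-off 5% recession shock.
# Trend growth, shock size, and persistence are illustrative only.

T = 40                 # quarters simulated
g = 0.005              # log-output trend growth per quarter
shock_t, shock = 10, -0.05
rho = 0.7              # persistence of the cyclical gap (trend-stationary case)

gap, cum = 0.0, 0.0
log_ts, log_ur, trend = [], [], []
for t in range(T):
    e = shock if t == shock_t else 0.0
    gap = rho * gap + e    # trend-stationary: the gap decays back toward zero
    cum += e               # unit root: shocks accumulate permanently
    trend.append(g * t)
    log_ts.append(g * t + gap)
    log_ur.append(g * t + cum)

print("log-output gap vs pre-recession trend at the end of the sample:")
print(f"  trend-stationary: {log_ts[-1] - trend[-1]:+.4f}")
print(f"  hysteresis:       {log_ur[-1] - trend[-1]:+.4f}")
```

In the trend-stationary case the end-of-sample gap is essentially zero; in the unit-root case it remains the full 5%. That gap between the two models is exactly what Summers argues the data look like after most OECD recessions.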
Friday, October 30, 2015
The missing lowflation revolution: It will soon be eight years since the US Federal Reserve decided to bring its interest rate down to 0%. Other central banks have spent a similar number of years (or much longer in the case of Japan) stuck at the zero lower bound. In these eight years central banks have used all their available tools to bring inflation closer to their target and boost growth, with limited success. GDP growth has been weak or anemic, and there is very little hope that economies will ever go back to their pre-crisis trends.
Some of these trends have challenged the traditional view of academic economists and policy makers about how an economy works. ...
My own sense is that the view among academics and policy makers is not changing fast enough and some are just assuming that this is a one-time event that will not be repeated in the future (even if we are still not out of the current event!).
The comparison with the 70s, when stagflation produced a large change in the way academics and policy makers thought about their models and about the framework for monetary policy, is striking. During those years a high-inflation, low-growth environment created a revolution among academics (moving away from the simple Phillips Curve) and policy makers (switching to anti-inflationary and independent central banks). How many more years of zero interest rates will it take to witness a similar change in our economic analysis?
Wednesday, October 14, 2015
In case this is something you want to discuss (if not, that's okay too -- I got tired of this debate long, long ago):
In Search of the Science in Economics, by Noah Smith: ...I’d like to discuss the idea that economics is only a social science, and should discard its mathematical pretensions and return to a more literary approach.
First, let’s talk about the idea that when you put the word “social” in front of “science,” everything changes. The idea here is that you can’t discover hard-and-fast principles that govern human behavior, or the actions of societies, the way physicists derive laws of motion for particles or biologists identify the actions of the body’s various systems. You hear people say this all the time.
But is it true? As far as I can tell, it’s just an assertion, with little to back it up. No one has discovered a law of the universe that you can’t discover patterns in human societies. Sure, it’s going to be hard -- a human being is vastly more complicated than an electron. But there is no obvious reason why the task is hopeless.
To the contrary, there have already been a great many successes. ...
What about math? .... I do think economists would often benefit from closer observation of the real world. ... But that doesn’t mean math needs to go. Math allows quantitative measurement and prediction, which literary treatises do not. ...
So yes, social science can be science. There will always be a place in the world for people who walk around penning long, literary tomes full of vague ideas about how humans and societies function. But thanks to quantitative social science, we now have additional tools at our disposal. Those tools have already improved our world, and to throw them away would be a big mistake.
This is from a post of mine in August, 2009 on the use of mathematics in economics:
Lucas roundtable: Ask the right questions, by Mark Thoma: In his essay, Robert Lucas defends macroeconomics against the charge that it is "valueless, even harmful", and that the tools economists use are "spectacularly useless".
I agree that the analytical tools economists use are not the problem. We cannot fully understand how the economy works without employing models of some sort, and we cannot build coherent models without using analytic tools such as mathematics. Some of these tools are very complex, but there is nothing wrong with sophistication so long as sophistication itself does not become the main goal, and sophistication is not used as a barrier to entry into the theorist's club rather than an analytical device to understand the world.
But all the tools in the world are useless if we lack the imagination needed to build the right models. Models are built to answer specific questions. When a theorist builds a model, it is an attempt to highlight the features of the world the theorist believes are the most important for the question at hand. For example, a map is a model of the real world, and sometimes I want a road map to help me find my way to my destination, but other times I might need a map showing crop production, or a map showing underground pipes and electrical lines. It all depends on the question I want to answer. If we try to make one map that answers every possible question we could ever ask of maps, it would be so cluttered with detail it would be useless. So we necessarily abstract from real world detail in order to highlight the essential elements needed to answer the question we have posed. The same is true for macroeconomic models.
But we have to ask the right questions before we can build the right models.
The problem wasn't the tools that macroeconomists use, it was the questions that we asked. The major debates in macroeconomics had nothing to do with the possibility of bubbles causing a financial system meltdown. That's not to say that there weren't models here and there that touched upon these questions, but the main focus of macroeconomic research was elsewhere. ...
The interesting question to me, then, is why we failed to ask the right questions. ... Was it lack of imagination, was it the sociology within the profession, the concentration of power over what research gets highlighted, the inadequacy of the tools we brought to the problem, the fact that nobody will ever be able to predict these types of events, or something else?
It wasn't the tools, and it wasn't lack of imagination. As Brad DeLong points out, the voices were there—he points to Michael Mussa for one—but those voices were not heard. Nobody listened even though some people did see it coming. So I am more inclined to cite the sociology within the profession or the concentration of power as the main factors that caused us to dismiss these voices. ...
I don't know for sure the extent to which the ability of a small number of people in the field to control the academic discourse led to a concentration of power that stood in the way of alternative lines of investigation, or the extent to which the ideology that market prices always tend to move toward their long-run equilibrium values and that markets will self-insure, caused us to ignore voices that foresaw the developing bubble and coming crisis. But something caused most of us to ask the wrong questions, and to dismiss the people who got it right, and I think one of our first orders of business is to understand how and why that happened.
Monday, October 05, 2015
Andrew Chang and Phillip Li:
Is Economics Research Replicable? Sixty Published Papers from Thirteen Journals Say “Usually Not”, by Andrew C. Chang and Phillip Li, Finance and Economics Discussion Series 2015-083. Washington: Board of Governors of the Federal Reserve System: Abstract We attempt to replicate 67 papers published in 13 well-regarded economics journals using author-provided replication files that include both data and code. Some journals in our sample require data and code replication files, and other journals do not require such files. Aside from 6 papers that use confidential data, we obtain data and code replication files for 29 of 35 papers (83%) that are required to provide such files as a condition of publication, compared to 11 of 26 papers (42%) that are not required to provide data and code replication files. We successfully replicate the key qualitative result of 22 of 67 papers (33%) without contacting the authors. Excluding the 6 papers that use confidential data and the 2 papers that use software we do not possess, we replicate 29 of 59 papers (49%) with assistance from the authors. Because we are able to replicate less than half of the papers in our sample even with help from the authors, we assert that economics research is usually not replicable. We conclude with recommendations on improving replication of economics research.
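The headline rates quoted in the abstract follow directly from the counts it reports; a quick check (the category labels are my own shorthand, not the authors'):

```python
# Verify the replication rates reported in the Chang-Li abstract:
# each entry is (successes, sample size, reported percentage).
reported = {
    "files obtained (journals requiring them)": (29, 35, 83),
    "files obtained (journals not requiring them)": (11, 26, 42),
    "replicated without contacting authors": (22, 67, 33),
    "replicated with author help (excl. 6 confidential, 2 software)": (29, 59, 49),
}
for label, (k, n, pct) in reported.items():
    assert round(100 * k / n) == pct  # reported figure matches the counts
    print(f"{label}: {k}/{n} = {k / n:.0%}")
```

All four percentages in the abstract are consistent with the underlying counts, including the 59-paper denominator obtained by excluding the 6 confidential-data and 2 unavailable-software papers from the full sample of 67.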
Saturday, September 26, 2015
From an interview with Olivier Blanchard:
...IMF Survey: In pushing the envelope, you also hosted three major Rethinking Macroeconomics conferences. What were the key insights and what are the key concerns on the macroeconomic front?
Blanchard: Let me start with the obvious answer: That mainstream macroeconomics had taken the financial system for granted. The typical macro treatment of finance was a set of arbitrage equations, under the assumption that we did not need to look at who was doing what on Wall Street. That turned out to be badly wrong.
But let me give you a few less obvious answers:
The financial crisis raises a potentially existential crisis for macroeconomics. Practical macro is based on the assumption that there are fairly stable aggregate relations, so we do not need to keep track of each individual, firm, or financial institution—that we do not need to understand the details of the micro plumbing. We have learned that the plumbing, especially the financial plumbing, matters: the same aggregates can hide serious macro problems. How do we do macro then?
As a result of the crisis, a hundred intellectual flowers are blooming. Some are very old flowers: Hyman Minsky’s financial instability hypothesis. Kaldorian models of growth and inequality. Some propositions that would have been considered anathema in the past are being proposed by "serious" economists: For example, monetary financing of the fiscal deficit. Some fundamental assumptions are being challenged, for example the clean separation between cycles and trends: Hysteresis is making a comeback. Some of the econometric tools, based on a vision of the world as being stationary around a trend, are being challenged. This is all for the best.
Finally, there is a clear swing of the pendulum away from markets towards government intervention, be it macro prudential tools, capital controls, etc. Most macroeconomists are now solidly in a second best world. But this shift is happening with a twist—that is, with much skepticism about the efficiency of government intervention. ...
Sunday, September 13, 2015
Botox for Development: In a talk at the World Bank that I gave last week, I repeated a riff that I’ve used before. Suppose your internist told you:The x-ray shows a mass that is probably cancer, but we don’t have any good randomized clinical trials showing that your surgeon’s recommendation, operating to remove it, actually causes the remission that tends to follow. However, we do have an extremely clever clinical trial showing conclusively that Botox will make you look younger. So my recommendation is that you wait for some better studies before doing anything about the tumor but that I give you some Botox injections.”
If it were me, I’d get a new internist.
To be sure, researchers would always prefer data from randomized treatments... Unfortunately, randomization is not free. It is available at low or moderate cost for some treatments and at a prohibitively high cost for other potentially important treatments. ...
I work on high expected-return policies that can be implemented, with no concern about whether I will be able to publish the results from this work in the standard economics journals....
I have the good fortune of knowing that I can be a successful academic even if the journals will not publish results from the work I do. I realize that many other economists do not have this freedom. I understand that they have to respond to the incentives they face, and that the publication process biases their work in the direction of policies that are more like Botox than surgery.
But we can all work to change the existing equilibrium. It is good that economists pay careful attention to identification and causality. This inclination will be even more important as new sources of “big” non-experimental data become available. But it is not the only good thing we can do. We have to weigh the tradeoffs we face between getting precise answers about such policies as setting up women’s self-help groups (the example that Lant Pritchett uses as his illustration of what I am calling Botox for economic development) versus such other policies as facilitating urbanization or migration that offer returns that are uncertain but have an expected value that is larger by many orders of magnitude.
If economists can’t understand the tradeoff between risk and expected return, who can?
Saturday, September 05, 2015
Range of reactions to realism about the social world: My recent post on realism in the social realm generated quite a bit of commentary, which I'd like to address here.
Brad DeLong offered an incredulous response -- he seems to think that any form of scientific realism is ridiculous (link). He refers to the predictive success of Ptolemy's epicycles, and then says, "But just because your theory is good does not mean that the entities in your theory are "really there", whatever that might mean...." I responded on Twitter: "Delong doesn't like scientific realism -- really? Electrons, photons, curvature of space - all convenient fictions?" The position of instrumentalism is intellectually untenable, in my opinion -- the idea that scientific theories are just convenient computational devices for summarizing a range of observations. It is hard to see why we would have confidence in any complex technology depending on electricity, light, gravity, the properties of metals and semiconductors, if we didn't think that our scientific theories of these things were approximately true of real things in the world. So general rejection of scientific realism seems irrational to me. But the whole point of the post was that this reasoning doesn't extend over to the social sciences very easily; if we are to be realists about social entities, it needs to be on a different basis than the overall success of theories like Keynesianism, Marxism, or Parsonian sociology. They just aren't that successful!
There were quite a few comments (71) when Mark Thoma reposted this piece on economistsview. A number of the commentators were particularly interested in the question of the realism of economic knowledge. Daniel Hausman addresses the question of realism in economics in his article on the philosophy of economics in the Stanford Encyclopedia of Philosophy (link):

Economic methodologists have paid little attention to debates within philosophy of science between realists and anti-realists (van Fraassen 1980, Boyd 1984), because economic theories rarely postulate the existence of unobservable entities or properties, apart from variants of "everyday unobservables," such as beliefs and desires. Methodologists have, on the other hand, vigorously debated the goals of economics, but those who argue that the ultimate goals are predictive (such as Milton Friedman) do so because of their interest in policy, not because they seek to avoid or resolve epistemological and semantic puzzles concerning references to unobservables.
Examples of economic concepts that commentators seemed to think could be interpreted realistically include concepts such as "economic disparity". But this isn't a particularly arcane or unobservable theoretical concept. There is a lot of back-and-forth on the meaning of investment in Keynes's theory -- is it a well-defined concept? Is it a concept that can be understood realistically? The question of whether economics consists of a body of theory that might be interpreted realistically is a complicated one. Many technical economic concepts seem not to be referential; instead, they seem to be abstract concepts summarizing the results of large numbers of interactions by economic agents.
The most famous discussion of realism in economics is that offered by Milton Friedman in relation to the idea of economic rationality (Essays in Positive Economics); he doubts that economists need to assume that real economic actors actually choose on the basis of economic rationality. Rather, according to Friedman, this is just a simplifying assumption that allows us to summarize a vast range of behavior. This is a hard position to accept, though; if agents are not making calculating choices about costs and benefits, then why should we expect a market to work in the ways our theories say it should? (Here is a good critique by Bruce Caldwell of Friedman's instrumentalism; link.)
And what about the concept of a market itself? Can we understand this concept realistically? Do markets really exist? Maybe the most we can say is something like this: there are many social settings where stuff is produced and exchanged. When exchange is solely or primarily governed by the individual self-interest of the buyers and sellers, we can say that a market exists. But we must also be careful to add that there are many different institutional and social settings where this condition is satisfied, so there is great variation across the particular "market settings" of different societies and communities. As a result, we need to be careful not to reify the concept of a market across all settings.
Michiel van Ingen made a different sort of point about my observations about social realism in his comment offered on Facebook. He thinks I am too easy on the natural sciences.

This piece strikes me as problematic. First, because physics is by no means as successful at prediction as it seems to suggest. A lot of physics is explanatorily quite powerful, but - like any other scientific discipline - can only predict in systemically closed systems. Contrasting physics with sociology and political science because the latter 'do not consist of unified deductive systems whose empirical success depends upon a derivation of distant observational consequences' is therefore unnecessarily dualistic. In addition, I'm not sure why the 'inference to the best explanation' element should be tied to predictive success as closely as it is in this piece. Inference to the best explanation is, by its very definition, perfectly applicable to EXPLANATION. And this applies across the sciences, whether 'natural' or 'social', though of course there is a significant difference between those sciences in which experimentation is plausible and helpful, and those in which it is not. This is not, by the way, the same as saying that natural sciences are experimental and social ones aren't. There are plenty of natural sciences which are largely non-experimental as well. And lest we forget, the hypothetico-deductive form of explanation DOES NOT WORK IN THE NATURAL SCIENCES EITHER!
This critique comes from the general idea that the natural sciences need a bit of debunking, in that various areas of natural science fail to live up to the positivist ideal of a precise predictive system of laws. That is fair enough; there are areas of imprecision and uncertainty in the natural sciences. But, as I responded to DeLong above, the fact remains that we have a very good understanding of much of the physical realities and mechanisms that generate the phenomena we live with. Here is the response I offered Michiel:

Thank you, Michiel, for responding so thoughtfully. Your comments and qualifications about the natural sciences are correct, of course, in a number of ways. But really, I think we post-positivists need to recognize that the core areas of fundamental and classical physics, electromagnetic theory, gravitation theory, and chemistry including molecular biology, are remarkably successful in unifying, predicting, and explaining the phenomena within these domains. They are successful because extensive and mathematicized theories have been developed and extended, empirically tested, refined, and deployed to help account for new phenomena. And these theories, as big chunks, make assertions about the way nature works. This is where realism comes in: the chunks of theories about the nature of the atom, electromagnetic forces, gravitation, etc., can be understood to be approximately true of nature because otherwise we would have no way to account for the remarkable ability of these theories to handle new phenomena.
So I haven't been persuaded to change my mind about social realism as a result of these various comments. The grounds for realism about social processes, structures, and powers are different for many social sciences than for many natural sciences. We can probe quite a bit of the social world through mid-level and piecemeal research methods -- which means that we can learn quite a bit about the nature of the social world through these methods. Here is the key finding:

So it seems that we can justify being realists about class, field, habitus, market, coalition, ideology, organization, value system, ethnic identity, institution, and charisma, without relying at all on the hypothetico-deductive model of scientific knowledge upon which the "inference to the best explanation" argument depends. We can look at sociology and political science as loose ensembles of empirically informed theories and models of meso-level social processes and mechanisms, each of which is to a large degree independently verifiable. And this implies that social realism should be focused on mid-level social mechanisms and processes that can be identified in the domains of social phenomena that we have studied rather than sweeping concepts of social structures and entities.
(Sometimes social media debates give the impression of a nineteenth-century parliamentary shouting match -- which is why the Daumier drawing came to mind!)
Thursday, August 27, 2015
The day macroeconomics changed: It is of course ludicrous, but who cares. The day of the Boston Fed conference in 1978 is fast taking on a symbolic significance. It is the day that Lucas and Sargent changed how macroeconomics was done. Or, if you are Paul Romer, it is the day that the old guard spurned the ideas of the newcomers, and ensured we had a New Classical revolution in macro rather than a New Classical evolution. Or if you are Ray Fair..., who was at the conference, it is the day that macroeconomics started to go wrong.
Ray Fair is a bit of a hero of mine. ...
I agree with Ray Fair that what he calls Cowles Commission (CC) type models, and I call Structural Econometric Model (SEM) type models, together with the single equation econometric estimation that lies behind them, still have a lot to offer, and that academic macro should not have turned its back on them. Having spent the last fifteen years working with DSGE models, I am more positive about their role than Fair is. Unlike Fair, I want “more bells and whistles on DSGE models”. I also disagree about rational expectations...
Three years ago, when Andy Haldane suggested that DSGE models were partly to blame for the financial crisis, I wrote a post that was critical of Haldane. What I thought then, and continue to believe, is that the Bank had the information and resources to know what was happening to bank leverage, and it should not have used DSGE models as an excuse for not being more public about its concerns at the time.
However, if we broaden this out from the Bank to the wider academic community, I think he has a legitimate point. ...
What about the claim that only internally consistent DSGE models can give reliable policy advice? For another project, I have been rereading an AEJ Macro paper written in 2008 by Chari et al, where they argue that New Keynesian models are not yet useful for policy analysis because they are not properly microfounded. They write “One tradition, which we prefer, is to keep the model very simple, keep the number of parameters small and well-motivated by micro facts, and put up with the reality that such a model neither can nor should fit most aspects of the data. Such a model can still be very useful in clarifying how to think about policy.” That is where you end up if you take a purist view about internal consistency, the Lucas critique and all that. It in essence amounts to the following approach: if I cannot understand something, it is best to assume it does not exist.
Wednesday, August 26, 2015
The Future of Macro: There is an interesting set of recent blogs---Paul Romer 1, Paul Romer 2, Brad DeLong, Paul Krugman, Simon Wren-Lewis, and Robert Waldmann---on the history of macro beginning with the 1978 Boston Fed conference, with Lucas and Sargent versus Solow. As Romer notes, I was at this conference and presented a 97-equation model. This model was in the Cowles Commission (CC) tradition, which, as the blogs note, quickly went out of fashion after 1978. (In the blogs, models in the CC tradition are generally called simulation models or structural econometric models or old-fashioned models. Below I will call them CC models.)
I will not weigh in on who was responsible for what. Instead, I want to focus on what future direction macro research might take. There is unhappiness in the blogs, to varying degrees, with all three types of models: DSGE, VAR, CC. Also, Wren-Lewis points out that while other areas of economics have become more empirical over time, macroeconomics has become less so. The aim is for internal theoretical consistency rather than the ability to track the data.
I am one of the few academics who has continued to work with CC models. They were rejected for basically three reasons: they do not assume rational expectations (RE), they are not identified, and the theory behind them is ad hoc. This sounds serious, but I think it is in fact not. ...
He goes on to explain why. He concludes with:
... What does this imply about the best course for future research? I don't get a sense from the blog discussions that either the DSGE methodology or the VAR methodology is the way to go. Of course, no one seems to like the CC methodology either, but, as I argue above, I think it has been dismissed too easily. I have three recent methodological papers arguing for its use: Has Macro Progressed?, Reflections on Macroeconometric Modeling, and Information Limits of Aggregate Data. I also show in Household Wealth and Macroeconomic Activity: 2008--2013 that CC models can be used to examine a number of important questions about the 2008--2009 recession, questions that are hard to answer using DSGE or VAR models.
So my suggestion for future macro research is not more bells and whistles on DSGE models, but work specifying and estimating stochastic equations in the CC tradition. Alternative theories can be tested and hopefully progress can be made on building models that explain the data well. We have much more data now and better techniques than we did in 1978, and we should be able to make progress and bring macroeconomics back to its empirical roots.
For those who want more detail, I have gathered all of my research in macro in one place: Macroeconometric Modeling, November 11, 2013.
Sunday, August 23, 2015
From an interview of Paul Romer in the WSJ:
...Q: What kind of feedback have you received from colleagues in the profession?
A: I tried these ideas on a few people, and the reaction I basically got was “don’t make waves.” As people have had time to react, I’ve been hearing a bit more from people who appreciate me bringing these issues to the forefront. The most interesting feedback is from young economists who say that they feel that they have to be very cautious, and they don’t want to get somebody cross at them. There’s a concern by young economists that if they deviate from what’s acceptable, they’ll get in trouble. That also seemed to me to be a sign of something that is really wrong. Young people are the ones who often come in and say, “You all have been thinking about this the wrong way, here’s a better way to think about it.”
Q: Are there any areas where research or refinements in methodology have brought us closer to understanding the economy?
A: There was an interesting Nobel prize in [economics], where they gave the prize to people who generally came to very different conclusions about how financial markets work. Gene Fama ... got it for the efficient markets hypothesis. Robert Shiller ... for this view that these markets are not efficient...
It was striking because usually when you give a prize, it’s because in the sciences, you’ve converged to a consensus. ...
Friday, August 21, 2015
Paul Romer's latest entry on "mathiness" in economics ends with:
Reactions to Solow’s Choice: ...Politics maps directly onto our innate moral machinery. Faced with any disagreement, our moral systems respond by classifying people into our in-group and the out-group. They encourage us to be loyal to members of the in-group and hostile to members of the out-group. The leaders of an in-group demand deference and respect. In selecting leaders, we prize unwavering conviction.
Science can’t function with the personalization of disagreement that these reactions encourage. The question of whether Joan Robinson is someone who is admired and respected as a scientist has to be separated from the question about whether she was right that economists could reason about rates of return in a model that does not have an explicit time dimension.
The only in-group versus out-group distinction that matters in science is the one that distinguishes people who can live by the norms of science from those who cannot. Feynman integrity is the marker of an insider.
In this group, it is flexibility that commands respect, not unwavering conviction. Clearly articulated disagreement is encouraged. Anyone’s claim is subject to challenge. Someone who is right about A can be wrong about B.
Scientists do not demonize dissenters. Nor do they worship heroes.
[The reference to Joan Robinson is clarified in the full text.]
Monday, August 17, 2015
This is the abstract, introduction, and final section of a recent paper by Joe Stiglitz on theoretical models of deep depressions (as he notes, it's "an extension of the Presidential Address to the International Economic Association"):
Towards a General Theory of Deep Downturns, by Joseph E. Stiglitz, NBER Working Paper No. 21444, August 2015: Abstract This paper, an extension of the Presidential Address to the International Economic Association, evaluates alternative strands of macro-economics in terms of the three basic questions posed by deep downturns: What is the source of large perturbations? How can we explain the magnitude of volatility? How do we explain persistence? The paper argues that while real business cycles and New Keynesian theories with nominal rigidities may help explain certain historical episodes, alternative strands of New Keynesian economics focusing on financial market imperfections, credit, and real rigidities provide a more convincing interpretation of deep downturns, such as the Great Depression and the Great Recession, giving a more plausible explanation of the origins of downturns, their depth and duration. Since excessive credit expansions have preceded many deep downturns, particularly important is an understanding of finance, the credit creation process and banking, which in a modern economy are markedly different from the way envisioned in more traditional models.
Introduction The world has been plagued by episodic deep downturns. The crisis that began in 2008 in the United States was the most recent, the deepest and longest in three quarters of a century. It came in spite of alleged “better” knowledge of how our economic system works, and belief among many that we had put economic fluctuations behind us. Our economic leaders touted the achievement of the Great Moderation. As it turned out, belief in those models actually contributed to the crisis. It was the assumption that markets were efficient and self-regulating and that economic actors had the ability and incentives to manage their own risks that had led to the belief that self-regulation was all that was required to ensure that the financial system worked well, and that there was no need to worry about a bubble. The idea that the economy could, through diversification, effectively eliminate risk contributed to complacency — even after it was evident that there had been a bubble. Indeed, even after the bubble broke, Bernanke could boast that the risks were contained. These beliefs were supported by (pre-crisis) DSGE models — models which may have done well in more normal times, but had little to say about crises. Of course, almost any “decent” model would do reasonably well in normal times. And it mattered little if, in normal times, one model did a slightly better job in predicting next quarter’s growth. What matters is predicting — and preventing — crises, episodes in which there is an enormous loss in well-being. These models did not see the crisis coming, and they had given confidence to our policy makers that, so long as inflation was contained — and monetary authorities boasted that they had done this — the economy would perform well. At best, they can be thought of as (borrowing the term from Guzman (2014)) “models of the Great Moderation,” predicting “well” so long as nothing unusual happens.
More generally, the DSGE models have done a poor job explaining the actual frequency of crises.
Of course, deep downturns have marked capitalist economies since the beginning. It took enormous hubris to believe that the economic forces which had given rise to crises in the past were either not present, or had been tamed, through sound monetary and fiscal policy. It took even greater hubris given that in many countries conservatives had succeeded in dismantling the regulatory regimes and automatic stabilizers that had helped prevent crises since the Great Depression. It is noteworthy that my teacher, Charles Kindleberger, in his great study of the booms and panics that afflicted market economies over the past several hundred years, had noted similar hubris exhibited in earlier crises. (Kindleberger, 1978)
Those who attempted to defend the failed economic models and the policies which were derived from them suggested that no model could (or should) predict well a “once in a hundred year flood.” But it was not just a hundred year flood — crises have become common. It was not just something that had happened to the economy. The crisis was man-made — created by the economic system. Clearly, something is wrong with the models.
Studying crises is important, not just to prevent these calamities and to understand how to respond to them — though I do believe that the same inadequate models that failed to predict the crisis also failed in providing adequate responses. (Although those in the US Administration boast about having prevented another Great Depression, I believe the downturn was certainly far longer, and probably far deeper, than it needed to be.) I also believe understanding the dynamics of crises can provide us insight into the behavior of our economic system in less extreme times.
This lecture consists of three parts. In the first, I will outline the three basic questions posed by deep downturns. In the second, I will sketch the three alternative approaches that have competed with each other over the past three decades, suggesting that one is a far better basis for future research than the other two. The final section will center on one aspect of that third approach that I believe is crucial — credit. I focus on the capitalist economy as a credit economy , and how viewing it in this way changes our understanding of the financial system and monetary policy. ...
He concludes with:
IV. The crisis in economics The 2008 crisis was not only a crisis in the economy, but it was also a crisis for economics — or at least that should have been the case. As we have noted, the standard models didn’t do very well. The criticism is not just that the models did not anticipate or predict the crisis (even shortly before it occurred); they did not contemplate the possibility of a crisis, or at least a crisis of this sort. Because markets were supposed to be efficient, there weren’t supposed to be bubbles. The shocks to the economy were supposed to be exogenous: this one was created by the market itself. Thus, the standard model said the crisis couldn’t or wouldn’t happen; and the standard model had no insights into what generated it.
Not surprisingly, as we again have noted, the standard models provided inadequate guidance on how to respond. Even after the bubble broke, it was argued that diversification of risk meant that the macroeconomic consequences would be limited. The standard theory also has had little to say about why the downturn has been so prolonged: Years after the onset of the crisis, large parts of the world are operating well below their potential. In some countries and in some dimension, the downturn is as bad or worse than the Great Depression. Moreover, there is a risk of significant hysteresis effects from protracted unemployment, especially of youth.
The Real Business Cycle and New Keynesian Theories got off to a bad start. They originated out of work undertaken in the 1970s attempting to reconcile the two seemingly distant branches of economics: macro-economics, centering on explaining the major market failure of unemployment, and microeconomics, the centerpiece of which was the Fundamental Theorems of Welfare Economics, demonstrating the efficiency of markets. Real Business Cycle Theory (and its predecessor, New Classical Economics) took one route: using the assumptions of standard micro-economics to construct an analysis of the aggregative behavior of the economy. In doing so, they left Hamlet out of the play: almost by assumption unemployment and other market failures didn’t exist. The timing of their work couldn’t have been worse: for it was just around the same time that economists developed alternative micro-theories, based on asymmetric information, game theory, and behavioral economics, which provided better explanations of a wide range of micro-behavior than did the traditional theory on which the “new macro-economics” was being constructed. At the same time, Sonnenschein (1972) and Mantel (1974) showed that the standard theory provided essentially no structure for macro-economics — essentially any demand or supply function could have been generated by a set of diverse rational consumers. It was the unrealistic assumption of the representative agent that gave theoretical structure to the macro-economic models that were being developed. (As we noted, New Keynesian DSGE models were but a simple variant of these Real Business Cycles, assuming nominal wage and price rigidities — with explanations, we have suggested, that were hardly persuasive.)
There are alternative models to both Real Business Cycles and the New Keynesian DSGE models that provide better insights into the functioning of the macroeconomy, and are more consistent with micro-behavior, with new developments of micro-economics, with what has happened in this and other deep downturns. While these new models differ from the older ones in a multitude of ways, at the center of these models is a wide variety of financial market imperfections and a deep analysis of the process of credit creation. These models provide alternative (and I believe better) insights into what kinds of macroeconomic policies would restore the economy to prosperity and maintain macro-stability.
This lecture has attempted to sketch some elements of these alternative approaches. There is a rich research agenda ahead.
Tuesday, August 11, 2015
My editor suggested that I might want to write about an article in New Scientist, After the crash, can biologists fix economics?, so I did:
Macroeconomics: The Roads Not Yet Taken: Anyone who is even vaguely familiar with economics knows that modern macroeconomic models did not fare well before and during the Great Recession. For example, when the recession hit many of us reached into the policy response toolkit provided by modern macro models and came up mostly empty.
The problem was that modern models were built to explain periods of mild economic fluctuations, a period known as the Great Moderation, and while the models provided very good policy advice in that setting they had little to offer in response to major economic downturns. That changed to some extent as the recession dragged on and modern models were quickly amended to incorporate important missing elements, but even then the policy advice was far from satisfactory and mostly echoed what we already knew from the “old-fashioned” Keynesian model. (The Keynesian model was built to answer the important policy questions that come with major economic downturns, so it is not surprising that amended modern models reached many of the same conclusions.)
How can we fix modern models? ...
Trash Talk and the Macroeconomic Divide: ... In Lucas and Sargent, much is made of stagflation; the coexistence of inflation and high unemployment is their main, indeed pretty much only, piece of evidence that all of Keynesian economics is useless. That was wrong, but never mind; how did they respond in the face of strong evidence that their own approach didn’t work?
Such evidence wasn’t long in coming. In the early 1980s the Federal Reserve sharply tightened monetary policy; it did so openly, with much public discussion, and anyone who opened a newspaper should have been aware of what was happening. The clear implication of Lucas-type models was that such an announced, well-understood monetary change should have had no real effect, being reflected only in the price level.
In fact, however, there was a very severe recession — and a dramatic recovery once the Fed, again quite openly, shifted toward monetary expansion.
These events definitely showed that Lucas-type models were wrong, and also that anticipated monetary shocks have real effects. But there was no reconsideration on the part of the freshwater economists; my guess is that they were in part trapped by their earlier trash-talking. Instead, they plunged into real business cycle theory (which had no explanation for the obvious real effects of Fed policy) and shut themselves off from outside ideas. ...
Tuesday, August 04, 2015
On the road again, so just a couple of quick posts. This is Paul Krugman:
Sarcasm and Science: Paul Romer continues his discussion of the wrong turn of freshwater economics, responding in part to my own entry, and makes a surprising suggestion — that Lucas and his followers were driven into their adversarial style by Robert Solow’s sarcasm...
Now, it’s true that people can get remarkably bent out of shape at the suggestion that they’re being silly and foolish. ...
But Romer’s account of the great wrong turn still sounds much too contingent to me...
At least as I perceived it then — and remember, I was a grad student as much of this was going on — there were two other big factors.
First, there was a political component. Equilibrium business cycle theory denied that fiscal or monetary policy could play a useful role in managing the economy, and this was a very appealing conclusion on one side of the political spectrum. This surely was a big reason the freshwater school immediately declared total victory over Keynes well before its approach had been properly vetted, and why it could not back down when the vetting actually took place and the doctrine was found wanting.
Second — and this may be less apparent to non-economists — there was the toolkit factor. Lucas-type models introduced a new set of modeling and mathematical tools — tools that required a significant investment of time and effort to learn, but which, once learned, let you impress everyone with your technical proficiency. For those who had made that investment, there was a real incentive to insist that models using those tools, and only models using those tools, were the way to go in all future research. ...
And of course at this point all of these factors have been greatly reinforced by the law of diminishing disciples: Lucas’s intellectual grandchildren are utterly unable to consider the possibility that they might be on the wrong track.
Sunday, August 02, 2015
Paul Krugman follows up on Paul Romer's latest attack on "mathiness":
Freshwater’s Wrong Turn (Wonkish): Paul Romer has been writing a series of posts on the problem he calls “mathiness”, in which economists write down fairly hard-to-understand mathematical models accompanied by verbal claims that don’t actually match what’s going on in the math. Most recently, he has been recounting the pushback he’s getting from freshwater macro types, who see him as allying himself with evil people like me — whereas he sees them as having turned away from science toward a legalistic, adversarial form of pleading.
You can guess where I stand on this. But in his latest, he notes some of the freshwater types appealing to their glorious past, claiming that Robert Lucas in particular has a record of intellectual transparency that should insulate him from criticism now. PR replies that Lucas once was like that, but no longer, and asks what happened.
Well, I’m pretty sure I know the answer. ...
It's hard to do an extract capturing all the points, so you'll likely want to read the full post, but in summary:
So what happened to freshwater, I’d argue, is that a movement that started by doing interesting work was corrupted by its early hubris; the braggadocio and trash-talking of the 1970s left its leaders unable to confront their intellectual problems, and sent them off on the path Paul now finds so troubling.
Recent tweets, email, etc. in response to posts I've done on mathiness reinforce just how unwilling many are to confront their tribalism. In the past, I've blamed the problems in macro on, in part, the sociology within the profession (leading to a less than scientific approach to problems as each side plays the advocacy game) and nothing that has happened lately has altered that view.
Saturday, August 01, 2015
Microfoundations 2.0?: The idea that hypotheses about social structures and forces require microfoundations has been around for at least 40 years. Maarten Janssen’s New Palgrave essay on microfoundations documents the history of the concept in economics; link. E. Roy Weintraub was among the first to emphasize the term within economics, with his 1979 Microfoundations: The Compatibility of Microeconomics and Macroeconomics. During the early 1980s the contributors to analytical Marxism used the idea to attempt to give greater grip to some of Marx's key explanations (falling rate of profit, industrial reserve army, tendency towards crisis). Several such strategies are represented in John Roemer's Analytical Marxism. My own The Scientific Marx (1986) and Varieties of Social Explanation (1991) took up the topic in detail and relied on it as a basic tenet of social research strategy. The concept is strongly compatible with Jon Elster's approach to social explanation in Nuts and Bolts for the Social Sciences (1989), though the term itself does not appear in this book or in the 2007 revised edition.
Here is Janssen's description in the New Palgrave of the idea of microfoundations in economics:

The quest to understand microfoundations is an effort to understand aggregate economic phenomena in terms of the behavior of individual economic entities and their interactions. These interactions can involve both market and non-market interactions.

In The Scientific Marx the idea was formulated along these lines:
Marxist social scientists have recently argued, however, that macro-explanations stand in need of microfoundations; detailed accounts of the pathways by which macro-level social patterns come about. (1986: 127)
The requirement of microfoundations is both metaphysical -- our statements about the social world need to admit of microfoundations -- and methodological -- it suggests a research strategy along the lines of Coleman's boat (link). This is a strategy of disaggregation, a "dissecting" strategy, and a non-threatening strategy of reduction. (I am thinking here of the very sensible ideas about the scientific status of reduction advanced in William Wimsatt's "Reductive Explanation: A Functional Account"; link).
The emphasis on the need for microfoundations is a very logical implication of the position of "ontological individualism" -- the idea that social entities and powers depend upon facts about individual actors in social interactions and nothing else. (My own version of this idea is the notion of methodological localism; link.) It is unsupportable to postulate disembodied social entities, powers, or properties for which we cannot imagine an individual-level substrate. So it is natural to infer that claims about social entities need to be accompanied in some fashion by an account of how they are embodied at the individual level; and this is a call for microfoundations. (As noted in an earlier post, Brian Epstein has mounted a very challenging argument against ontological individualism; link.)
Another reason that the microfoundations idea is appealing is that it is a very natural way of formulating a core scientific question about the social world: "How does it work?" To provide microfoundations for a high-level social process or structure (for example, the falling rate of profit), we are looking for a set of mechanisms at the level of a set of actors within a set of social arrangements that result in the observed social-level fact. A call for microfoundations is a call for mechanisms at a lower level, answering the question, "How does this process work?"
In fact, the demand for microfoundations appears to be analogous to the question, why is glass transparent? We want to know what it is about the substrate at the individual level that constitutes the macro-fact of glass transmitting light. Organization type A is prone to normal accidents. What is it about the circumstances and actions of individuals in A-organizations that increases the likelihood of normal accidents?
One reason why the microfoundations concept was specifically appealing in application to Marx's social theories in the 1970s was the fact that great advances were being made in the field of collective action theory. Then-current interpretations of Marx's theories were couched at a highly structural level; but it seemed clear that it was necessary to identify the processes through which class interest, class conflict, ideologies, or states emerged in concrete terms at the individual level. (This is one reason I found E. P. Thompson's The Making of the English Working Class (1966) so enlightening.) Advances in game theory (assurance games, prisoners' dilemmas), Mancur Olson's demonstration of the gap between group interest and individual interest in The Logic of Collective Action: Public Goods and the Theory of Groups (1965), Thomas Schelling's brilliant unpacking of puzzling collective behavior onto underlying individual behavior in Micromotives and Macrobehavior (1978), Russell Hardin's further exposition of collective action problems in Collective Action (1982), and Robert Axelrod's discovery of the underlying individual behaviors that produce cooperation in The Evolution of Cooperation (1984) provided social scientists with new tools for reconstructing complex collective phenomena based on simple assumptions about individual actors. These were very concrete analytical resources that promised to help further explanations of complex social behavior. They provided a degree of confidence that important sociological questions could be addressed using a microfoundations framework.
There are several important recent challenges to aspects of the microfoundations approach, however.
So what are the recent challenges? First, there is the idea that social properties are sometimes emergent in a strong sense: not derivable from facts about the components. This would seem to imply that microfoundations are not possible for such properties.
Second, there is the idea that some meso entities have stable causal properties that do not require explicit microfoundations in order to be scientifically useful. (An example would be Perrow's claim that certain forms of organizations are more conducive to normal accidents than others.) If we take this idea very seriously, then perhaps microfoundations are not crucial in such theories.
Third, there is the idea that meso entities may sometimes exert downward causation: they may influence events in the substrate which in turn influence other meso states, implying that there will be some meso-level outcomes for which there cannot be microfoundations exclusively located at the substrate level.
All of this implies that we need to take a fresh look at the theory of microfoundations. Is there a role for this concept in a research metaphysics in which only a very weak version of ontological individualism is postulated; where we give some degree of autonomy to meso-level causes; where we countenance either a weak or strong claim of emergence; and where we admit of full downward causation from some meso-level structures to patterns of individual behavior?
In one sense my own thinking about microfoundations has already incorporated some of these concerns; I've arrived at "microfoundations 1.1" in my own formulations. In particular, I have put aside the idea that explanations must incorporate microfoundations and instead embraced the weaker requirement of availability of microfoundations (link). Essentially I relaxed the requirement to stipulate only that we must be confident that microfoundations exist, without actually producing them. And I've relied on the idea of "relative explanatory autonomy" to excuse the sociologist from the need to reproduce the microfoundations underlying the claim he or she advances (link).
But is this enough? There are weaker positions that could serve to replace the MF thesis. For now, the question is this: does the concept of microfoundations continue to do important work in the meta-theory of the social sciences?
I've talked about this many times, but it's worth making the point about aggregating from individual agents to macroeconomic aggregates once again (it bears, for one, on the emergent properties objection above; representative agent models are used because they seem to avoid the aggregation issue). This is from Kevin Hoover:
... Exact aggregation requires that utility functions be identical and homothetic … Translated into behavioral terms, it requires that every agent subject to aggregation have the same preferences (you must share the same taste for chocolate with Warren Buffett) and those preferences must be the same except for a scale factor (Warren Buffett with an income of $10 billion per year must consume one million times as much chocolate as Warren Buffett with an income of $10,000 per year). This is not the world that we live in. The Sonnenschein-Mantel-Debreu theorem shows theoretically that, in an idealized general-equilibrium model in which each individual agent has a regularly specified preference function, aggregate excess demand functions inherit only a few of the regularity properties of the underlying individual excess demand functions: continuity, homogeneity of degree zero (i.e., the independence of demand from simple rescalings of all prices), Walras’s law (i.e., the sum of the value of all excess demands is zero), and that demand rises as price falls (i.e., that demand curves, ceteris paribus income effects, are downward sloping) … These regularity conditions are very weak and put so few restrictions on aggregate relationships that the theorem is sometimes called “the anything goes theorem.”
The importance of the theorem for the representative-agent model is that it cuts off any facile analogy between even empirically well-established individual preferences and preferences that might be assigned to a representative agent to rationalize observed aggregate demand. The theorem establishes that, even in the most favorable case, there is a conceptual chasm between the microeconomic analysis and the macroeconomic analysis. The reasoning of the representative-agent modelers would be analogous to a physicist attempting to model the macro-behavior of a gas by treating it as a single, room-size molecule. The theorem demonstrates that there is no warrant for the notion that the behavior of the aggregate is just the behavior of the individual writ large: the interactions among the individual agents, even in the most idealized model, shape in an exceedingly complex way the behavior of the aggregate economy. Not only does the representative-agent model fail to provide an analysis of those interactions, but it seems likely that they will defy an analysis that insists on starting with the individual, and it is certain that no one knows at this point how to begin to provide an empirically relevant analysis on that basis.
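Hoover's point about exact aggregation can be seen in a few lines. Here is a minimal numeric sketch (my illustration, not from the quoted text), assuming two Cobb-Douglas consumers who differ only in a single taste parameter:

```python
# Two consumers with Cobb-Douglas utility u_i = a_i*log(x) + (1 - a_i)*log(y).
# Each spends the share a_i of income m_i on good x, so demand is x_i = a_i*m_i/p.

def aggregate_demand_x(p, incomes, weights):
    """Sum of individual Marshallian demands for good x at price p."""
    return sum(a * m / p for a, m in zip(weights, incomes))

p = 1.0
weights = [0.2, 0.8]  # heterogeneous tastes: exact aggregation fails

# Same total income (100), two different distributions of it:
d_equal = aggregate_demand_x(p, [50, 50], weights)   # 0.2*50 + 0.8*50 = 50
d_skewed = aggregate_demand_x(p, [90, 10], weights)  # 0.2*90 + 0.8*10 = 26

# A representative agent's demand would depend only on total income and the
# price, yet aggregate demand here moves when income is merely redistributed:
print(d_equal, d_skewed)
```

Only when the taste parameters are identical (and preferences homothetic, exactly as Hoover says) does the income distribution drop out of aggregate demand.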
Friday, July 31, 2015
More from Paul Romer:
Freshwater Feedback Part 1: “Everybody does it”: You can boil my claim about mathiness down to two assertions:
1. Economist N did X.
2. X is wrong because it undermines the scientific method.
#1 is a positive assertion, a statement about “what is …” #2 is a normative assertion, a statement about “what ought …” As you would expect from an economist, the normative assertion in #2 is based on what I thought would be a shared premise: that the scientific method is a better way to determine what is true about economic activity than any alternative method, and that knowing what is true is valuable.
In conversations with economists who are sympathetic to the freshwater economists I singled out for criticism in my AEA paper on mathiness, it has become clear that freshwater economists do not share this premise. What I did not anticipate was their assertion that economists do not follow the scientific method, so it is not realistic or relevant to make normative statements of the form “we ought to behave like scientists.”
In a series of three posts that summarize what I have learned since publishing that paper, I will try to stick to positive assertions, that is, assertions about the facts, concerning this difference between the premises that freshwater economists take for granted and the premises that I and other economists take for granted.
In my conversations, the freshwater sympathizers generally have not disagreed with my characterization of the facts in assertion #1–that specific freshwater economists did X. In their response, two themes recur:
a) Yes, but everybody does X; that is how the adversarial method works.
b) By selectively expressing disapproval of this behavior by the freshwater economists that you name, you, Paul, are doing something wrong because you are helping “those guys.”
In the rest of this post, I’ll address response a). In a subsequent post, I’ll address response b). Then in a third post, I’ll observe that in my AEA paper, I also criticized a paper by Piketty and Zucman, who are not freshwater economists. The response I heard back from them was very different from the response from the freshwater economists. In short, Piketty and Zucman disagreed with my statement that they did X, but they did not dispute my assertion that X would be wrong because it would be a violation of the scientific method.
Together, the evidence I summarize in these three posts suggests that freshwater economists differ sharply from other economists. This evidence strengthens my belief that the fundamental divide here is between the norms of political discourse and the norms of scientific discourse. Lawyers and politicians both engage in a version of the adversarial method, but they differ in another crucial way. In the suggestive terminology introduced by Jon Haidt in his book The Righteous Mind, lawyers are selfish, but politicians are groupish. What is distinctive about the freshwater economists is that their groupishness depends on a narrow definition of group that sharply separates them from all other economists. One unfortunate result of this narrow groupishness may be that the freshwater economists do not know the facts about how most economists actually behave. ...[continue]...
Wednesday, July 29, 2015
More from Paul Romer on "mathiness" -- this time the use of math in finance to obfuscate communication with regulators:
Using Math to Obfuscate — Observations from Finance: The usual narrative suggests that the new mathematical tools of modern finance were like the wings that Daedalus gave Icarus. The people who put these tools to work soared too high and crashed.
In two posts, here and here, Tim Johnson notes that two government investigations (one in the UK, the other in the US) tell a different tale. People in finance used math to hide what they were doing.
One of the premises I used to take for granted was that an argument presented using math would be more precise than the corresponding argument presented using words. Under this model, words from natural language are more flexible than math. They let us refer to concepts we do not yet fully understand. They are like rough prototypes. Then as our understanding grows, we use math to give words more precise definitions and meanings. ...
I assumed that because I was trying to use math to reason more precisely and to communicate more clearly, everyone would use it the same way. I knew that math, like words, could be used to confuse a reader, but I assumed that all of us who used math operated in a reputational equilibrium where obfuscating would be costly. I expected that in this equilibrium, we would see only the use of math to clarify and lend precision.
Unfortunately, I was wrong even about the equilibrium in the academic world, where mathiness is in fact used to obfuscate. In the world of for-profit finance, the return to obfuscation in communication with regulators is much higher, so there is every reason to expect that mathiness would be used liberally, particularly in mandated disclosures. ...
We should expect that there will be mistakes in math, just as there are mistakes in computer code. We should also expect some inaccuracies in the verbal claims about what the math says. A small number of errors of either type should not be a cause for alarm, particularly if the math is presented transparently so that readers can check the math itself and check whether it aligns with the words. In contrast, either opaque math or ambiguous verbal statements about the math should be grounds for suspicion. ...
Mathiness–exposition characterized by a systematic divergence between what the words say and what the math implies–should be rejected outright.
Sunday, July 19, 2015
This is by David Warsh:
The Rivals, Economic Principals: When Keynes died, in April 1946, The Times of London gave him the best farewell since Nelson after Trafalgar: “To find an economist of comparable influence one would have to go back to Adam Smith.” A few years later, Alvin Hansen, of Harvard University, Keynes’ leading disciple in the United States, wrote, “It may be a little too early to claim that, along with Darwin’s Origin of Species and Marx’s Capital, The General Theory is one of the most significant books which have appeared in the last hundred years. … But… it continues to gain in importance.”
In fact, the influence of Keynes’ book, as opposed to the vision of “macroeconomics” at the heart of it, and the penumbra of fame surrounding it, already had begun its downward arc. Civilians continued to read the book, more for its often sparkling prose than for the clarity of its argument. Among economists, intermediaries and translators had emerged in various communities to explain the insights the great man had sought to convey. Speaking of the group in Cambridge, Massachusetts, Robert Solow put it this way, many years later: “We learned not as much from it – it was…almost unreadable – as from a number of explanatory articles that appeared on all our graduate school reading lists.”
Instead it was another book that ushered in an era of economics very different from the age before. Foundations of Economic Analysis, by Paul A. Samuelson, important parts of it written as much as ten years before, appeared in 1947. “Mathematics is a Language,” proclaimed its frontispiece; equations dominated nearly every page. “It might be still too early to tell how the discoveries of the 1930s would pan out,” Samuelson wrote delicately in the introduction, but their value could be ascertained only by expressing them in mathematical models whose properties could be thoroughly explored and tested. “The laborious literary working-over of essentially simple mathematical concepts such as is characteristic of much of modern economic theory is not only unrewarding from the standpoint of advancing the science, but involves as well mental gymnastics of a particularly depraved type.”
Foundations had won a prize as a dissertation, so Harvard University was required to publish it as a book. In Samuelson’s telling, the department chairman had to be forced to agree to printing a thousand copies, dragged his feet, and then permitted its laboriously hand-set plates to be melted down for other uses after 887 copies were run off. Thus Foundations couldn’t be revised in subsequent printings, until a humbled Harvard University Press republished an “enlarged edition” with a new introduction and a mathematical appendix in 1983. When Samuelson biographer Roger Backhouse went through the various archival records, he concluded that the delay could be explained by production difficulties and the recycling of the lead type amid postwar exigencies at the Press.
It didn’t matter. With the profession, Samuelson soon would win the day.
The “new” economics that he represented – the earliest developments had commenced in the years after World War I – conquered the profession, high and low. The next year Samuelson published an introductory textbook, Economics, to inculcate the young. Macroeconomic theory was to be put to work to damp the business cycle and, especially, avoid the tragedy of another Great Depression. The new approach swiftly attracted a community away from alternative modes of inquiry, in the expectation that it would yield new solutions to the pressing problem of depression-prevention. Alfred Marshall’s Principles of Economics eventually would be swept completely off the table. Foundations was a paradigm in the Kuhnian sense.
At the very zenith of Samuelson’s success, another sort of book appeared, in 1962, A Monetary History of the United States, 1869-1960, by Milton Friedman and Anna Schwartz, published by the National Bureau of Economic Research. At first glance, the two books had nothing to do with one another. A Monetary History harkened back to approaches that had been displaced by Samuelsonian methods – “hypotheses” instead of theorems; charts instead of models; narrative, not econometric analytics. The volume did little to change the language that Samuelson had established. Indeed, economists at the University of Chicago, Friedman’s stronghold, were on the verge of adapting a new, still-higher mathematical style to the general equilibrium approach that Samuelson had pioneered.
Yet one interpretation of the relationship between the price system and the Daedalean wings that A Monetary History contained was sufficiently striking as to reopen a question thought to have been settled. A chapter of their book, “The Great Contraction,” contained an interpretation of the origins of the Great Depression that gradually came to overshadow the rest. As J. Daniel Hammond has written,
The “Great Contraction” marked a watershed in thinking about the greatest economic calamity in modern times. Until Friedman and Schwartz provoked the interest of economists by rehabilitating monetary history and theory, neither economic theorists nor economic historians devoted as much attention to the Depression as historians.
So you could say that some part of the basic agenda of the next fifty years was ordained by the rivalry that began in the hour that Samuelson and Friedman became aware of each other, perhaps in the autumn of 1932, when both turned up at the recently-completed Social Science Research Building of the University of Chicago, at the bottom of the Great Depression. Excellent historians, with access to extensive archives, have been working on both men’s lives and work: Hammond, of Wake Forest University, has largely completed his project on Friedman; Backhouse, of the University of Birmingham, is finishing a volume on Samuelson’s early years. Neither author has yet come across a frank recollection by either man of those first few meetings. Let’s hope one or more second-hand accounts turn up in the papers of the many men and women who knew them then. When I asked Friedman about their relationship in 2005, he deferred to his wife, who, somewhat uncomfortably, mentioned a differential in privilege. I lacked the temerity to ask Samuelson directly the last couple of times we talked; he clearly didn’t enjoy discussing it.
Biography is no substitute for history, much less for theory and history of thought, and journalism is, at best, only a provisional substitute for biography. But one way of understanding what happened in economics in the twentieth century is to view it as an argument between Samuelson and Friedman that lasted nearly eighty years, until one aspect of it, at least, was resolved by the financial crisis of 2008. The departments of economics they founded in Cambridge and Chicago, headquarters in the long wars between the Keynesians and the monetarists, came to be the Athens and Sparta of their day. ...[continue reading]...
[There is much, much more in the full post.]
Tuesday, July 07, 2015
Why Germany wants rid of Greece: When I recently visited Berlin, it quickly became clear the extent to which Germany had created a fantasy story about Greece. It was an image of Greeks as a privileged and lazy people, who kept on taking ‘bailouts’ while refusing to do anything to correct their situation. I heard this fantasy from talking to people who were otherwise well informed and knowledgeable about economics.
So powerful has this fantasy become, it is now driving German policy (and policy in a few other countries as well) in totally irrational ways. ... What is driving Germany’s desperate need to rid itself of the Greek problem? ...
Germany is a country where the ideas of Keynes, and therefore mainstream macroeconomics in the rest of the world, are considered profoundly wrong and are described as ‘Anglo-Saxon economics’. Greece then becomes a kind of experiment to see which is right: the German view, or ‘Anglo-Saxon economics’.
The results of the experiment are not to Germany’s liking. ... Confronting this reality has been too much for Germany. So instead it has created its fantasy, a fantasy that allows it to cast its failed experiment to one side, blaming the character of the patient.
The only thing particularly German about this process is the minority status of Keynesian economics within German economic policy advice. In the past I have drawn parallels between what is going on here and the much more universal tendency for poverty to be explained in terms of the personal failings of the poor. These attempts to deflect criticism of economic systems are encouraged by political interests and a media that supports them, as we are currently seeing in the UK. So much easier to pretend that the problems of Greece lie with its people, or culture, or politicians, or its resistance to particular ‘structural reforms’, than to admit that Greece’s real problem is of your making.
Friday, July 03, 2015
Behavioral Economics is Rational After All: There are some deep and interesting issues involved in the debate over behavioral economics. ...
My point here is that neoclassical economics can absorb the criticisms of the behaviourists without a major shift in its underlying assumptions. The 'anomalies' pointed out by psychologists are completely consistent with maximizing behaviour, as long as we do not impose any assumptions on the form of the utility function defined over goods that are dated and indexed by state of nature.
There is a deeper, more fundamental critique. If we assert that the form of the utility function is influenced by 'persuasion', then we lose the intellectual foundation for much of welfare economics. That is a much more interesting project that requires us to rethink what we mean by individualism. ...
Sunday, June 14, 2015
Dietz Vollrath explains the "mathiness" debate (and also Euler's theorem in a part of the post I left out). Glad he's interpreting Romer -- it's very helpful:
What Assumptions Matter for Growth Theory?: The whole “mathiness” debate that Paul Romer started tumbled onwards this week... I was able to get a little clarity in this whole “price-taking” versus “market power” part of the debate. I’ll circle back to the actual “mathiness” issue at the end of the post.
There are really two questions we are dealing with here. First, do inputs to production earn their marginal product? Second, do the owners of non-rival ideas have market power or not? We can answer the first without having to answer the second.
Just to refresh, a production function tells us that output is determined by some combination of non-rival inputs and rival inputs. Non-rival inputs are things like ideas that can be used by many firms or people at once without limiting the use by others. Think of blueprints. Rival inputs are things that can only be used by one person or firm at a time. Think of nails. The income earned by both rival and non-rival inputs has to add up to total output.
Okay, given all that setup, here are three statements that could be true.
1. Output is constant returns to scale in rival inputs.
2. Non-rival inputs receive some portion of output.
3. Rival inputs receive output equal to their marginal product.
Romer’s argument is that (1) and (2) are true. (1) he asserts through replication arguments, like my example of replicating Earth. (2) he takes as an empirical fact. Therefore, (3) cannot be true. If the owners of non-rival inputs are compensated in any way, then it is necessarily true that rival inputs earn less than their marginal product. Notice that I don’t need to say anything about how the non-rival inputs are compensated here. But if they earn anything, then from Romer’s assumptions the rival inputs cannot be earning their marginal product.
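The arithmetic behind this trilemma is Euler's theorem for homogeneous functions. A small sketch (my own, with a Cobb-Douglas technology chosen purely for concreteness):

```python
# With Y = A * K**alpha * L**(1-alpha), output is constant returns to scale
# in the rival inputs (K, L).  Euler's theorem then gives MPK*K + MPL*L = Y:
# paying rival inputs their marginal products exhausts output, leaving
# nothing with which to compensate the non-rival input A.

alpha, A, K, L = 0.3, 2.0, 100.0, 50.0

Y = A * K**alpha * L**(1 - alpha)
MPK = alpha * Y / K        # marginal product of capital
MPL = (1 - alpha) * Y / L  # marginal product of labor

payments_to_rival_inputs = MPK * K + MPL * L
residual_for_ideas = Y - payments_to_rival_inputs

print(abs(round(residual_for_ideas, 9)))  # 0.0 -- (1)-(3) cannot all hold
```

If the owners of A are to be paid anything at all, at least one rival input must receive less than its marginal product, which is exactly the incompatibility described above.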
Different authors have made different choices than Romer. McGrattan and Prescott abandoned (1) in favor of (2) and (3). Boldrin and Levine dropped (2) and accepted (1) and (3). Romer’s issue with these papers is that (1) and (2) are clearly true, so writing down a model that abandons one of these assumptions gives you a model that makes no sense in describing growth. ...
The “mathiness” comes from authors trying to elide the fact that they are abandoning (1) or (2). ...
[There's a lot more in the full post. Also, Romer comments on Vollrath here.]
Saturday, June 06, 2015
Seems like much the same can be said about modern macroeconomics (except perhaps the "given the field its credibility" part):
A Crisis at the Edge of Physics, by Adam Frank and Marcelo Gleiser, NY Times: Do physicists need empirical evidence to confirm their theories?
You may think that the answer is an obvious yes, experimental confirmation being the very heart of science. But a growing controversy at the frontiers of physics and cosmology suggests that the situation is not so simple.
A few months ago in the journal Nature, two leading researchers, George Ellis and Joseph Silk, published a controversial piece called “Scientific Method: Defend the Integrity of Physics.” They criticized a newfound willingness among some scientists to explicitly set aside the need for experimental confirmation of today’s most ambitious cosmic theories — so long as those theories are “sufficiently elegant and explanatory.” Despite working at the cutting edge of knowledge, such scientists are, for Professors Ellis and Silk, “breaking with centuries of philosophical tradition of defining scientific knowledge as empirical.”
Whether or not you agree with them, the professors have identified a mounting concern in fundamental physics: Today, our most ambitious science can seem at odds with the empirical methodology that has historically given the field its credibility. ...
Wednesday, June 03, 2015
This is the introduction to a relatively new working paper by Cigdem Gizem Korpeoglu and Stephen Spear (sent in response to my comment that I've been disappointed with the development of new alternatives to the standard NK-DSGE models):
Coordination Equilibrium and Price Stickiness, by Cigdem Gizem Korpeoglu (University College London) and Stephen E. Spear (Carnegie Mellon): 1 Introduction Contemporary macroeconomic theory rests on the three pillars of imperfect competition, nominal price rigidity, and strategic complementarity. Of these three, nominal price rigidity (aka price stickiness) has been the most important. The stickiness of prices is a well-established empirical fact, with early observations about the phenomenon going back to Alfred Marshall. Because the friction of price stickiness cannot occur in markets with perfect competition, modern micro-founded models (New Keynesian or NK models, for short) have been forced to abandon the standard Arrow-Debreu paradigm of perfect competition in favor of models where agents have market power and set market prices for their own goods. Strategic complementarity enters the picture as a mechanism for explaining the kinds of coordination failures that lead to sustained slumps like the Great Depression or the aftermath of the 2008 financial crisis. Early work by Cooper and John laid out the importance of these three features for macroeconomics, and follow-on work by Ball and Romer showed that failure to coordinate on price adjustments could itself generate strategic complementarity, effectively unifying two of the three pillars.
Not surprisingly, the Ball and Romer work was based on earlier work by a number of authors (see Mankiw and Romer's New Keynesian Economics) which used the model of Dixit and Stiglitz of monopolistic competition as the basis for price-setting behavior in a general equilibrium setting, combined with the idea of menu costs -- literally the cost of posting and communicating price changes -- and exogenously-specified adjustment time staggering to provide the friction(s) leading to nominal rigidity. While these models perform well in explaining aspects of the business cycle, they have only recently been subjected to what one would characterize as thorough empirical testing, because of the scarcity of good data on how prices actually change. This has changed in the past decade as new sources of data on price dynamics have become available, and as computational power capable of teasing out what might be called the "fine structure" of these dynamics has emerged. On a different dimension, the overall suitability of monopolistic competition as the appropriate form of market imperfection to use as the foundation of the new macro models has been largely unquestioned, though we believe this is largely due to the tractability of the Dixit-Stiglitz model relative to other models of imperfect competition generated by large fixed costs or increasing returns to scale not due to specialization.
In this paper, we examine both of these underlying assumptions in light of what the new empirics on pricing dynamics has found, and propose a different, and we believe, better microfoundation for New Keynesian macroeconomics based on the Shapley-Shubik market game.
Monday, June 01, 2015
Some snippets from a Justin Fox interview of Richard Thaler:
Question: What are the most valuable things that you got out of studying economics in graduate school?
Answer: The main thing that you learn in grad school, or should learn, is how to think like an economist. The rest is just math. You learn a bunch of models and you learn econometrics -- it’s tools. Some people who are so inclined concentrate on having sharper tools than anybody else, but my tools were kind of dull. So the main thing was just, “How does an economist think about a problem?” And then in my case it was, “That’s really how they think about a problem?” ...
Q: One thing that’s striking -- Kahneman calls it “theory-induced blindness” -- is this sense that if you have a really nice theory, even when someone throws up evidence that the theory doesn’t hold there’s this strong disposition to laugh at the evidence and not even present counterevidence.
A: Because they know you’re wrong. This has happened to me. It still happens. Not as often, but people have what economists would call strong priors. Even our football paper, which may be my favorite of all my papers, we had a hell of a time getting that published,... people saying, “That can’t be right, firms wouldn’t leave that much money on the table.” Economists can think that because none of them have ever worked in firms. Anybody who’s ever been in a large organization realizes that optimizing is not a word that would often be used to describe any large organization. The reason is that it’s full of people, who are complicated. ...
Q: The whole idea of nudging, generally it’s been very popular, but there’s a subgroup of people who react so allergically to it.
A: I think most of the criticism at least here comes from the right. These are choice-preserving policies. Now you’d think they would like those. I have yet to read a criticism that really gets the point that we probably make on the fourth page of the book, which is that there’s no avoiding nudging. Like in a cafeteria: You have to arrange the food somehow. You can’t arrange it at random. That would be a chaotic cafeteria. ...
Paul Krugman says I'm not upbeat enough about the state of macroeconomics:
The Case of the Missing Minsky: Gavyn Davies has a good summary of the recent IMF conference on rethinking macro; Mark Thoma has further thoughts. Thoma in particular is disappointed that there hasn’t been more of a change, decrying
the arrogance that asserts that we have little to learn about theory or policy from the economists who wrote during and after the Great Depression.
Maybe surprisingly, I’m a bit more upbeat than either. Of course there are economists, and whole departments, that have learned nothing, and remain wholly dominated by mathiness. But it seems to me that economists have done OK on two of the big three questions raised by the economic crisis. What are these three questions? I’m glad you asked. ...[continue]...
Sunday, May 31, 2015
The beginning of a long discussion from Gavyn Davies:
Has the rethinking of macroeconomic policy been successful?: The great financial crash of 2008 was expected to lead to a fundamental re-thinking of macro-economics, perhaps leading to a profound shift in the mainstream approach to fiscal, monetary and international policy. That is what happened after the 1929 crash and the Great Depression, though it was not until 1936 that the outline of the new orthodoxy appeared in the shape of Keynes’ General Theory. It was another decade or more before a simplified version of Keynes was routinely taught in American university economics classes. The wheels of intellectual change, though profound in retrospect, can grind fairly slowly.
Seven years after the 2008 crash, there is relatively little sign of a major transformation in the mainstream macro-economic theory that is used, for example, by most central banks. The “DSGE” (mainly New Keynesian) framework remains the basic workhorse, even though it singularly failed to predict the crash. Economists have been busy adding a more realistic financial sector to the structure of the model, but labour and product markets, the heart of the productive economy, remain largely untouched.
What about macro-economic policy? Here major changes have already been implemented, notably in banking regulation, macro-prudential policy and most importantly the use of the central bank balance sheet as an independent instrument of monetary policy. In these areas, policy-makers have acted well in advance of macro-economic researchers, who have been struggling to catch up. ...
There has been more progress on the theoretical front than I expected, particularly in adding financial sector frictions to the NK-DSGE framework and in overcoming the restrictions imposed by the representative agent model. At the same time, there has been less progress than I expected in developing alternatives to the standard models. As far as I can tell, a serious challenge to the standard model has not yet appeared. My biggest disappointment is how much resistance there has been to the idea that we need to even try to find alternative modeling structures that might do better than those in use now, and the arrogance that asserts that we have little to learn about theory or policy from the economists who wrote during and after the Great Depression.
Tuesday, May 19, 2015
The most misleading definition in economics (draft excerpt from Economics in Two Lessons), by John Quiggin: After a couple of preliminary posts, here goes with my first draft excerpt from my planned book on Economics in Two Lessons. They won’t be in any particular order, just tossed up for comment when I think I have something that might interest readers here. To remind you, the core idea of the book is that of discussing all of economic policy in terms of “opportunity cost”. My first snippet is about
The situation where there is no way to make some people better off without making anyone worse off is often referred to as “Pareto optimal” after the Italian economist and political theorist Vilfredo Pareto, who developed the underlying concept. “Pareto optimal” is, arguably, the most misleading term in economics (and there are plenty of contenders). ...
Describing a situation as “optimal” implies that it is the unique best outcome. As we shall see, this is not the case. Pareto, and followers like Hazlitt, seek to claim unique social desirability for market outcomes by definition rather than demonstration. ...
If that were true, then only the market outcome associated with the existing distribution of property rights would be Pareto optimal. Hazlitt, like many subsequent free market advocates, implicitly assumes that this is the case. In reality, though, there are infinitely many possible allocations of property rights, and infinitely many allocations of goods and services that meet the definition of “Pareto optimality”. A highly egalitarian allocation can be Pareto optimal. So can any allocation where one person has all the wealth and everyone else is reduced to bare subsistence. ...
Sunday, May 17, 2015
Blaming Keynes: A few people have asked me to respond to this FT piece from Niall Ferguson. I was reluctant to, because it is really just a bit of triumphalist Tory tosh. That such things get published in the Financial Times is unfortunate but, I’m afraid, not surprising in this case. However, I want to write later about something else that made reference to it, so saying a few things here first might be useful.
The most important point concerns style. This is not the kind of thing an academic should want to write. It makes no attempt to be true to the evidence, and just cherry-picks numbers to support its argument. I know a small number of academics think they can drop their normal standards when it comes to writing political propaganda, but I think they are wrong to do so. ...
Paul Romer continues his assault on "mathiness":
Ed Prescott is No Robert Solow, No Gary Becker: In his comment on my Mathiness paper, Noah Smith asks for more evidence that the theory in the McGrattan-Prescott paper that I cite is any worse than the theory I compare it to by Robert Solow and Gary Becker. I agree with Brad DeLong’s defense of the Solow model. I’ll elaborate, by using the familiar analogy that theory is to the world as a map is to terrain.
There is no such thing as the perfect map. This does not mean that the incoherent scribblings of McGrattan and Prescott are on a par with the coherent, low-resolution Solow map that is so simple that all economists have memorized it. Nor with the Becker map that has become part of the everyday mental model of people inside and outside of economics.
Noah also notes that I go into more detail about the problems in the Lucas and Moll (2014) paper. Just to be clear, this is not because it is worse than the papers by McGrattan and Prescott or Boldrin and Levine. Honestly, I’d be hard pressed to say which is the worst. They all display the sloppy mixture of words and symbols that I’m calling mathiness. Each is awful in its own special way.
What should worry economists is the pattern, not any one of these papers. And our response. Why do we seem resigned to tolerating papers like this? What cumulative harm are they doing?
The resignation is why I conjectured that we are stuck in a lemons equilibrium in the market for mathematical theory. Noah’s jaded question (Is the theory of McGrattan-Prescott really any worse than the theory of Solow and Becker?) may be indicative of what many economists feel after years of being bullied by bad theory. And as I note in the paper, this resignation may be why empirically minded economists like Piketty and Zucman stay as far away from theory as possible. ...
[He goes on to give more details using examples from the papers.]