
Tuesday, December 02, 2008

"Frequentists vs Bayesians"

Steve Hsu:

Frequentists vs Bayesians, by Steve Hsu: Noted Berkeley statistician David Freedman recently passed away. I recommend the essay below if you are interested in the argument between frequentists (objectivists) and Bayesians (subjectivists). I never knew Freedman, but based on his writings I think I would have liked him very much -- he was clearly an independent thinker :-)

In everyday life I tend to be sympathetic to the Bayesian point of view, but as a physicist I am willing to entertain the possibility of true quantum randomness.

I wish I understood better some of the foundational questions mentioned below. In the limit of infinite data, will two Bayesians always agree, regardless of their priors? Are the exceptions contrived?
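Hsu's question can be poked at numerically. A minimal sketch (my own, not from the post): two Bayesians with sharply different Beta priors on a coin's bias observe the same flips, and as the sample grows their posterior means converge, i.e. the data swamp the prior. The priors and the true bias here are arbitrary choices for illustration.

```python
# Two Bayesians, two Beta priors, one stream of coin flips: watch the
# posterior means converge as n grows (hypothetical illustration).
import random

random.seed(0)
true_p = 0.3  # true bias of the coin (unknown to both Bayesians)

# Bayesian A: Beta(1, 1), a flat prior.
# Bayesian B: Beta(50, 5), a strong prior belief that the coin favors heads.
a1, b1 = 1.0, 1.0
a2, b2 = 50.0, 5.0

for n in (10, 100, 10_000):
    heads = sum(random.random() < true_p for _ in range(n))
    # Beta-Binomial conjugacy: posterior mean = (alpha + heads) / (alpha + beta + n)
    mean1 = (a1 + heads) / (a1 + b1 + n)
    mean2 = (a2 + heads) / (a2 + b2 + n)
    print(f"n={n:6d}  posterior means: {mean1:.3f} vs {mean2:.3f}")
```

At n = 10 the strong prior dominates Bayesian B's opinion; by n = 10,000 both posterior means sit close to the truth. Freedman's caveat below is that this comfortable picture can fail in complex, high-dimensional settings.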

Some Issues in the Foundations of Statistics

Abstract: After sketching the conflict between objectivists and subjectivists on the foundations of statistics, this paper discusses an issue facing statisticians of both schools, namely, model validation. Statistical models originate in the study of games of chance, and have been successfully applied in the physical and life sciences. However, there are basic problems in applying the models to social phenomena; some of the difficulties will be pointed out. Hooke’s law will be contrasted with regression models for salary discrimination, the latter being a fairly typical application in the social sciences.

...The subjectivist position seems to be internally consistent, and fairly immune to logical attack from the outside. Perhaps as a result, scholars of that school have been quite energetic in pointing out the flaws in the objectivist position. From an applied perspective, however, the subjectivist position is not free of difficulties. What are subjective degrees of belief, where do they come from, and why can they be quantified? No convincing answers have been produced. At a more practical level, a Bayesian’s opinion may be of great interest to himself, and he is surely free to develop it in any way that pleases him; but why should the results carry any weight for others? To answer the last question, Bayesians often cite theorems showing "inter-subjective agreement:" under certain circumstances, as more and more data become available, two Bayesians will come to agree: the data swamp the prior. Of course, other theorems show that the prior swamps the data, even when the size of the data set grows without bounds-- particularly in complex, high-dimensional situations. (For a review, see Diaconis and Freedman, 1986.) Theorems do not settle the issue, especially for those who are not Bayesians to start with.

My own experience suggests that neither decision-makers nor their statisticians do in fact have prior probabilities. A large part of Bayesian statistics is about what you would do if you had a prior. For the rest, statisticians make up priors that are mathematically convenient or attractive. Once used, priors become familiar; therefore, they come to be accepted as "natural" and are liable to be used again; such priors may eventually generate their own technical literature. ...

It is often urged that to be rational is to be Bayesian. Indeed, there are elaborate axiom systems about preference orderings, acts, consequences, and states of nature, whose conclusion is-- that you are a Bayesian. The empirical evidence shows, fairly clearly, that those axioms do not describe human behavior at all well. The theory is not descriptive; people do not have stable, coherent prior probabilities.

Now the argument shifts to the "normative:" if you were rational, you would obey the axioms, and be a Bayesian. This, however, assumes what must be proved. Why would a rational person obey those axioms? The axioms represent decision problems in schematic and highly stylized ways. Therefore, as I see it, the theory addresses only limited aspects of rationality. Some Bayesians have tried to win this argument on the cheap: to be rational is, by definition, to obey their axioms. ...

How do we learn from experience? What makes us think that the future will be like the past? With contemporary modeling techniques, such questions are easily answered-- in form if not in substance.

· The objectivist invents a regression model for the data, and assumes the error terms to be independent and identically distributed; "iid" is the conventional abbreviation. It is this assumption of iid-ness that enables us to predict data we have not seen from a training sample -- without doing the hard work of validating the model.

· The classical subjectivist invents a regression model for the data, assumes iid errors, and then makes up a prior for unknown parameters.

· The radical subjectivist adopts an exchangeable or partially exchangeable prior, and calls you irrational or incoherent (or both) for not following suit.
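The objectivist recipe in the first bullet can be sketched in a few lines. This is my own toy illustration, not Freedman's: posit y = a + b·x with iid errors, fit by least squares on a training sample, then predict at an x never observed. The iid assumption is precisely what licenses the leap from training data to unseen data.

```python
# Minimal objectivist workflow: invent a linear model, assume iid errors,
# fit on a training sample, predict out of sample (toy illustration).
import random

random.seed(1)

def fit_line(xs, ys):
    """Ordinary least squares for y = a + b*x."""
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    b = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
    a = my - b * mx
    return a, b

# Training sample generated with iid Gaussian errors -- exactly the
# assumption the objectivist makes, here true by construction.
true_a, true_b = 2.0, 0.5
xs = [i / 10 for i in range(100)]          # x in [0, 10)
ys = [true_a + true_b * x + random.gauss(0, 0.2) for x in xs]

a, b = fit_line(xs, ys)
prediction = a + b * 20.0  # extrapolate to an x outside the training range
print(f"fitted a={a:.2f}, b={b:.2f}, prediction at x=20: {prediction:.2f}")
```

Freedman's point is that in real social-science applications the iid assumption is rarely checked, so the extrapolation step is the weak link; here it works only because the data were manufactured to satisfy it.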

In our days, serious arguments have been made from data. Beautiful, delicate theorems have been proved; although the connection with data analysis often remains to be established. And an enormous amount of fiction has been produced, masquerading as rigorous science. [!!!]

    Posted by on Tuesday, December 2, 2008 at 11:52 AM in Economics, Methodology | Permalink  TrackBack (0)  Comments (15)
