From an interview of Jonathan Parker (the Robert C. Merton Professor of Finance at the Massachusetts Institute of Technology's Sloan School of Management) by the Richmond Fed:
...EF: In the paper, you consider some evolutionary arguments related to optimal expectations.
Parker: We were asked in the review process to think about why this might occur or how people might come to have these beliefs. In the paper, we have a couple of paragraphs that I think I'm still a little uncomfortable with, but the arguments run something like this: How might you get to optimal beliefs? You start out with an optimistic assessment of how easy something will be, and then you kind of think about it a little bit and if the costs of being wrong are significant, you start to downgrade your optimism. However, if you don't see any big costs, then you don't. So it suggests you approach decisions with natural optimism and then consider the consequences, and you bring beliefs back toward reality if you need to. In terms of evolution, people do lots of matching with friends, with colleagues, with potential spouses in which they project confidence about the value of matching with them, of working with them, of marrying them — and a credible, stronger belief in themselves may be useful in that process. That's not in the theory, per se, but these are stories that might help us (or a referee) believe that there's something there.
I think it's worth noting that one of the reasons I think this paper has been controversial (at least relative to our belief in the theory!) — it has gotten good citations, but it hasn't led to a lot of subsequent literature — is that it is a behavioral paper that contradicts a common belief among behavioral economists that the mistakes people make are potentially very large. Our model delivers exactly the reverse, which is that the mistakes people make are the ones that satisfy or generate these biases but do not cause or risk large negative payoffs. So it's behavioral economics that the behavioral economists don't like.
I don't think that every theory has to explain every behavior, but I also think our theory can incorporate situations in which there's an awfully large belief bias, or in which things are extreme, if one moves away from the particular frictionless, stationary, full-information environment that we studied. It might be that there are explicit costs associated with moving beliefs away from rationality. This approach might also make the model more, not less, empirically useful. And in the way we worked with optimal expectations theory, people are meta-smart — they know the true probabilities and work from those to these biased probabilities. There are situations where people may really not understand the truth at all.
Let me come at this a slightly different way. There's a set of behavioral models in which there is a belief bias and it is invariant to the costs and payoffs. And you see that pretty clearly rejected, I think, in the world and in labs. So our paper gets something right in terms of biases being disciplined by costs. The models in which biases are fixed and not responsive have the problem that people can be turned into money pumps and can make very severe errors in certain, regularly occurring states of the world. Some economists are very comfortable with the idea that people do regularly make major mistakes. Our paper lets people optimally tone down their optimistic bias and so rules out regular, really costly mistakes. But some people find that a bug and not a feature. ...