Brad DeLong wonders why Cliff Asness is clinging to a theoretical model that has clearly been rejected by the data:
... What is not normal is to claim that your analysis back in 2010 that quantitative easing was generating major risks of inflation was dead-on.
What is not normal is to adopt the mental pose that your version of classical austerian economics cannot fail--that it can only be failed by an uncooperative and misbehaving world.
What is not normal is, after 4 1/2 years, in a week, a month, a six-month period in which market expectations of long-run future inflation continue on a downward trajectory, to refuse to mark your beliefs to market and demand that the market mark its beliefs to you. To still refuse to bring your mind into agreement with reality and demand that reality bring itself into agreement with your mind. To still refuse to say: "my intellectual adversaries back in 2010 had a definite point" and to say only: "IT'S NOT OVER YET!!!!"
There's a version of this in econometrics: you know the model is correct, you are just having trouble finding evidence for it. It goes as follows. You are testing a theory you came up with, but the data are uncooperative and say you are wrong. Instead of accepting that, you tell yourself, "My theory is right, I just haven't found the right econometric specification yet. I need to add variables, remove variables, take a log, add an interaction, square a term, try a different correction for misspecification, use a different sample period, etc., etc., etc." Then, after finally digging out that one specification of the econometric model that confirms your hypothesis, you declare victory, write it up, and send it off (somehow never mentioning the intense specification mining that produced the result).
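Why specification mining "works" is easy to see with a simulation. The sketch below (a hypothetical illustration, not anyone's actual study) regresses an outcome that is pure noise on many candidate regressors that are also pure noise; searching across enough specifications almost guarantees that at least one delivers a conventionally "significant" p-value, even though every true effect is zero by construction.

```python
# Specification mining on pure noise: try many regressors, keep the
# one with the smallest p-value. Any "significant" result is spurious
# by construction. All names and parameters here are illustrative.
import math
import random

random.seed(42)

n = 50          # observations
n_specs = 200   # candidate specifications to search over

y = [random.gauss(0, 1) for _ in range(n)]  # outcome: pure noise

def ols_pvalue(x, y):
    """Two-sided p-value for the slope in a simple OLS of y on x
    (normal approximation to the t distribution, fine for n = 50)."""
    n = len(x)
    mx = sum(x) / n
    my = sum(y) / n
    sxx = sum((xi - mx) ** 2 for xi in x)
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    beta = sxy / sxx
    resid = [yi - my - beta * (xi - mx) for xi, yi in zip(x, y)]
    s2 = sum(r ** 2 for r in resid) / (n - 2)
    t = beta / math.sqrt(s2 / sxx)
    return 2 * (1 - 0.5 * (1 + math.erf(abs(t) / math.sqrt(2))))

# "Mine" specifications: each candidate regressor is independent noise,
# so the null hypothesis of no effect is true in every regression.
pvals = [ols_pvalue([random.gauss(0, 1) for _ in range(n)], y)
         for _ in range(n_specs)]

best = min(pvals)
print(f"best p-value across {n_specs} specifications: {best:.4f}")
print(f"'significant' at the 5% level: {best < 0.05}")
```

With 200 independent tries at a 5% threshold, roughly ten spurious "successes" are expected on average, which is why reporting only the winning specification, without disclosing the search, is not a real test of the theory.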
Too much econometric work proceeds along these lines. Not quite this blatantly, but that is, in effect, what happens in too many cases. I think it is often best to think of econometric results as the best case the researcher could make for a particular theory rather than a true test of the model.