Category Archive for: Econometrics

Thursday, March 07, 2013

'The War On Entitlements'

Dean Baker's blog is called "Beat the Press," but he praised this effort (the original is quite a bit longer, and makes additional points):

The War On Entitlements, by Thomas Edsall, Commentary, NY Times: ...Currently, earned income in excess of $113,700 is entirely exempt from the 6.2 percent payroll tax that funds Social Security benefits... Simply eliminating the payroll tax earnings cap — and thus ending this regressive exemption for the top 5.2 percent of earners — would, according to the Congressional Budget Office, solve the financial crisis facing the Social Security system.
So why don’t we talk about raising or eliminating the cap – a measure that has strong popular, though not elite, support? ... The Washington cognoscenti are more inclined to discuss two main approaches...: means-testing of benefits and raising the age of eligibility for Social Security and Medicare. ... Means-testing and raising the age of eligibility as methods of cutting spending appeal to ideological conservatives for a number of reasons.
First, insofar as benefits for the affluent are reduced or eliminated under means-testing, social insurance programs are no longer universal and are seen, instead, as a form of welfare. Public support would almost certainly decline, encouraging further cuts in the future. Second, the focus on means-testing and raising the age of eligibility diverts attention from a much simpler and more equitable approach: raising the payroll tax to apply to the earnings of the well-to-do, a step strongly opposed by the ideological right. ... Third, and most important in terms of the policy debate, while both means-testing and eliminating the $113,700 cap on earnings subject to the payroll tax hurt the affluent, the latter would inflict twice as much pain. ...
Theda Skocpol ... of ... Harvard and an authority on the history of the American welfare state contended ... that policy elites avoid addressing the sharply regressive nature of social welfare taxes because, “at one level, it’s very, very privileged people wanting to make sure they cut spending on everybody else” while “holding down their own taxes.” ...

Wednesday, July 11, 2012

'Visualizing Economic Uncertainty: On the Soyer-Hogarth Experiment'

Stephen Ziliak, via email:

Dear Mark,
Does graphing improve prediction and increase understanding of uncertainty? When making economic forecasts, are scatter plots better than t-statistics, p-values, and other commonly required regression output?
A recent paper by Emre Soyer and Robin Hogarth suggests the answers are yes, that in fact we are far better forecasters when staring at plots of data than we are when dishing out – as academic journals normally do – tables of statistical significance. [Here is a downloadable version of the Soyer-Hogarth article.]
“The Illusion of Predictability: How Regression Statistics Mislead Experts” was published by Soyer and Hogarth in a symposium of the International Journal of Forecasting (vol. 28, no. 3, July 2012). The symposium includes published comments by J. Scott Armstrong, Daniel Goldstein, Keith Ord, Nassim Nicholas Taleb, and me, together with a reply from Soyer and Hogarth.
Soyer and Hogarth performed an experiment on the forecasting ability of more than 200 well-published econometricians worldwide to test their ability to predict economic outcomes using conventional outputs of linear regression analysis: standard errors, t-statistics, and R-squared.
The chief finding of the Soyer-Hogarth experiment is that the expert econometricians themselves—our best number crunchers—make better predictions when only graphical information—such as a scatter plot and theoretical linear regression line—is provided to them. Give them t-statistics and fits of R-squared for the same data and regression model and their forecasting ability declines. Give them only t-statistics and fits of R-squared and predictions fall from bad to worse.
It’s a finding that hits you between the eyes, or should. R-squared, the primary indicator of model fit, and the t-statistic, the primary indicator of coefficient fit, are - in the leading journals of economics, such as the AER, QJE, JPE, and RES - evidently doing more harm than good.
Soyer and Hogarth find that conventional presentation mode actually damages inferences from models. This harms decision-making by reducing the econometrician’s (and profit seeker’s) understanding of the total error of the experiment—or of what might be called the real standard error of the regression, where “real” is defined as the sum (in percentage terms, say) of both systematic and random sources of uncertainty in the whole model. If Soyer and Hogarth are correct, academic journals should allocate more space to visual plots of data and less to tables of statistical significance.
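To make the point concrete, here is a minimal Python sketch in the spirit of the Soyer-Hogarth exercise (the data, coefficients, and noise level below are illustrative assumptions, not the experiment's actual materials): a slope can be highly "significant" while the residual noise that dominates any individual forecast stays large, and only the latter answers the practical question of how big x must be before the outcome is reliably positive.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)

# Simulated data: a genuinely nonzero slope, but plenty of residual noise.
n = 1000
x = rng.normal(size=n)
y = 1.0 * x + rng.normal(scale=2.0, size=n)

ols = sm.OLS(y, sm.add_constant(x)).fit()
a, b = ols.params
sigma = np.sqrt(ols.mse_resid)              # residual standard error

print(ols.summary().tables[1])              # coefficients and t-statistics
print(f"R-squared: {ols.rsquared:.2f}")

# Naive reading of the table: the fitted line crosses zero at x = -a/b, so
# any x above that "guarantees" a positive outcome.
x_naive = -a / b

# Reading that respects the noise: x large enough that a + b*x exceeds zero
# by 1.645 residual standard deviations, i.e. P(y > 0 | x) is about 95%.
x_real = (1.645 * sigma - a) / b

print(f"x for a positive fitted value:             {x_naive:.2f}")
print(f"x for a positive outcome ~95% of the time: {x_real:.2f}")
```

With these made-up numbers the two answers differ by several standard deviations of x, which is exactly the gap a scatter plot makes visible and a significance table hides.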
In the blogosphere the statistician Andrew Gelman, INET’s Robert Johnson, and journalists Justin Fox (Harvard Business Review) and Felix Salmon (Reuters) have commented favorably on Soyer and Hogarth's striking results.
But historians of economics and statistics, joined by scientists in other fields – engineering and physics, for example – will not be surprised by the power of visualizing uncertainty. As I explain in my published comment, Karl Pearson himself—a founding father of English-language statistics—tried beginning in the 1890s to make “graphing” the foundation of statistical method. Leading economists of the day such as Francis Edgeworth and Alfred Marshall sympathized strongly with the visual approach.
And as Keynes (1937, QJE) observed, in economics “there is often no scientific basis on which to form any calculable probability whatever. We simply do not know.” Examples of variables we do not know well enough to forecast include, he said, “the obsolescence of a new invention”, “the price of copper” and “the rate of interest twenty years hence” (Keynes, p. 214).
That sounds about right - despite currently fashionable claims about the role of statistical significance in finding a Higgs boson. Unfortunately, Soyer and Hogarth did not include time series forecasting in their novel experiment, though in future work I suspect they and others will.
But with extremely powerful, dynamic, and high-dimensional visualization software such as “GGobi” – which works with R and is currently available for free on-line - economists can join engineers and rocket scientists and do a lot more gazing at data than we currently do (http://www.ggobi.org).
At least, that is, if our goal is to improve decisions and to identify relationships that hit us between the eyes.
Kind regards,
Stephen T. Ziliak
Professor of Economics
Roosevelt University
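As a rough Python stand-in for the kind of data-gazing Ziliak describes above (GGobi itself is a separate tool that pairs with R), a scatter-plot matrix shows every pairwise relationship at a glance, before any regression table is produced. The dataset below is made up purely for illustration.

```python
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt

rng = np.random.default_rng(1)

# A made-up dataset: three variables related to a common factor plus one
# that is pure noise.
n = 300
x = rng.normal(size=n)
df = pd.DataFrame({
    "x": x,
    "y": 0.8 * x + rng.normal(scale=0.6, size=n),
    "z": -0.5 * x + rng.normal(scale=1.0, size=n),
    "noise": rng.normal(size=n),
})

# Every pairwise scatter plot in one figure, with densities on the diagonal.
pd.plotting.scatter_matrix(df, figsize=(7, 7), diagonal="kde")
plt.tight_layout()
plt.show()
```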

Tuesday, July 10, 2012

Fed Watch: Careful With That HP Filter

Tim Duy:

Careful With That HP Filter, by Tim Duy: David Glasner reads Marcus Nunes' review of Stephen Williamson and concludes:

Marcus Nunes, I think properly, concludes that Williamson’s graph is wrong, because Williamson ignores the fact that there was a rising trend of NGDP during the 1970s, while during the Great Moderation, NGDP was stationary... Furthermore, Scott Sumner questions whether the application of the Hodrick-Prescott filter to the entire 1947-2011 period was appropriate, given the collapse of NGDP after 2008, thereby distorting estimates of the trend…

Williamson is here, Sumner's reply is here. DeLong jogged my memory that this topic is trapped in my computer, and this seems a good time to get it out of there.

First off, I am very cautious about mixing pre- and post-1985 data because of the impact of the Great Moderation on business cycle dynamics. This applies to Jim Hamilton's reply to my thoughts about the positive impact from housing. Hamilton points out that prior to the Great Moderation, housing would make significant contributions to GDP growth as the economy jumped back to trend. True enough; Hamilton might prove correct. But I would add that large contributions prior to 1985 would typically come in the early stages of the business cycle. I don't think the same kinds of cycles are currently at play, and it is a little late to be expecting a V-shaped boost from housing.

As to the issue of the HP filter, this was on my radar because St. Louis Federal Reserve President James Bullard likes to rely on this technique to support his claim that the US economy is operating near potential. As he said today:

The housing bubble and the ensuing financial crisis probably did some lasting damage to the economy, suggesting that the output gap in the U.S. is not as large as commonly believed and that the growth rate of potential output is modest. This helps explain why U.S. growth continues to be sluggish, why U.S. inflation has remained close to target instead of dropping precipitously and why U.S. unemployment has fallen over the last year—from a level of 9.1 percent in June 2011 to 8.2 percent in June 2012.

I think there is more wrong than right in these two sentences. I don't see how a slower rate of potential growth necessarily implies lower actual growth in the short run. Clearly we have many instances of both above and below trend growth over the years. The failure of inflation to fall further can easily be explained by nominal wage rigidities. And the drop in the unemployment rate, in itself not impressive, should be taken in context with the stagnation of the labor force participation rate.

Bullard likes to rely on this chart as support:

[Bullard's chart]

For some reason, Bullard rejects entirely CBO estimates of potential output, which would reveal a smaller output gap than his linear trend decomposition. My version of this chart:

[My version of the chart]

To deal with the endpoint problem, I used a GDP forecast from an ARIMA(1,1,1) model to extend the data beyond 2012:1. If you don't deal with the endpoint problem, you get this:
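Here is a minimal sketch of that padding step with statsmodels, using a synthetic stand-in for log GDP; the series, the 12-quarter pad, and the smoothing parameter are illustrative choices of mine, not Duy's actual data or code.

```python
import numpy as np
import pandas as pd
from statsmodels.tsa.arima.model import ARIMA
from statsmodels.tsa.filters.hp_filter import hpfilter

rng = np.random.default_rng(0)

# Synthetic stand-in for log real GDP: 0.5% trend growth per quarter plus a
# persistent AR(1) cycle. Purely illustrative.
n = 260
cycle = np.zeros(n)
for i in range(1, n):
    cycle[i] = 0.85 * cycle[i - 1] + rng.normal(scale=0.006)
log_gdp = pd.Series(9.0 + 0.005 * np.arange(n) + cycle)

# Naive approach: HP filter on the raw series (lambda = 1600 for quarterly data).
_, trend_naive = hpfilter(log_gdp, lamb=1600)

# Endpoint fix in the spirit of Duy's note: pad the sample with an ARIMA(1,1,1)
# forecast, filter the extended series, then keep only the original observations.
pad = 12
arima_fit = ARIMA(log_gdp, order=(1, 1, 1)).fit()
extended = pd.concat([log_gdp, arima_fit.forecast(steps=pad)], ignore_index=True)
_, trend_padded = hpfilter(extended, lamb=1600)
trend_padded = trend_padded.iloc[:n]

# The two trend estimates differ most in the last few quarters -- the
# endpoint problem the padding is meant to soften.
print((trend_naive - trend_padded).tail(8))
```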

[HP filter trend with no endpoint correction]

I believe most people would consider this result (that output is solidly above potential) to be nonsensical. By itself, the issue of dealing with the endpoint problem should raise red flags about using the HP filter to draw policy conclusions about recent economic dynamics.
Relatedly, notice that the HP filter reveals a period of substantial above-trend growth through the middle of 2008. This should be a red flag for Bullard. If he wants to argue that steady inflation now implies that growth is close to potential, he needs to explain why inflation wasn't skyrocketing in 2005. Or 2006. Or 2007. Most importantly, we should have seen the rise in headline inflation confirmed by core inflation. The record:

[Headline and core inflation]

Core inflation remained remarkably well-behaved for an economy operating so far above potential, don't you think?
At issue is the tendency of the HP filter to generate revisionist history. Consider the view of the world using data through 2007:4:

[HP filter estimates using data through 2007:4]

Suddenly, the output gap disappears almost entirely in 2005. And 2006. And 2007. Which is much more consistent with the inflation story during that period.
Bottom Line: Use the HP filter with great caution, especially around large shocks. Such shocks will distort your estimates of the underlying trends, both before and after the shock.
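To see that distortion in miniature, here is a hedged sketch with an invented series: estimate the HP trend on data ending just before a large negative shock, then again on the full sample, and compare the implied gap for the same pre-shock quarters. The data and the size of the shock are assumptions for illustration only.

```python
import numpy as np
import pandas as pd
from statsmodels.tsa.filters.hp_filter import hpfilter

rng = np.random.default_rng(1)

# Synthetic log-output series: steady trend growth with mild noise, then a
# large permanent drop near the end of the sample (a stand-in for 2008).
n = 240
log_y = 9.0 + 0.005 * np.arange(n) + 0.002 * rng.standard_normal(n).cumsum()
log_y[200:] -= 0.06                      # the late "crisis" shock
log_y = pd.Series(log_y)

# HP trend estimated in "real time" (data ending before the shock) versus
# with the full sample that includes the shock (lambda = 1600, quarterly).
_, trend_pre = hpfilter(log_y.iloc[:200], lamb=1600)
_, trend_full = hpfilter(log_y, lamb=1600)

# Implied output gap for the same pre-shock quarters under each vintage.
gap_pre = log_y.iloc[:200] - trend_pre
gap_full = log_y.iloc[:200] - trend_full.iloc[:200]

# The late shock drags the full-sample trend down before it happens, so the
# boom years look further above trend in hindsight than they did in real time.
print(pd.DataFrame({"real-time gap": gap_pre, "revised gap": gap_full}).tail())
```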

Monday, June 25, 2012

An Overview of VAR Modeling

Some of you might be interested in this:

An Overview of VAR Modelling, by Dave Giles: ...my various posts on different aspects of VAR modelling have been quite popular. Many followers of this blog will therefore be interested in a recent working paper by Helmut Luetkepohl. The paper is simply titled, "Vector Autoregressive Models", and it provides an excellent overview by one of the leading figures in the field.
You can download the paper from here.
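For readers who have not estimated one before, a minimal VAR fit in Python looks roughly like the sketch below; the two simulated series and the lag settings are illustrative, and Luetkepohl's paper is the place to go for the real treatment of specification, estimation, and inference.

```python
import numpy as np
import pandas as pd
from statsmodels.tsa.api import VAR

rng = np.random.default_rng(2)

# Two simulated stationary series with some cross-dynamics; purely
# illustrative stand-ins for, say, growth and inflation.
n = 400
y = np.zeros((n, 2))
for i in range(1, n):
    y[i, 0] = 0.5 * y[i - 1, 0] + 0.2 * y[i - 1, 1] + rng.normal()
    y[i, 1] = 0.1 * y[i - 1, 0] + 0.4 * y[i - 1, 1] + rng.normal()
data = pd.DataFrame(y, columns=["growth", "inflation"])

# Fit a VAR with the lag length chosen by AIC, then the usual follow-ups.
results = VAR(data).fit(maxlags=4, ic="aic")
print(results.summary())

irf = results.irf(10)                                   # impulse responses
forecast = results.forecast(data.values[-results.k_ar:], steps=8)
print(forecast)
```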

Wednesday, May 16, 2012

Economic Behavior: The Effects of Individual Genetic Variants are Tiny

There was recently some discussion of "genoeconomics":

...the idea that genes had an important role to play in decision-making was largely abandoned in the world of economics. But with the completion of the Human Genome Project in 2000, the first full sequence of a human being’s genetic code, people started wondering if perhaps it would be possible to push past broad heritability estimates ... and figure out what part of a person’s genome influenced what aspect of his behavior.

However, Cornell economist Daniel Benjamin argues that the ability of genetic factors to explain individual variation in economic and political behavior is "likely to be very small" (genetic data "taken as a whole" may have some predictive power, but "molecular genetic data has essentially no predictive power"):

New evidence that many genes of small effect influence economic decisions and political attitudes, EurekAlert: Genetic factors explain some of the variation in a wide range of people's political attitudes and economic decisions – such as preferences toward environmental policy and financial risk taking – but most associations with specific genetic variants are likely to be very small, according to a new study led by Cornell University economics professor Daniel Benjamin.
The research team arrived at the conclusion after studying a sample of about 3,000 subjects with comprehensive genetic data and information on economic and political preferences. The researchers report their findings in "The Genetic Architecture of Economic and Political Preferences," published by the Proceedings of the National Academy of Sciences Online Early Edition, May 7, 2012.
The study showed that unrelated people who happen to be more similar genetically also have more similar attitudes and preferences. This finding suggests that genetic data - taken as a whole – could eventually be moderately predictive of economic and political preferences. The study also found evidence that the effects of individual genetic variants are tiny, and these variants are scattered across the genome. Given what is currently known, the molecular genetic data has essentially no predictive power for the 10 traits studied, which included preferences toward environmental policy, foreign affairs, financial risk and economic fairness.
This conclusion is at odds with dozens of previous papers that have reported large genetic associations with such traits, but the present study included ten times more participants than the previous studies.
"An implication of our findings is that most published associations with political and economic outcomes are probably false positives. These studies are implicitly based on the incorrect assumption that there are common genetic variants with large effects," said Benjamin. "If you want to find genetic variants that account for some of the differences between people in their economic and political behavior, you need samples an order of magnitude larger than those presently used," he added.
The research team concluded that it may be more productive in future research to focus on behaviors that are more closely linked to specific biological systems, such as nicotine addiction, obesity, and emotional reactivity, and are therefore likely to have stronger associations with specific genetic variants.
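Benjamin's sample-size point can be made concrete with a rough power calculation. The sketch below uses the standard Fisher-z approximation for detecting a correlation; the variance shares are illustrative assumptions rather than estimates from the paper, and alpha is set to the conventional genome-wide significance threshold.

```python
import numpy as np
from scipy.stats import norm

def n_required(r2, alpha=5e-8, power=0.80):
    """Approximate sample size to detect a correlation explaining r2 of the
    variance, via the Fisher-z approximation for a two-sided test."""
    z_effect = np.arctanh(np.sqrt(r2))
    z_alpha = norm.ppf(1 - alpha / 2)
    z_power = norm.ppf(power)
    return int(np.ceil(((z_alpha + z_power) / z_effect) ** 2 + 3))

# Illustrative variance shares for a single variant (assumptions, not the
# paper's estimates); even a variant explaining 0.01% of the variance calls
# for hundreds of thousands of subjects at genome-wide significance.
for r2 in (0.01, 0.001, 0.0001):
    print(f"variant explaining {r2:.2%} of variance: N ~ {n_required(r2):,}")
```

Against the roughly 3,000 subjects in the study's sample, the last line of output illustrates why common variants with large effects would already have been found, and why reported large associations are suspect.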

Tuesday, May 15, 2012

Links for 2012-05-15