Final Exam for Winter 2006 (there is no solution for this one).

Posted by Mark Thoma on Sunday, February 24, 2008 at 07:50 PM in Finals, Winter 2006 | Permalink | Comments (0) | TrackBack (0)

I am grading the projects as fast as I can without cutting corners and hope to have them done today sometime. That means I will turn grades in for sure by tomorrow, maybe even tonight but I can't promise that.

I am encouraged in reading the projects at how many of you seemed to "get it" while doing these. Time and again people are describing these as a useful learning experience, more so than I would have guessed, and it is showing up in how the projects came out. I know there were frustrations, but I was pleased to see how many of you worked your way through them.

Thanks for putting in the effort.

Posted by Mark Thoma on Saturday, March 25, 2006 at 03:41 PM in Winter 2006 | Permalink | Comments (0) | TrackBack (0)

David Altig's post at macroblog on the relationship between shocks to the producer price index (PPI) and changes in the consumer price index (CPI) piqued my interest [originally here]:

macroblog: The February PPI -- Hot Or Not?: ...A few years back, Jonathon Weinhagen took a look at what we think we know about the relationship between producer prices and broader measures of consumer prices. If terms like "VAR", "variance decompositions", and "impulse responses" mean something to you, you may want to take a look at his article, which appeared in the November 2002 edition of the Monthly Labor Review. If that doesn't sound too interesting to you, here is what Jonathon concluded:

Several authors have investigated the causal relationship between commodity prices and consumer inflation... The common finding in the majority of these studies was that the power of commodity prices to predict CPI inflation has diminished since the 1980s...

To take a quick look at this issue, I downloaded data on real GDP, the federal funds rate, the CPI for all items, the core CPI (less food and energy), the all item PPI, the crude goods PPI, the intermediate goods PPI, and the finished goods PPI from the St. Louis Fed web site (FRED). All data except the federal funds are logged, and the PPI and CPI indices are differenced to obtain inflation rates. Real GDP enters in levels, but using differences does not change the results meaningfully.

These data are used to estimate a VAR model. For those who are unfamiliar with these models, they are general reduced form models of the form:

y_{t} = A_{0} + A_{1}y_{t-1} + ... + A_{L}y_{t-L} + u_{t}

where y_{t} is the vector of variables listed above, the A_{i} are coefficient matrices, u_{t} is a vector of error terms, and *L* is the number of lags; 2 lags are used here. These
models are able, with some assumptions about the underlying structure
that aren't apparent from these equations (on that issue, this uses a
Choleski decomposition and the ordering is as shown), to show how each
variable in the system responds to a shock to other variables. Various
definitions of the CPI (all items and core) and the PPI (all items,
crude, intermediate, and finished goods) are used. Here are the results
showing how the CPI variously defined responds to shocks to each of the
definitions of the PPI. The horizontal axis shows the number of
quarters after the shock. The graphs are called impulse response
functions:

The main result, at least in this particular specification of the empirical model, is that both core and overall inflation (as measured by changes in the CPI or core CPI) are least responsive to shocks to inflation in crude materials prices. In addition, the response of core inflation to shocks to input price inflation is more persistent than the response of overall inflation. The paper David cites notes differences in the results by sample period, and the results shown here are for the entire available sample, 1959:Q1 to 2005:Q4 with allowances for lags, so sub-sample results may show differences. This does, however, agree with the basic result from the paper that shocks to input prices at earlier stages in processing, in this case crude materials, have a smaller impact on output prices than shocks to prices at later stages of production.
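For those who want to experiment with this kind of exercise, here is a minimal sketch of reduced-form VAR estimation by equation-by-equation OLS in Python (NumPy only; this is not part of the original post, and the synthetic data are stand-ins for the logged and differenced series described above):

```python
import numpy as np

def fit_var(data, lags=2):
    """Equation-by-equation OLS for a reduced-form VAR: each variable is
    regressed on a constant and `lags` lags of every variable."""
    T, k = data.shape
    Y = data[lags:]
    X = np.hstack([np.ones((T - lags, 1))] +
                  [data[lags - j:T - j] for j in range(1, lags + 1)])
    B, *_ = np.linalg.lstsq(X, Y, rcond=None)   # (1 + k*lags, k) coefficients
    resid = Y - X @ B
    return B, resid

rng = np.random.default_rng(0)
data = rng.standard_normal((188, 4))  # roughly 1959:Q1-2005:Q4 of quarterly data
B, resid = fit_var(data, lags=2)
print(B.shape)  # (9, 4): constant plus 2 lags of 4 variables, per equation
```

Computing the impulse response functions additionally requires orthogonalizing the residual covariance matrix (the Choleski decomposition mentioned above, with the chosen ordering) and iterating the estimated system forward.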

Posted by Mark Thoma on Wednesday, March 22, 2006 at 09:50 PM in Additional Reading, Winter 2006 | Permalink | Comments (0) | TrackBack (0)

I will be in my office from 1-2:30 tomorrow (Sunday) to answer questions.

Posted by Mark Thoma on Saturday, March 18, 2006 at 01:58 PM in Winter 2006 | Permalink | Comments (0) | TrackBack (0)

[pdf file]

**Midterm 1 will cover the following topics:**

- The two uses of regression models
- The assumptions underlying the Gauss-Markov Theorem
- The Gauss-Markov Theorem and what BLUE means
- Reasons for an error in a regression model
- How t-tests and z-tests differ (when to use each one)
- Single hypothesis tests: z-tests and t-tests
- Multiple hypothesis tests: F-tests and Chi-Square tests
- [Should be able to do hypothesis tests with these four distributions]
- Functional form: changes in units
- Functional form: log-linear, semi-log, reciprocal, and log reciprocal models (see page 190 for a summary).
- Use of dummy variables in regression models
- Dummy variable trap
- Chow tests and dummy variables
- Piecewise linear regression
- Model selection criteria
- Types of specification errors
- Consequences of omitting a relevant variable
- Consequences of including an irrelevant variable
- LM test for adding variables to a regression model
- How to do an AIC test
- Consequences of errors in measuring y
- Consequences of errors in measuring the x variables.
- Perfect and imperfect multicollinearity
- Consequences of multicollinearity
- Detection of multicollinearity
- What to do about multicollinearity
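One item on this list, the dummy variable trap, is easy to demonstrate numerically. A quick illustration in Python (NumPy; the data are made up): with an intercept plus a dummy for every category, the design matrix is rank deficient, which is exactly the perfect multicollinearity the trap refers to.

```python
import numpy as np

n = 10
male = np.array([1, 0] * 5)
female = 1 - male                     # the two dummies sum to the intercept column
X_trap = np.column_stack([np.ones(n), male, female])  # intercept + both dummies
X_ok = np.column_stack([np.ones(n), male])            # drop one dummy instead

print(np.linalg.matrix_rank(X_trap))  # 2, but X_trap has 3 columns: X'X is singular
print(np.linalg.matrix_rank(X_ok))    # 2, full column rank: OLS works
```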

**Here's an outline of the material for the second exam:**

Heteroskedasticity (Chapter 11)

- How it is defined
- How it might arise
- Effect on estimator, test statistics, etc. if OLS used
- Testing for Heteroskedasticity

- Graphs (suggestive only)
- Lagrange multiplier tests

- (i) Breusch-Pagan
- (ii) Glejser
- (iii) Park
- Goldfeld-Quandt test
- White’s test (recommended if N large enough)

- Estimation procedures

- White’s correction
- GLS
- FGLS (Feasible GLS)

- (i) Known proportional factor
- (ii) Breusch-Pagan
- (iii) Glejser
- (iv) Park

Autocorrelation (Chapter 12)

- How it is defined and how it expresses itself in a regression model

- Show how corr(u_{t}, u_{t-s}) changes with s
- How serial correlation might arise
- Effect on estimator, test statistics, etc. if OLS used
- Testing for autocorrelation

- Graphs (suggestive only)
- Durbin-Watson Test

- Show statistic is between 0 and 4
- Advantages and disadvantages relative to Breusch-Godfrey
- Breusch-Godfrey LM test

- Estimation procedures

- GLS
- FGLS (Feasible GLS)

- Cochrane-Orcutt
- Grid search
- Note: I didn't quite finish this section and will add more after the exam.
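As a small illustration of the Durbin-Watson material above, the statistic can be computed directly from the residuals; this Python/NumPy sketch (not part of the course materials, with simulated residuals) shows the familiar "near 2 means no first-order autocorrelation" benchmark:

```python
import numpy as np

def durbin_watson(resid):
    """DW = sum of squared successive residual differences over the SSR.
    The statistic lies between 0 and 4: near 0 suggests positive serial
    correlation, near 4 negative, and near 2 none."""
    return np.sum(np.diff(resid) ** 2) / np.sum(resid ** 2)

rng = np.random.default_rng(1)
white = rng.standard_normal(500)          # serially uncorrelated residuals
ar = np.zeros(500)
for t in range(1, 500):                   # strongly positively autocorrelated
    ar[t] = 0.9 * ar[t - 1] + rng.standard_normal()

print(durbin_watson(white))  # close to 2
print(durbin_watson(ar))     # well below 2
```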

**Material since Exam #2**

Chapter 12 [cont.]

- Use of lagged dependent variables to solve the autocorrelation problem

Simultaneous Equation Models (Chapter 18)

- Simultaneous equation models

- endogenous, exogenous, and predetermined variables
- structural models and reduced forms
- simultaneous equation bias

Identification (Chapter 19)

- Explaining identification intuitively with, say, a supply and demand model
- Under, exact, and overidentification
- Rules for identification

- Order condition explicitly
- Rank condition intuitively

Estimation of Simultaneous Equation Models (Chapter 20)

- Indirect least squares (ILS)

- Exact identification (works)
- Over or under identified (won't work)
- Two-stage least squares (2SLS)

- Exact or over identified (works)
- Under identification (won't work)
- Stage 1
- Stage 2
- Standard errors
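The two stages of 2SLS listed above can be sketched in a few lines of Python (NumPy; the data are simulated, so the variable names and the true coefficients here are invented for illustration):

```python
import numpy as np

def two_stage_ls(y, X, Z):
    """Stage 1: project the regressors X onto the instruments Z.
    Stage 2: OLS of y on the stage-1 fitted values."""
    X_hat = Z @ np.linalg.lstsq(Z, X, rcond=None)[0]
    b, *_ = np.linalg.lstsq(X_hat, y, rcond=None)
    return b

rng = np.random.default_rng(2)
n = 5000
z = rng.standard_normal(n)                # instrument: correlated with x, not with u
u = rng.standard_normal(n)
x = z + 0.8 * u + rng.standard_normal(n)  # endogenous regressor (correlated with u)
y = 2.0 + 0.5 * x + u                     # true intercept 2, true slope 0.5

X = np.column_stack([np.ones(n), x])
Z = np.column_stack([np.ones(n), z])
b_ols, *_ = np.linalg.lstsq(X, y, rcond=None)
b_2sls = two_stage_ls(y, X, Z)
print(b_ols[1])   # biased upward by the simultaneity, roughly 0.8
print(b_2sls[1])  # close to the true 0.5
```

This is the simultaneous equation bias in miniature: OLS on the structural equation is inconsistent, while 2SLS recovers the true slope because the instrument is uncorrelated with the error.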

Posted by Mark Thoma on Friday, March 17, 2006 at 03:09 AM in Review, Winter 2006 | Permalink | Comments (0) | TrackBack (0)

**Today:**

Finish chapter 20

**Next Time**:

Review

Posted by Mark Thoma on Monday, March 13, 2006 at 07:43 PM in Winter 2006 | Permalink | Comments (0) | TrackBack (0)

I thought some of you might want to read this. It's Paul Krugman's introduction to Keynes' *General Theory of Employment,
Interest, and Money*:

Introduction by Paul Krugman to *The General Theory of Employment, Interest, and Money*, by John Maynard Keynes

SYNOPSIS:

Introduction

In the spring of 2005 a panel of “conservative scholars and policy leaders” was asked to identify the most dangerous books of the 19th and 20th centuries. You can get a sense of the panel’s leanings by the fact that both Charles Darwin and Betty Friedan ranked high on the list. But *The General Theory of Employment, Interest, and Money* did very well, too. In fact, John Maynard Keynes beat out V.I. Lenin and Frantz Fanon. Keynes, who declared in the book’s oft-quoted conclusion that “soon or late, it is ideas, not vested interests, which are dangerous for good or evil,” [384] would probably have been pleased.

Over the past 70 years *The General Theory* has shaped the views even of those who haven’t heard of it, or who believe they disagree with it. A businessman who warns that falling confidence poses risks for the economy is a Keynesian, whether he knows it or not. A politician who promises that his tax cuts will create jobs by putting spending money in peoples’ pockets is a Keynesian, even if he claims to abhor the doctrine. Even self-proclaimed supply-side economists, who claim to have refuted Keynes, fall back on unmistakably Keynesian stories to explain why the economy turned down in a given year.

In this introduction I’ll address five issues concerning *The General Theory*. First is the book’s message – something that ought to be clear from the book itself, but which has often been obscured by those who project their fears or hopes onto Keynes. Second is the question of how Keynes did it: why did he succeed, where others had failed, in convincing the world to accept economic heresy? Third is the question of how much of *The General Theory* remains in today’s macroeconomics: are we all Keynesians now, or have we either superseded Keynes’s legacy, or, some say, betrayed it? Fourth is the question of what Keynes missed, and why. Finally, I’ll talk about how Keynes changed economics, and the world.

Continue reading "Paul Krugman's Introduction to Keynes' General Theory" »

Posted by Mark Thoma on Friday, March 10, 2006 at 10:27 PM in Additional Reading, Winter 2006 | Permalink | Comments (0) | TrackBack (0)

Now that you have done all of this, how should you write up the results? Since we are looking for competent application of the techniques discussed in the class, organizing around the class structure will help us:

**1. Introduction**

Introduce the problem and discuss the question you are trying to answer with your empirical project.

**2. Theory and Hypotheses**

Discuss the theoretical model and state the hypotheses you are going to test. You should state significance levels you will use as well.

**3. Empirical Model and Data**

Present the empirical model you are using to test your hypotheses. This is
where all the specification issues we talked about should be addressed. For
example, did you log your data? Are there any important omitted variables? What
is the consequence? Did you use tests to see if variables you weren’t sure about
belong in the model? Did you use dummies? Are measurement errors a problem? For
a list of these issues, see
__List of Study Topics for Exam__. Discuss
the data and data sources also.

**4. Violations of Assumptions**

At this point, you have a basic empirical model specified and you should now worry about violations of the Gauss-Markov conditions. The goal is to test for each problem, then either correct for it if it exists or describe how you would have corrected the model had you found it. The problems to cover are multicollinearity, heteroskedasticity, and autocorrelation; see here for the list.

**5. Results**

After specifying the model and checking to see if problems exist, you are now ready to present estimates of your final model. After presenting the estimates, you should interpret the coefficients. What do they tell you? This is also the section where you should present the tests and test results for the hypotheses you are examining.

**6. Conclusion**

What did you learn? Did the data support your hypotheses? What would you do next to follow up?

**UPDATE**: One final step:

**7. Data and Programs**

In an appendix to the paper, please include a copy of the data you used. Try to fit it into two pages or less, and feel free to use smaller fonts to make it fit (please do). If you cannot do that because it is too large, and only then, send me an email with an attachment that includes the data set before class on the day the project is due. Make the subject of the email: Data for Empirical Project. I will sort on this subject to find them. ~~In addition, please include the programs you used to run regressions, tests, etc.~~

Posted by Mark Thoma on Wednesday, March 08, 2006 at 07:58 PM in Empirical Project, Winter 2006 | Permalink | Comments (0) | TrackBack (0)

**Today:**

Finish chapter 19, begin chapter 20

**Next Time**:

Finish chapter 20

Posted by Mark Thoma on Wednesday, March 08, 2006 at 07:42 PM in Winter 2006 | Permalink | Comments (0) | TrackBack (0)

Posted by Mark Thoma on Wednesday, March 08, 2006 at 12:35 PM in Midterms, Winter 2006 | Permalink | Comments (0) | TrackBack (0)

I think it would be a good idea to extend the due date on the last homework until Friday of week 10 (3/17/06) at 5:00 p.m. so that you have time after the paper is due to work on it.

**Homework 8 is due Friday, March 17 by 5:00 p.m. at my office**. If I'm not there, slide the project under my door. Of course, you can turn it in anytime before that as well. The office is PLC 471.

Posted by Mark Thoma on Wednesday, March 08, 2006 at 01:17 AM in Winter 2006 | Permalink | Comments (0) | TrackBack (0)

**Today:**

Finish chapter 18, begin chapter 19

**Next Time**:

Finish chapter 19, begin chapter 20

Posted by Mark Thoma on Monday, March 06, 2006 at 10:23 PM in Winter 2006 | Permalink | Comments (0) | TrackBack (0)

| Score | Grade |
| --- | --- |
| 99 | A |
| 98 | A |
| 98 | A |
| 96 | A |
| 96 | A |
| 95 | A |
| 94 | A |
| 93 | A |
| 91 | A- |
| 91 | A- |
| 90 | A- |
| 89 | A- |
| 88 | A- |
| 88 | A- |
| 88 | A- |
| 88 | A- |
| 85 | B+ |
| 84 | B+ |
| 84 | B+ |
| 82 | B+ |
| 81 | B+ |
| 81 | B+ |
| 80 | B+ |
| 80 | B+ |
| 79 | B |
| 79 | B |
| 78 | B |
| 77 | B |
| 77 | B |
| 77 | B |
| 76 | B |
| 76 | B |
| 75 | B |
| 74 | B |
| 73 | B |
| 71 | B- |
| 69 | B- |
| 68 | B- |
| 68 | B- |
| 67 | B- |
| 67 | B- |
| 67 | B- |
| 67 | B- |
| 66 | C+ |
| 65 | C+ |
| 65 | C+ |
| 65 | C+ |
| 64 | C+ |
| 63 | C+ |
| 62 | C+ |
| 62 | C+ |
| 61 | C |
| 60 | C |
| 60 | C |
| 60 | C |
| 60 | C |
| 60 | C |
| 59 | C- |
| 55 | C- |
| 55 | C- |
| 54 | C- |
| 54 | C- |
| 52 | C- |
| 52 | C- |
| 48 | D |
| 46 | D |
| 45 | D |
| 43 | D |
| 42 | D |
| 42 | D |
| 41 | D |
| 40 | D |
| 36 | F |
| 35 | F |
| 29 | F |

Posted by Mark Thoma on Monday, March 06, 2006 at 11:39 AM in Winter 2006 | Permalink | Comments (0) | TrackBack (0)

Economics 421/521

Winter 2006

Homework #8

[pdf file]

1. Problem 19.7

2. Problem 20.3

3. Problem 20.6 parts a and b

4. Problem 20.10 [Table
20.2 data set]

5. Consider the following simple Keynesian macroeconomic model of the U.S. economy. [Macro data set]

Y_{t} = CO_{t} + I_{t} + G_{t} + NX_{t}

CO_{t} = β_{0} + β_{1}YD_{t} + β_{2}CO_{t-1} + ε_{1t}

YD_{t} = Y_{t} – T_{t}

I_{t} = β_{3} + β_{4}Y_{t} + β_{5}r_{t-1} + ε_{2t}

r_{t} = β_{6} + β_{7}Y_{t} + β_{8}M_{t} + ε_{3t}

where:

Y_{t} = gross domestic product (GDP) in year t

CO_{t}= total personal consumption in year t

I_{t}= total gross private domestic investment in year t

G_{t}= government purchases of goods and services in year t

NX_{t}= net exports of goods and services (exports - imports) in year t

T_{t}= taxes in year t

r_{t}= the interest rate in year t

M_{t}= the money supply in year t

YD_{t}= disposable income in year t

(a) Which variables are endogenous?

(b) Which variables are predetermined?

(c) Using OLS, estimate equations for CO_{t} and I_{t}.

(d) Using 2SLS, estimate equations for CO_{t} and I_{t}.

Posted by Mark Thoma on Sunday, March 05, 2006 at 04:10 PM in Homework, Winter 2006 | Permalink | Comments (0) | TrackBack (0)

**Today:**

Begin chapter 18

**Next Time**:

Finish chapter 18, begin chapter 19

Posted by Mark Thoma on Wednesday, March 01, 2006 at 03:19 PM in Winter 2006 | Permalink | Comments (0) | TrackBack (0)

Economics 421/521

Winter 2006

Homework #7

[pdf file]

1. Continuing with problem 12.34 from the last homework, correct the model for autocorrelation using the Cochrane-Orcutt procedure. [Table 12.9 data set]

2. Show that adding lagged values of ~~sales~~ inventories can overcome the
autocorrelation problem.

3. I’m making this shorter than usual so you can work on your projects.

Posted by Mark Thoma on Tuesday, February 28, 2006 at 12:52 PM in Homework, Winter 2006 | Permalink | Comments (0) | TrackBack (0)

**Today:**

Midterm

**Next Time**:

Begin chapter 18

**Note**: We are a bit behind, so we will skip chapter 15.

Posted by Mark Thoma on Monday, February 27, 2006 at 01:07 PM in Winter 2006 | Permalink | Comments (0) | TrackBack (0)

Posted by Mark Thoma on Sunday, February 26, 2006 at 01:43 PM in Review, Winter 2006 | Permalink | Comments (3) | TrackBack (0)

**Note**: Weighted least squares means to divide through by the estimated standard error.
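In code, that note amounts to the following Python/NumPy sketch (not part of the original post; the data are simulated, and the error standard deviation is assumed, for illustration, to be proportional to the regressor):

```python
import numpy as np

rng = np.random.default_rng(5)
n = 3000
x = rng.uniform(1, 5, n)
sigma = x                                   # assumed form of the standard error
y = 1.0 + 2.0 * x + sigma * rng.standard_normal(n)

X = np.column_stack([np.ones(n), x])
# Weighted least squares: divide every variable, including the ones
# column, through by the (estimated) standard error, then run OLS.
Xw, yw = X / sigma[:, None], y / sigma
b_wls, *_ = np.linalg.lstsq(Xw, yw, rcond=None)
print(b_wls)  # close to the true coefficients (1, 2)
```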

Posted by Mark Thoma on Sunday, February 26, 2006 at 01:38 PM in Review, Winter 2006 | Permalink | Comments (0) | TrackBack (0)

There seems to be confusion over the LM tests for Heteroskedasticity. They are Chi-Square tests the way we learned them, not t-tests and not F-tests. Here are the procedures (the tests are named a bit differently here):
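A concrete way to see the Chi-Square point: in the nR² (LM) form of the Breusch-Pagan test, you regress the squared residuals on the explanatory variables and compare n times the R² of that auxiliary regression to a Chi-Square critical value. A sketch in Python (NumPy; simulated data, not part of the original post; 3.84 is the 5% critical value for Chi-Square with 1 degree of freedom):

```python
import numpy as np

def breusch_pagan_lm(resid, X):
    """nR^2 form of the Breusch-Pagan LM test: regress squared residuals
    on X; under homoskedasticity LM is Chi-Square with (cols of X - 1) df."""
    e2 = resid ** 2
    fitted = X @ np.linalg.lstsq(X, e2, rcond=None)[0]
    r2 = 1 - np.sum((e2 - fitted) ** 2) / np.sum((e2 - e2.mean()) ** 2)
    return len(resid) * r2

rng = np.random.default_rng(4)
n = 500
x = rng.standard_normal(n)
X = np.column_stack([np.ones(n), x])
u = np.exp(x) * rng.standard_normal(n)      # error variance depends on x
y = 1.0 + 2.0 * x + u
resid = y - X @ np.linalg.lstsq(X, y, rcond=None)[0]

lm = breusch_pagan_lm(resid, X)
print(lm > 3.84)  # True: reject homoskedasticity at the 5% level
```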

Posted by Mark Thoma on Friday, February 24, 2006 at 01:24 PM in Review, Winter 2006 | Permalink | Comments (0) | TrackBack (0)

The Cochrane-Orcutt procedure:

**One note**: In step 5 when it says to use the estimated betas obtained in step 4 in equation (9.5), this means to go back to the original equation (9.5) and find u_{t} = Y_{t} - b_{1} - b_{2}X_{2t} - ... - b_{k}X_{kt}, where the b's are the estimated betas using the transformed ("starred") variables in step 4.
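Here is the step the note describes, sketched in Python (NumPy; simulated data, not part of the original post). The residuals come from the original equation using the current betas, ρ is re-estimated from those residuals, and the ρ-differenced regression is re-run:

```python
import numpy as np

def cochrane_orcutt_step(y, X, b):
    """One Cochrane-Orcutt iteration: residuals from the ORIGINAL equation
    with the current betas, rho from regressing u_t on u_{t-1}, then OLS
    on the rho-differenced ('starred') variables."""
    u = y - X @ b                               # u_t = Y_t - b_1 - b_2 X_2t - ...
    rho = (u[:-1] @ u[1:]) / (u[:-1] @ u[:-1])
    y_star = y[1:] - rho * y[:-1]
    X_star = X[1:] - rho * X[:-1]               # ones column becomes (1 - rho)
    b_new, *_ = np.linalg.lstsq(X_star, y_star, rcond=None)
    return rho, b_new

rng = np.random.default_rng(3)
n = 400
x = rng.standard_normal(n)
u = np.zeros(n)
for t in range(1, n):                           # AR(1) errors with rho = 0.7
    u[t] = 0.7 * u[t - 1] + rng.standard_normal()
y = 1.0 + 2.0 * x + u
X = np.column_stack([np.ones(n), x])

b, *_ = np.linalg.lstsq(X, y, rcond=None)       # start from plain OLS betas
for _ in range(5):                              # iterate until rho settles down
    rho, b = cochrane_orcutt_step(y, X, b)
print(rho)  # close to the true 0.7
```

Because the ones column in X_star takes the value (1 - ρ), the coefficient returned for it is already on the original scale, so the betas can be plugged straight back into the original equation at the next iteration.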

Posted by Mark Thoma on Friday, February 24, 2006 at 12:53 PM in Review, Winter 2006 | Permalink | Comments (2) | TrackBack (0)

Here's an outline of the material for the second exam:

Heteroskedasticity (Chapter 11)

- How it is defined
- How it might arise
- Effect on estimator, test statistics, etc. if OLS used
- Testing for Heteroskedasticity

- Graphs (suggestive only)
- Lagrange multiplier tests

- (i) Breusch-Pagan
- (ii) Glejser
- (iii) Park
- Goldfeld-Quandt test
- White’s test (recommended if N large enough)

- Estimation procedures

- White’s correction
- GLS
- FGLS (Feasible GLS)

- (i) Known proportional factor
- (ii) Breusch-Pagan
- (iii) Glejser
- (iv) Park

Autocorrelation (Chapter 12)

- How it is defined and how it expresses itself in a regression model

- Show how corr(u_{t}, u_{t-s}) changes with s
- How serial correlation might arise
- Effect on estimator, test statistics, etc. if OLS used
- Testing for autocorrelation

- Graphs (suggestive only)
- Durbin-Watson Test

- Show statistic is between 0 and 4
- Advantages and disadvantages relative to Breusch-Godfrey
- Breusch-Godfrey LM test

- Estimation procedures

- GLS
- FGLS (Feasible GLS)

- Cochrane-Orcutt
- Grid search
- Note: I didn't quite finish this section and will add more after the exam.

Posted by Mark Thoma on Thursday, February 23, 2006 at 12:09 PM in Review, Winter 2006 | Permalink | Comments (0) | TrackBack (0)

**Today:**

Finish chapter 12

**Next Time**:

Midterm

Posted by Mark Thoma on Wednesday, February 22, 2006 at 01:05 PM in Winter 2006 | Permalink | Comments (0) | TrackBack (0)

**Today:**

Begin chapter 12

**Next Time**:

Continue chapter 12

Posted by Mark Thoma on Monday, February 20, 2006 at 01:03 PM in Winter 2006 | Permalink | Comments (0) | TrackBack (0)

Economics 421/521

Winter 2006

Homework #6

[PDF
version]

12.1 parts a, b, c, d, e, f, and g.

12.3 part a.

12.26 parts a, b, and c. [Table 12.7 data set]

12.34 parts a, b(i), d, e, and f. [Table 12.9 data set]

Posted by Mark Thoma on Monday, February 20, 2006 at 09:50 AM in Homework, Winter 2006 | Permalink | Comments (0) | TrackBack (0)

What we've covered since the midterm:

Heteroskedasticity (Chapter 11)

- How it is defined
- How it might arise
- Effect on estimator, test statistics, etc. if OLS used
- Testing for Heteroskedasticity

- Graphs (suggestive only)
- Lagrange multiplier tests

- (i) Breusch-Pagan
- (ii) Glejser
- (iii) Park
- Goldfeld-Quandt test
- White’s test (recommended if N large enough)

- Estimation procedures

- White’s correction
- GLS
- FGLS (Feasible GLS)

- (i) Known proportional factor
- (ii) Breusch-Pagan
- (iii) Glejser
- (iv) Park

Posted by Mark Thoma on Thursday, February 16, 2006 at 05:06 PM in Review, Winter 2006 | Permalink | Comments (0) | TrackBack (0)

**Today:**

Finish chapter 11.

**Next Time**:

Begin chapter 12 (to approx. page 475)

Posted by Mark Thoma on Wednesday, February 15, 2006 at 05:22 PM in Winter 2006 | Permalink | Comments (0) | TrackBack (0)

**Today:**

Continue chapter 11 through page 415.

**Next Time**:

Finish chapter 11.

Posted by Mark Thoma on Monday, February 13, 2006 at 05:19 PM in Winter 2006 | Permalink | Comments (0) | TrackBack (0)

Posted by Mark Thoma on Monday, February 13, 2006 at 05:09 PM in Midterms, Winter 2006 | Permalink | Comments (0) | TrackBack (0)

Economics 421/521

Winter 2006

Homework #5

[PDF version]

1. Suppose you want to know the determinants of new housing starts and you decide to estimate the following equation using the housing.xls data set:

log(Housing_{i}) = α + β_{1}log(GNP_{i}) + β_{2}log(Pop_{i}) + β_{3}log(Unemp_{i}) + β_{4}log(Intrate_{i}) + ε_{i}

where Housing_{i} = total new housing starts, GNP_{i} = gross national product, Pop_{i} = population, Unemp_{i} = the average annual unemployment rate, and Intrate_{i} = the average annual new mortgage rate.

a) Is there any reason to suspect multicollinearity?

b) Run your regression. What signs of multicollinearity do you detect?

c) Describe how the multicollinearity might be overcome.

2. Problem 10-27 on page 382.

3. Using the consumption.xls data set containing quarterly data from 1974Q1 – 2002Q4 for South Korea, consider the following equation:

C_{t} = α + βY_{t} + u_{t}

where C = Real consumption and Y = Real income.

(a) Estimate the equation.

(b) Test this model for heteroskedasticity using Park’s test.

(c) Test this model for heteroskedasticity using White’s test.

(d) Use feasible GLS (also called WLS) to correct for the heteroskedasticity.

4. Question 11.2 , parts a, b, and c.

Posted by Mark Thoma on Sunday, February 12, 2006 at 09:28 PM in Homework, Winter 2006 | Permalink | Comments (0) | TrackBack (0)

**Today:**

Begin chapter 11

**Next Time**:

Finish chapter 11

Posted by Mark Thoma on Wednesday, February 08, 2006 at 01:41 PM in Winter 2006 | Permalink | Comments (0) | TrackBack (0)

Andrew Samwick of Dartmouth (in 2003 and 2004, he served as the chief economist on the staff of the President's Council of Economic Advisers) and I just finished
an Econoblog for the *Wall Street Journal Online*. The issue we were
asked to address is:

WSJ Econoblog: Stitching a New Safety Net: For many years, workers could manage their medical expenses with employer-provided health insurance and Medicare and look forward to underwriting their golden years with payments from a defined-benefit pension and Social Security.

But the landscape of social insurance is shifting. Many large corporations are moving their employees from traditional pensions to riskier 401(k)s and asking workers to pay more out of their own pockets for health insurance. At the same time, Social Security and Medicare, the two venerable entitlement programs, are facing growing demographic strains as the vast baby boom generation reaches retirement age.

The Wall Street Journal Online asked economist bloggers Mark Thoma and Andrew Samwick to explore how we arrived at this point and discuss what workers and retirees might expect in the future, as the composition of the social safety net continues to shift.

Here's the free link once again. And thanks to Andrew for an enjoyable discussion. [Originally posted here]

Posted by Mark Thoma on Tuesday, February 07, 2006 at 03:52 PM in Additional Reading, Winter 2006 | Permalink | Comments (0) | TrackBack (0)

**Today:**

Midterm

**Next Time**:

Chapter 11

Posted by Mark Thoma on Monday, February 06, 2006 at 02:18 PM in Winter 2006 | Permalink | Comments (0) | TrackBack (0)

**Today:**

Covered empirical project steps (see below).

Announced new due date for project summary of 2/13/06. It should cover steps 1-4 (data sources at least).

Chapter 10.

**Next Time**:

Midterm

**Empirical Project Outline**

1. Statement of theory or hypothesis

2. Specification of the mathematical (theoretical) model.

3. Specification of the econometric model

4. Obtain data

5. Estimate the econometric model

6. Test hypotheses

7. Forecasting or prediction

These are covered in chapter 1 of the text (begins on pg. 4).

Posted by Mark Thoma on Wednesday, February 01, 2006 at 08:01 PM in Empirical Project, Winter 2006 | Permalink | Comments (0) | TrackBack (0)

The midterm will cover the following topics:

The two uses of regression models

The assumptions underlying the Gauss-Markov Theorem

The Gauss-Markov Theorem and what BLUE means

Reasons for an error in a regression model

How t-tests and z-tests differ (when to use each one)

Single hypothesis tests: z-tests and t-tests

Multiple hypothesis tests: F-tests and Chi-Square tests

[Should be able to do hypothesis tests with these four distributions]

Functional form: changes in units

Functional form: log-linear, semi-log, reciprocal, and log reciprocal models (see page 190 for a summary).

Use of dummy variables in regression models

Dummy variable trap

Chow tests and dummy variables

Piecewise linear regression

Model selection criteria

Types of specification errors

Consequences of omitting a relevant variable

Consequences of including an irrelevant variable

LM test for adding variables to a regression model

How to do an AIC test

Consequences of errors in measuring y

Consequences of errors in measuring the x variables.

Perfect and imperfect multicollinearity

Consequences of multicollinearity

Detection of multicollinearity

What to do about multicollinearity

Posted by Mark Thoma on Tuesday, January 31, 2006 at 07:47 PM in Review, Winter 2006 | Permalink | Comments (0) | TrackBack (0)

Economics 421/521

Winter 2006

Homework #4

[PDF FILE]

1. Using the data set on non-borrowed bank reserves and the federal funds rate (monthly from 1959:1 – 2005:12), estimate the following models. (a) Regress the federal funds rate on non-borrowed reserves for the entire sample. How well does the model fit? (b) Split the sample at 1982 and use a dummy variable specification to allow both the slope and the intercept to change at the break point (1959:1-1981:12 and 1982:1-2005:12). (c) Estimate a piecewise linear model with a break at 1982. (d) Which of the three models do you prefer? Can you think of any omitted variables that might bias the results? Explain.

2. Consider the following equation for the annual consumption of chicken in the United States:

Y_{t} = a + b_{1}PC_{t} + b_{2}PB_{t} + b_{3}YD_{t} + u_{t}

where Y_{t} = per capita chicken consumption (in pounds) in year t, PC_{t} = the price of chicken (in cents per pound) in year t, PB_{t} = the price of beef (in cents per pound) in year t, and YD_{t} = U.S. per capita disposable income (in hundreds of dollars) in year t. (a) Using the data set on chicken consumption, estimate the equation using OLS and test the hypothesis that the price of beef has a positive impact on per capita chicken consumption at the 10% level of significance. (b) Is the coefficient of the per capita disposable income variable statistically significant at the 10% level? (c) Estimate the equation without YD_{t}. Which model do you prefer, the model with YD_{t} or the model without YD_{t}? Why? (e) Regress Y_{t} on PC_{t}. Is the coefficient on PC_{t} unbiased? Explain. (f) Estimate the following equation:

lnY_{t} = a + b_{1}lnPC_{t} + b_{2}lnPB_{t} + b_{3}lnYD_{t} + u_{t}

where ln is the natural logarithm. What is the interpretation
of the coefficient on lnYD_{t}? Which model do you prefer, the linear
model or the double-log model?

3. Assume that you’ve been hired by the surgeon general of the United States to study the determinants of smoking behavior and that you estimate the following cross-sectional model based on data for 1988 from all 50 states (standard errors in parentheses):

Ĉ_{i} = 100 – 9.0E_{i} + 1.0I_{i} – 0.04T_{i} – 3.0V_{i} + 1.5R_{i}

(36.) (3.5) (.75) (0.05) (.90) (0.3)

R^{2} = .57 n = 50 (states)

where C_{i} = the number of cigarettes consumed per
day per person in the ith state, E_{i} = the average years of education
for persons over 21 in the ith state, I_{i} = the average income in the
ith state (thousands of dollars), T_{i} = the tax per package of
cigarettes in the ith state (cents), V_{i} = the number of video ads
against smoking aired on the three major networks in the ith state, and R_{i}
= the number of radio ads against smoking aired on the five largest radio
networks in the ith state. (a) Develop and test (at the 5% level) appropriate
hypotheses for the coefficients of the variables in this equation. (b) Do you
appear to have any irrelevant variables? Do you appear to have any omitted
variables? Explain. (c) Assume your answer to part (b) was yes to both. Which
problem is generally more troublesome, irrelevant variables or omitted
variables? Why? (d) One of the purposes of estimating this equation was to
determine the effectiveness of antismoking advertising on television and radio.
What is your conclusion?

Posted by Mark Thoma on Tuesday, January 31, 2006 at 01:57 PM in Homework, Winter 2006 | Permalink | Comments (0) | TrackBack (0)

The syllabus lists February 8 as the date the empirical project summaries are due.

That date will be changed ... details in class on Wednesday.

*The Management*

Posted by Mark Thoma on Monday, January 30, 2006 at 06:07 PM in Homework, Winter 2006 | Permalink | Comments (0) | TrackBack (0)

**Today:**

Chapter 13, pgs. 513-514, pg. 518, pg. 523-528, 537-538.

**Next Time**:

Chapter 10.

Posted by Mark Thoma on Monday, January 30, 2006 at 02:31 PM in Winter 2006 | Permalink | Comments (0) | TrackBack (0)

**Review Topics**

So far we have covered:

The two uses of regression models

The assumptions underlying the Gauss-Markov Theorem

The Gauss-Markov Theorem and what BLUE means

Reasons for an error in a regression model

How t-tests and z-tests differ (when to use each one)

Single hypothesis tests: z-tests and t-tests

Multiple hypothesis tests: F-tests and Chi-Square tests

[Should be able to do hypothesis tests with these four distributions]

Functional form: changes in units

Functional form: log-linear, semi-log, reciprocal, and log reciprocal models (see page 190 for a summary).

Use of dummy variables in regression models

Dummy variable trap

Chow tests and dummy variables

Piecewise linear regression

Model selection criteria

Types of specification errors

Consequences of omitting a relevant variable

Posted by Mark Thoma on Thursday, January 26, 2006 at 03:16 PM in Review, Winter 2006 | Permalink | Comments (0) | TrackBack (0)

I posted this on my blog today about posting class material on the web and how that impacts class attendance. I'd be curious to hear your comments (you might be interested in the comments people have left):

Posted by Mark Thoma on Thursday, January 26, 2006 at 02:38 PM in Additional Reading, Web/Tech, Winter 2006 | Permalink | Comments (0) | TrackBack (0)

**Today:**

Finished chapter 9; chapter 13, pgs. 506-513.

**Next Time**:

Chapter 13, pgs. 513-514, pg. 518, pg. 523-528, 537-538.

Begin chapter 10, pgs. 341-approx. 363.

Posted by Mark Thoma on Wednesday, January 25, 2006 at 09:24 PM in Winter 2006 | Permalink | Comments (0) | TrackBack (0)

Here's something on economic forecasting that might be of interest:

Wall Street Journal Econoblog, The Perils of Forecasting: If economic forecasts are so often wrong, what value do they have? What's the real-word cost of a bad forecast? The Wall Street Journal Online asked bloggers James Hamilton and Kash Mansori to sort through the issues.

Posted by Mark Thoma on Wednesday, January 25, 2006 at 08:45 PM in Additional Reading, Winter 2006 | Permalink | Comments (0) | TrackBack (0)

Economics 421/521

Winter 2006

Homework #3

[PDF FILE]

1. Suppose you estimate the following dummy variable model relating consumption expenditures (C) to disposable income (Y) in a cross-section of individuals:

C_{i} = 4000 + 5000D_{1i} + 9000D_{2i} + 0.92Y_{i} + 0.03D_{1i}Y_{i} + u_{i}

(325) (1500) (6000) (0.21) (0.009)   (standard errors in parentheses)

R² = 0.84, n = 35

where Y_{i} = disposable income in thousands, D_{1i} = 1 for women, 0 otherwise, and D_{2i} = 1 if over 45 years of age, 0 otherwise.

(a) Is consumption different for women? Specifically, is there a significant difference in the slope or the intercept? Interpret your results.

(b) Is consumption different for those over 45? Explain.

2. Problem 9.22 parts a and b only. [**Data set**]

3. Problem 9.23 part a only.
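The tests in problem 1 can be set up directly from the reported output. A sketch, assuming the parenthesized values are standard errors and using n − k = 35 − 5 = 30 degrees of freedom:

```python
# Individual t-tests for the dummy-variable model in problem 1, built from
# the reported coefficients and their (assumed) standard errors.

n, k = 35, 5            # observations; coefficients including the intercept
df = n - k              # 30 degrees of freedom

# Two-tailed 5% critical value for t with 30 df, from a standard t-table.
T_CRIT = 2.042

# (estimate, standard error) pairs taken from the printed regression.
tests = {
    "D1 intercept shift (women)":   (5000.0, 1500.0),
    "D1*Y slope shift (women)":     (0.03, 0.009),
    "D2 intercept shift (over 45)": (9000.0, 6000.0),
}

for name, (b, se) in tests.items():
    t_stat = b / se
    verdict = "significant" if abs(t_stat) > T_CRIT else "not significant"
    print(f"{name}: t = {t_stat:.2f} -> {verdict} at the 5% level")
```

Each t-statistic is just the coefficient divided by its standard error, compared against the tabled critical value.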

Posted by Mark Thoma on Tuesday, January 24, 2006 at 06:52 AM in Homework, Winter 2006 | Permalink | Comments (0) | TrackBack (0)

**Today:**

Chapter 9, pgs. 297-323.

Skip pgs. 310 - top of 317 on Interaction Effects and Seasonal Analysis, and skip pgs. 320 - top of page 323.

**Next Time**:

Chapter 13, pgs. 506-514, pg. 518, pg. 523-528, 537-538.

Posted by Mark Thoma on Monday, January 23, 2006 at 08:00 PM in Winter 2006 | Permalink | Comments (0) | TrackBack (0)

In case this is of interest, this is from John Whitehead of the Environmental Economics blog. For some reason, they gave me keys to the door and I post stuff there once in a while, though it's mostly just links to articles:

John Whitehead, Environmental Economics, How is the U.S. really doing on the 2006 Environmental Performance Index?: The U.S. ranks 28th in the world in achieving certain environmental goals according to the Environmental Performance Index produced by Yale and Columbia. Yesterday Mark Thoma linked to a NYTimes article which could be interpreted as implying that the U.S. should be doing better.

The analysis conducted in the reports puts the U.S. under the line. In other words, the U.S. is doing worse than expected given its income level. I wondered if this were true. Using a slightly more complicated analysis it appears that the U.S. is doing about as well as it should be doing.

First, the economic theory of achieving environmental goals considers the benefits and costs. One factor that affects the benefits of achieving environmental goals is income. Many economists believe, and have found in studies of individual and country-level demand for environmental quality, that the demand for environmental quality (i.e., benefits) increases with income. The costs of achieving environmental goals depend mostly on technology, such as knowledge of the capital equipment needed to achieve environmental goals, and on policy instruments (e.g., marketable permits, command and control). Technology should be constant across countries.

Since one of the primary determinants of the demand for environmental quality varies across countries, it is possible to estimate whether income is a determinant of how effective countries are at achieving environmental goals, as measured by the EPI.

Fortunately, the EPI report includes an Excel file with all of the data necessary to estimate a simple model. I took the data and used regression analysis to estimate the effect of a country's per capita GDP (i.e., average income) on the EPI while holding constant the region of the world (i.e., Americas, European Union, etc). The EPI did not include these control variables. Here are the results:

EPI = 60.5 + 0.00215*(GDP/pop) - 0.00004*((GDP²/pop)/1000)

GDP/pop is GDP per capita and (GDP²/pop)/1000 is GDP squared divided by 1,000. The regional effects are not shown but they indicate that the Americas has a higher EPI than the rest of the world. The independent variables (GDP, etc.) explain about 75% of the variation in EPI, which is pretty good.

This model tells us that the EPI increases with GDP but at a decreasing rate. In other words, the EPI is subject to diminishing returns. A country can improve its environmental performance with gains in per capita GDP, but those gains become smaller and smaller as GDP per capita increases.
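To make the diminishing-returns point concrete, the fitted quadratic can be evaluated directly. A minimal sketch in Python (the regional dummies are omitted because they aren't reported, and the quadratic term is read as per-capita GDP squared divided by 1,000, so these are rough fitted values rather than the full model's predictions):

```python
def epi(gdp_per_capita: float) -> float:
    """Fitted EPI from the quoted regression, ignoring the regional dummies.
    The quadratic term is interpreted as per-capita GDP squared over 1,000."""
    x = gdp_per_capita
    return 60.5 + 0.00215 * x - 0.00004 * (x**2 / 1000)

# The marginal gain from an extra $1,000 of per capita GDP shrinks as
# income rises -- the diminishing-returns pattern described above.
for x in (5_000, 10_000, 20_000):
    print(f"GDP/pop = {x:>6}: EPI = {epi(x):.1f}, "
          f"gain from +$1,000: {epi(x + 1000) - epi(x):.2f}")
```

Plugging in successively higher incomes shows the fitted EPI rising while each incremental gain gets smaller.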

One way of thinking about this model is as a production function. The input is income and the output is EPI. Plug in a country's GDP and region and the model will tell you how that country should be doing. Countries that have a positive difference between their EPI and the predicted EPI are doing better than expected. Countries that have a negative difference are doing worse than expected.

The U.S. ranks 28th in the world in EPI and 29th in predicted EPI. The U.S. EPI is 78.5 and the predicted EPI is 78.4514. The multiple regression model predicts the U.S. performance almost perfectly. The U.S. is achieving environmental goals about as well as expected given its GDP per capita.

Which countries are doing better than expected? They are all relatively poor:

| Rank | Country | EPI | Predicted EPI |
| --- | --- | --- | --- |
| 1 | Gabon | 73.3 | 57.8956 |
| 2 | Lebanon | 76.7 | 61.7754 |
| 3 | Malaysia | 83.3 | 69.4951 |
| 4 | Zimbabwe | 63 | 50.621 |
| 5 | Ghana | 63.1 | 51.4095 |
| 6 | Uganda | 60.8 | 49.8188 |
| 7 | Nepal | 60.2 | 49.3573 |
| 8 | Tanzania | 59 | 48.1737 |
| 9 | Sri Lanka | 64.6 | 54.148 |
| 10 | Benin | 58.4 | 49.2083 |

Which countries are doing worse than expected? Again, they are all relatively poor:

| Rank | Country | EPI | Predicted EPI |
| --- | --- | --- | --- |
| 124 | Romania | 56.9 | 66.6398 |
| 125 | Turkmenistan | 52.3 | 63.4779 |
| 126 | Ethiopia | 36.7 | 48.3823 |
| 127 | Angola | 39.3 | 51.0169 |
| 128 | Mexico | 64.8 | 77.2201 |
| 129 | Mali | 33.9 | 48.5901 |
| 130 | Haiti | 48.9 | 63.6234 |
| 131 | Mauritania | 32 | 50.4217 |
| 132 | Chad | 30.5 | 50.0206 |
| 133 | Niger | 25.7 | 48.5901 |

Of course, this isn't the whole story. There are a host of other variables that could potentially help explain the variation in the EPI. For example, increases in population density could limit the ability of countries to achieve environmental goals. GDP could be high (or rising rapidly) because of sustainable factors (e.g., high labor productivity) or unsustainable factors (e.g., exploitation of natural resources). Maybe I'll think about this next week.

Also, I really should read the damn report and appendices before I naively plug the numbers into the computer and make them scream! Maybe next week.

And, feel free to make unreasonable demands for additional analysis!

One more thing: why didn't the authors of the EPI report do the multiple regression analysis?

Posted by Mark Thoma on Monday, January 23, 2006 at 10:35 AM in Additional Reading, Winter 2006 | Permalink | Comments (0) | TrackBack (0)

[For those who might be interested ... originally posted here.]

Milton Friedman's "plucking model" is an interesting alternative to the natural rate of output view of the world. The typical view of business cycles is one where the economy varies around a trend value (the trend can vary over time also). Milton Friedman has a different story. In Friedman's model, output moves along a ceiling value, the full employment value, and is occasionally plucked downward through a negative demand shock. Quoting from the article below:

In 1964, Milton Friedman first suggested his “plucking model” (reprinted in 1969; revisited in 1993) as an asymmetric alternative to the self-generating, symmetric cyclical process often used to explain contractions and subsequent revivals. Friedman describes the plucking model of output as a string attached to a tilted, irregular board. When the string follows along the board it is at the ceiling of maximum feasible output, but the string is occasionally plucked down by a cyclical contraction.

Friedman found evidence for the plucking model of aggregate fluctuations in a 1993 paper in *Economic Inquiry*. One reason I've always liked this paper is that Friedman first wrote it in 1964. He then waited almost twenty years for new data to arrive and retested his model using only the new data. In macroeconomics, we often encounter a problem in testing theoretical models: we know what the data look like and what facts need to be explained by our models. Is it sensible to build a model to fit the data and then use that data to test whether it fits? Of course the model will fit the data; it was built to do so. Friedman avoided that problem since he had no way of knowing whether the next twenty years of data would fit the model or not. It did. I was at an SF Fed conference when he gave the 1993 paper, and it was a fun and convincing presentation. Here's a recent paper on this topic that supports the plucking framework (thanks, Paul):

Asymmetry in the Business Cycle: New Support for Friedman's Plucking Model, Tara M. Sinclair, George Washington University, December 16, 2005, SSRN:

Abstract: This paper presents an asymmetric correlated unobserved components model of US GDP. The asymmetry is captured using a version of Friedman's plucking model that suggests that output may be occasionally "plucked" away from a ceiling of maximum feasible output by temporary asymmetric shocks. The estimates suggest that US GDP can be usefully decomposed into a permanent component, a symmetric transitory component, and an additional occasional asymmetric transitory shock. The innovations to the permanent component and the symmetric transitory component are found to be significantly negatively correlated, but the occasional asymmetric transitory shock appears to be uncorrelated with the permanent and symmetric transitory innovations. These results are robust to including a structural break to capture the productivity slowdown of 1973 and to changes in the time frame under analysis. The results suggest that both permanent movements and occasional exogenous asymmetric transitory shocks are important for explaining post-war recessions in the US.

Let me try, within my limited artistic ability, to illustrate further. If you haven't seen a plucking model, here's a graph to illustrate (see Piger and Morley and Kim and Nelson for evidence supporting the plucking model and figures illustrating the plucking and natural rate characterizations of the data). The "plucks" are the deviations of the red line from the blue line representing the ceiling/trend:

Notice that the size of the downturn from the ceiling from a→b (due to the "pluck") is predictive of the size of the upturn from b→c that follows, taking account of the slope of the trend. I didn't show it, but in this model the size of the boom, the movement from b→c, does not predict the size of the subsequent contraction. This is the evidence Friedman originally used to support the plucking model. In a natural rate model, there is no reason to expect such a correlation. Here's an example natural rate model:

Here, the size of the downturn a→b does not predict the size of the subsequent boom b→c. Friedman found that the size of a→b predicts b→c, supporting the plucking model over the natural rate model.
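Friedman's correlation test is easy to mimic with simulated data: generate plucks of random depth, let each recovery retrace the pluck plus accumulated trend growth, and compare the two correlations. A toy sketch (purely illustrative; this is not Friedman's actual procedure or data):

```python
import random

random.seed(0)

# Simulate a plucking economy: each contraction pulls output below the
# ceiling by a random amount; the recovery retraces the pluck plus the
# trend growth accumulated during the episode. Each pluck is drawn
# independently of the past.
downturns = [random.uniform(1.0, 5.0) for _ in range(50)]
trend_gain = [random.uniform(0.1, 0.5) for _ in range(50)]
upturns = [d + g for d, g in zip(downturns, trend_gain)]

def pearson(xs, ys):
    """Sample Pearson correlation coefficient."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5

# Downturn size strongly predicts the following upturn...
print(f"corr(downturn, next upturn): {pearson(downturns, upturns):.3f}")
# ...but upturn size does not predict the next downturn.
print(f"corr(upturn, next downturn): {pearson(upturns[:-1], downturns[1:]):.3f}")
```

The first correlation comes out near one and the second near zero, which is exactly the asymmetry Friedman pointed to in the data.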

Posted by Mark Thoma on Saturday, January 21, 2006 at 02:38 PM in Additional Reading, Winter 2006 | Permalink | Comments (0) | TrackBack (0)

**Today:**

Chapter 6, pgs. 169-193.

**Next Time**:

Chapter 9, pgs. 297-323.

**Update**: Skip pgs. 310 - top of 317 on Interaction Effects and Seasonal Analysis, and skip pgs. 320 - top of page 323.

Posted by Mark Thoma on Wednesday, January 18, 2006 at 02:34 PM in Winter 2006 | Permalink | Comments (0) | TrackBack (0)

This is something I posted on my blog tonight. Fun with regressions:

Here is a graph of household debt service payments as a percent of disposable personal income from 1980Q1-2005Q3, the latest period for which consistent data exist (total financial obligations show a similar pattern):

I wanted to see how the debt load related to economic downturns, so the shaded areas are NBER-dated recessions. It's hard to see a strong association between the recessions and changes in debt. The next graph plots the same series along with the *negative* of the unemployment rate. The association appears much stronger:

To see if regression confirms the association, I ran OLS on

%Debt_{t} = β_{0} + β_{1}%Debt_{t-1} + β_{2}%Debt_{t-2} + β_{3}UN_{t} + e_{t}

and it does (debt data described here). The coefficient on UN is significant at the 5% level:

```
Linear Regression - Estimation by Least Squares
Dependent Variable: PERCENTDEBT
Quarterly Data: 1980:03 to 2005:03
Usable Observations: 101
Degrees of Freedom: 97
Durbin-Watson Statistic: 2.014351

Variable   Coeff          T-Stat     Signif
β0          0.468304856    1.49629   0.13782438
β1          1.125298228   11.06370   0.00000000
β2         -0.150035410   -1.48249   0.14145135
β3         -0.024721984   -2.01484   0.04668936
```

As you interpret the second graph, remember that it's the negative of the unemployment rate. Thus, when the blue line is rising, unemployment is falling. In the regression results the actual unemployment rate, not the negative of it, is used. This is a fairly broad brush and it may hide detail such as differences by income class, and omitted variables are a concern, but in general these data and this model suggest unemployment and the debt percentage are negatively related.
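The regression above is easy to reproduce in code. Here's a sketch using simulated data in place of the FRED series (the data-generating coefficients are hypothetical, loosely chosen to resemble the estimates in the post; with a correctly specified model, least squares should recover them):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 300

# Simulate an unemployment series and a debt-ratio series following
# %Debt_t = b0 + b1*%Debt_{t-1} + b2*%Debt_{t-2} + b3*UN_t + e_t,
# with made-up coefficients roughly resembling the estimates above.
un = rng.normal(6.0, 1.0, n)
debt = np.zeros(n)
for t in range(2, n):
    debt[t] = (0.5 + 1.1 * debt[t - 1] - 0.15 * debt[t - 2]
               - 0.02 * un[t] + rng.normal(0.0, 0.05))

# Build the design matrix with a constant, two lags of debt, and current
# unemployment, then estimate by least squares.
y = debt[2:]
X = np.column_stack([np.ones(n - 2), debt[1:-1], debt[:-2], un[2:]])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
print("estimates (b0, b1, b2, b3):", np.round(beta, 3))
```

The estimated coefficients land close to the values used to generate the data, including the small negative coefficient on unemployment.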

Posted by Mark Thoma on Wednesday, January 18, 2006 at 12:22 AM in Additional Reading, Winter 2006 | Permalink | Comments (0) | TrackBack (0)

There are two copies of the book on reserve under Economics 420.

I'm still trying to find a copy to place under Ec 421.

Posted by Mark Thoma on Tuesday, January 17, 2006 at 03:12 PM in Winter 2006 | Permalink | Comments (0) | TrackBack (0)

**Update**:

On problem 5.16 in Homework 1, do NOT do the regression called for in part b. It says:

b. Do the regression results support your expectation? What formal test do you use to test your hypothesis?

Change this to:

b. What formal test do you use to test your hypothesis?

Thank you.

Posted by Mark Thoma on Tuesday, January 17, 2006 at 12:38 PM in Homework, Winter 2006 | Permalink | Comments (0) | TrackBack (0)

Mark Thoma

Department of Economics

1285 University of Oregon

Eugene, OR 97403-1285

Mark Thoma's Web Page

Email Mark Thoma

Mark Thoma's Blog

Office: PLC 471

Office Hours: T/Th 3:30-4:30

Gulcan Cil

Office: PLC 431

Email: gcil@uoregon.edu

Office Hours: M 1:00-3:00 and by appointment

Colin Corbett

Office: PLC 504

Email: corbett@uoregon.edu

Office Hours: T/Th 11-12 and by appointment