Question

Describe the sampling distribution of the OLS estimator.

Homework Answers

Answer #1

Let us consider the simple linear regression model, where we describe the relation between a continuous variable y and a variable x as y = α + βx + ϵ.

This implies

E[y|x] = α + βx, under the hypothesis that E[ϵ] = 0.

We want to estimate the unknown parameters α and β using a sample of n observations.

Let a and b be the estimators of α and β, so that ŷ = a + bx.

One way to estimate a and b is by minimizing the sum of squared residuals ∑(yi − a − bxi)² over the n observations.

Here, we minimize the sum of the squared vertical distances between the line y = a + bx and the points (xi, yi) with respect to a and b. Such a minimization is called OLS (Ordinary Least Squares).

The estimators a and b are then

b = cov(x, y) / v(x),

a = ȳ − b x̄,

where cov(x, y) is the sample covariance between the xi's and the yi's, v(x) is the sample variance of the xi's, and x̄ and ȳ are the sample means of the xi and the yi respectively. We assume the denominator in both cov(x, y) and v(x) is n.
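These closed-form estimators can be sketched numerically. A minimal example in NumPy, where the data, true parameter values, and seed are illustrative assumptions not taken from the original answer:

```python
import numpy as np

# Illustrative setup: true alpha = 2.0, beta = 0.5, sigma = 1.0 (assumed values).
rng = np.random.default_rng(0)
x = np.linspace(0.0, 10.0, 50)
y = 2.0 + 0.5 * x + rng.normal(0.0, 1.0, size=x.size)

# Sample covariance and variance, both with denominator n as in the text.
cov_xy = np.mean((x - x.mean()) * (y - y.mean()))
v_x = np.mean((x - x.mean()) ** 2)

b = cov_xy / v_x             # OLS slope estimate
a = y.mean() - b * x.mean()  # OLS intercept estimate (standard closed form)
print(a, b)
```

Note that the choice of denominator (n versus n − 1) cancels in b, since it appears in both cov(x, y) and v(x).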

The estimators a and b depend on the sample observations. To make inferences about the unknown parameters α and β, one should know the sampling distribution of the estimators a and b, since in practice we observe only one sample. Under some regularity conditions, a and b are the Best Linear Unbiased Estimators (BLUE) for α and β.

Their sampling distribution is

b ∼ N(β, σ²(n v(x))⁻¹)

a ∼ N(α, σ²(v(x) + x̄²)(n v(x))⁻¹),

where σ² is the variance of the error term ϵ, i.e. E[(ϵ − E[ϵ])²] = E[ϵ²] = σ².
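The stated sampling distribution of b can be checked with a small Monte Carlo sketch: hold x fixed, draw many samples of errors, recompute b each time, and compare the empirical mean and variance of b with β and σ²(n v(x))⁻¹. All parameter values below are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(1)
n, alpha, beta, sigma = 30, 1.0, 2.0, 1.5   # assumed illustrative values
x = np.linspace(0.0, 5.0, n)                # fixed, non-stochastic regressor
v_x = np.mean((x - x.mean()) ** 2)

# Draw many samples, recomputing the slope estimator each time.
b_draws = np.empty(20_000)
for r in range(b_draws.size):
    y = alpha + beta * x + rng.normal(0.0, sigma, n)
    b_draws[r] = np.mean((x - x.mean()) * (y - y.mean())) / v_x

print(b_draws.mean())   # close to beta: the estimator is unbiased
print(b_draws.var())    # close to sigma**2 / (n * v_x)
```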

The value σ² is unknown and should be estimated. One possible estimator is s² = (n − 2)⁻¹ ∑(yi − ŷi)², where ŷi = a + bxi.
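The estimator s² divides the residual sum of squares by n − 2 rather than n, because two parameters were estimated from the data. A short sketch, with assumed illustrative data:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 40
x = np.linspace(0.0, 10.0, n)
y = 1.0 + 0.8 * x + rng.normal(0.0, 2.0, n)  # assumed true sigma = 2

# Closed-form OLS fit (denominator n in covariance and variance).
v_x = np.mean((x - x.mean()) ** 2)
b = np.mean((x - x.mean()) * (y - y.mean())) / v_x
a = y.mean() - b * x.mean()

resid = y - (a + b * x)       # residuals y_i - yhat_i
s2 = resid @ resid / (n - 2)  # divide by n - 2, not n
print(s2)
```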

Finally, it is possible to show that the sampling distribution of a and b, given the estimator s² of σ², is

(b − β) / (s √((n v(x))⁻¹)) ∼ t(n−2)

(a − α) / (s √((v(x) + x̄²)(n v(x))⁻¹)) ∼ t(n−2).
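One practical consequence of the t(n−2) pivot for b is that the interval b ± t-critical × s √((n v(x))⁻¹) covers the true β about 95% of the time. A Monte Carlo sketch of that coverage, assuming NumPy and SciPy are available and using illustrative parameter values:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
n, alpha, beta, sigma = 25, 0.5, 1.2, 1.0    # assumed illustrative values
x = np.linspace(0.0, 4.0, n)
v_x = np.mean((x - x.mean()) ** 2)
t_crit = stats.t.ppf(0.975, df=n - 2)        # two-sided 95% critical value

reps, covered = 10_000, 0
for _ in range(reps):
    y = alpha + beta * x + rng.normal(0.0, sigma, n)
    b = np.mean((x - x.mean()) * (y - y.mean())) / v_x
    a = y.mean() - b * x.mean()
    s = np.sqrt(np.sum((y - a - b * x) ** 2) / (n - 2))
    t = (b - beta) / (s * np.sqrt(1.0 / (n * v_x)))  # the pivot from the text
    covered += abs(t) <= t_crit

print(covered / reps)   # close to 0.95
```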

The regularity conditions are

  • ϵ ∼ iid N(0, σ²)
  • x is not stochastic (not a random variable)
  • x has variability

Pay particular attention to the first regularity condition, which implies zero-mean, independent, and homoskedastic errors. If any of these is violated, the sampling distribution of the OLS estimators will change.
