Question


The LM (Lagrange Multiplier) test (sometimes referred to as the Breusch-Pagan test) generates a test statistic N · R². Where is the R² in the test statistic measured?

a) The original econometric model when estimated using the White standard errors correction method.

b) The average from all the auxiliary regressions estimated with each explanatory variable as a function of the other explanatory variables.

c) The original econometric model before any test of heteroskedasticity has been performed.

d) The regression of residuals as a function of the explanatory variables generating the heteroskedasticity.

Homework Answers

Answer #1

a) Econometric models are statistical models used in econometrics. An econometric model specifies the statistical relationship that is believed to hold between the various economic quantities pertaining to a particular economic phenomenon.

Building an econometric model involves the following steps:

  1. You propose an economic relation to test.
  2. You develop a hypothesis to test this relationship.
  3. You specify the data and variable types used to test this hypothesis.
  4. You apply a mathematical method to the data to test the hypothesis.
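The four steps above can be sketched end-to-end; the following is a minimal numpy example with simulated data (the relation, numbers, and variable names are illustrative assumptions, not real economic data):

```python
import numpy as np

rng = np.random.default_rng(0)

# 1. Proposed economic relation: spending = a + b * income + error
#    (true values a = 50, b = 0.8 are assumed for the simulation)
n = 200
income = rng.uniform(1000, 5000, n)
spending = 50 + 0.8 * income + rng.normal(0, 100, n)

# 2. Hypothesis to test: b != 0 (income affects spending)
# 3. Data and variable types: the simulated (income, spending) pairs above
# 4. Mathematical method: ordinary least squares
X = np.column_stack([np.ones(n), income])
beta = np.linalg.lstsq(X, spending, rcond=None)[0]
a_hat, b_hat = beta

# t-statistic for the hypothesis b = 0
resid = spending - X @ beta
sigma2 = resid @ resid / (n - 2)
se_b = np.sqrt(sigma2 * np.linalg.inv(X.T @ X)[1, 1])
t_stat = b_hat / se_b
```

With strong simulated signal, the t-statistic is large and the hypothesis b = 0 is rejected.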

An econometric model can be derived from a deterministic economic model by allowing for uncertainty, or from an economic model which itself is stochastic. However, it is also possible to use econometric models that are not tied to any specific economic theory.

A simple example of an econometric model is one that assumes that monthly spending by consumers is linearly dependent on consumers' income in the previous month. Then the model will consist of the equation

Ct = a + bYt-1 + et

where Ct is consumer spending in month t, Yt-1 is income during the previous month, and et is an error term measuring the extent to which the model cannot fully explain consumption. Then one objective of the econometrician is to obtain estimates of the parameters a and b; these estimated parameter values, when used in the model's equation, enable predictions for future values of consumption to be made contingent on the prior month's income.
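The consumption equation above can be estimated by OLS on the lagged income series; here is a minimal numpy sketch with simulated monthly data (the true values a = 200 and b = 0.75 are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(42)
T = 120  # months

# Simulate income Y_t and consumption C_t = a + b * Y_{t-1} + e_t
a_true, b_true = 200.0, 0.75
income = rng.uniform(2000, 6000, T)
consumption = a_true + b_true * np.roll(income, 1) + rng.normal(0, 50, T)

# Drop the first month, whose "lagged" income wraps around the array.
y = consumption[1:]
Y_lag = income[:-1]

# OLS estimates of a and b
X = np.column_stack([np.ones(T - 1), Y_lag])
a_hat, b_hat = np.linalg.lstsq(X, y, rcond=None)[0]

# One-step-ahead prediction: next month's consumption
# contingent on this month's income, as described in the text.
pred_next = a_hat + b_hat * income[-1]
```

The estimated a_hat and b_hat recover the assumed parameters, and pred_next illustrates the econometrician's forecasting objective.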

These are also known as Eicker–Huber–White standard errors (also Huber–White standard errors or White standard errors), recognizing the contributions of Friedhelm Eicker, Peter J. Huber, and Halbert White to heteroscedasticity-consistent estimation.

In regression and time-series modelling, basic forms of models make use of the assumption that the errors or disturbances ui have the same variance across all observation points. When this is not the case, the errors are said to be heteroscedastic, or to have heteroscedasticity, and this behaviour will be reflected in the residuals ûi estimated from a fitted model. Heteroscedasticity-consistent standard errors are used to allow the fitting of a model that does contain heteroscedastic residuals. The first such approach was proposed by Huber (1967), and further improved procedures have been produced since for cross-sectional data, time-series data and GARCH estimation.
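A heteroscedasticity-consistent (HC0, i.e. White) covariance estimate can be computed by hand with the sandwich formula; below is a minimal numpy sketch on simulated data (the data-generating numbers are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 500
x = rng.uniform(0, 10, n)
# Heteroscedastic errors: the error standard deviation grows with x
e = rng.normal(0, 1 + 0.5 * x)
y = 2.0 + 1.5 * x + e

X = np.column_stack([np.ones(n), x])
beta = np.linalg.lstsq(X, y, rcond=None)[0]
resid = y - X @ beta

XtX_inv = np.linalg.inv(X.T @ X)

# Classical OLS standard errors: assume one constant error variance
sigma2 = resid @ resid / (n - 2)
se_classical = np.sqrt(np.diag(sigma2 * XtX_inv))

# HC0 (White) sandwich estimator: the "meat" uses each observation's
# own squared residual, so no constant-variance assumption is needed.
meat = X.T @ (X * resid[:, None] ** 2)
cov_hc0 = XtX_inv @ meat @ XtX_inv
se_white = np.sqrt(np.diag(cov_hc0))
```

The point estimates are unchanged; only the standard errors differ, which is exactly what "correcting the standard errors" means in option a).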

b) Auxiliary regression: a regression used to compute a test statistic (such as the test statistics for heteroskedasticity and serial correlation) or any other regression that does not estimate the model of primary interest.

To make this concrete, consider a general regression framework with auxiliary variables. Let Y be a response variable and X a p-dimensional explanatory variable. The true conditional density function of Y given X is denoted by q(y|x). Our aim is to construct a good regression model to estimate q(y|x), so a candidate model py(y|x; α) is considered, where α ∈ A is an unknown parameter. Suppose we also have auxiliary variables A, a q-dimensional random variable. A joint model of (Y, A) given X is assumed to be p(y, a|x; θ), where θ ∈ Θ is an unknown parameter. More directly, we consider the regression model of Y given (A, X) as py(y|a, x; θ) when we regard A as covariates. The models p(y, a|x; θ) and py(y|a, x; θ) may be able to explain the event of interest and to predict the future behaviour of Y more precisely than py(y|x; α). However, since the auxiliary variables need not be collected in future observations, we do not apply the regression model py(y|a, x; θ) to those observations. Thus, we consider an alternative model:

p(y|x; θ) ≡ ∫ p(y, a|x; θ) da = ∫ py(y|a, x; θ) p(a|x; θ) da.

Hence, if we specify the regression model of A given X, p(a|x; θ), then we can define the marginal model p(y|x; θ). The unknown parameter θ is estimated from the joint model p(y, a|x; θ) in order to utilize the information in the auxiliary variables A. Note that when the regression model py(y|x; α) is correctly specified, i.e., there exists α0 ∈ A such that py(y|x; α0) = q(y|x), the joint model p(y, a|x; θ) (or its marginal model p(y|x; θ)) may not be needed, because the maximum likelihood estimator (MLE) of α will converge to the true value α0.
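Option b) describes auxiliary regressions in the multicollinearity sense: each explanatory variable regressed on the others, with the resulting R² feeding the variance inflation factor. A minimal numpy sketch with simulated data (variable names and numbers are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(7)
n = 300
x1 = rng.normal(0, 1, n)
x2 = 0.8 * x1 + rng.normal(0, 0.6, n)  # correlated with x1 by construction
x3 = rng.normal(0, 1, n)

def aux_r2(target, others):
    """R^2 of the auxiliary regression of one explanatory variable on the others."""
    X = np.column_stack([np.ones(len(target))] + others)
    fitted = X @ np.linalg.lstsq(X, target, rcond=None)[0]
    ss_res = np.sum((target - fitted) ** 2)
    ss_tot = np.sum((target - target.mean()) ** 2)
    return 1 - ss_res / ss_tot

# Auxiliary regression: x1 as a function of the other explanatory variables
r2_x1 = aux_r2(x1, [x2, x3])
vif_x1 = 1 / (1 - r2_x1)  # variance inflation factor for x1
```

Because x2 is built to co-move with x1, the auxiliary R² is substantial and the VIF exceeds 1, flagging the collinearity.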

c) Heteroscedasticity is a hard word to pronounce, but it doesn't need to be a difficult concept to understand. Put simply, heteroscedasticity (also spelled heteroskedasticity) refers to the circumstance in which the variability of a variable is unequal across the range of values of a second variable that predicts it. In simple terms, heteroscedastic data is any set of data that isn't homoscedastic. More technically, it refers to data with unequal variability (scatter) across a set of second, predictor variables. If a plot of the data has a rough cone shape, you're probably dealing with heteroscedasticity.

d) Heteroscedasticity means unequal scatter. In regression analysis, we talk about heteroscedasticity in the context of the residuals or error term. Specifically, heteroscedasticity is a systematic change in the spread of the residuals over the range of measured values. Heteroscedasticity is a problem because ordinary least squares (OLS) regression assumes that all residuals are drawn from a population that has a constant variance (homoscedasticity). To satisfy the regression assumptions and be able to trust the results, the residuals should have a constant variance. It is in this auxiliary regression of the residuals on the explanatory variables that the R² of the LM statistic N · R² is measured.

Heteroscedasticity, also spelled heteroskedasticity, occurs more often in datasets that have a large range between the largest and smallest observed values. While there are numerous reasons why heteroscedasticity can exist, a common explanation is that the error variance changes proportionally with a factor. This factor might be a variable in the model.

In some cases, the variance increases proportionally with this factor but remains constant as a percentage. For instance, a 10% change in a number such as 100 is much smaller than a 10% change in a large number such as 100,000. In this scenario, you expect to see larger residuals associated with higher values. That's why you need to be careful when working with wide ranges of values.
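The Breusch–Pagan LM test itself can be sketched with the auxiliary regression described in option d): regress the squared residuals of the original model on the explanatory variables and form N · R². A minimal numpy example on simulated heteroscedastic data (all numbers are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(3)
n = 400
x = rng.uniform(1, 10, n)
# Error spread grows with x, so the null of homoscedasticity is false here
y = 1.0 + 2.0 * x + rng.normal(0, 0.5 * x)

# Step 1: fit the original model and keep its residuals
X = np.column_stack([np.ones(n), x])
resid = y - X @ np.linalg.lstsq(X, y, rcond=None)[0]

# Step 2: auxiliary regression of the squared residuals
# on the explanatory variables; its R^2 is the one in the statistic
u2 = resid ** 2
fitted = X @ np.linalg.lstsq(X, u2, rcond=None)[0]
r2_aux = 1 - np.sum((u2 - fitted) ** 2) / np.sum((u2 - u2.mean()) ** 2)

# Step 3: LM statistic = N * R^2; under the null it is approximately
# chi-squared with as many degrees of freedom as slope regressors (here 1)
lm_stat = n * r2_aux
```

With variance built to grow with x, the statistic far exceeds the 5% chi-squared critical value of about 3.84, so the test rejects homoscedasticity.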
