Question


In the procedure of computing the Variance of a distribution, we squared the deviations. What is the need for this squaring step?

Homework Answers

Answer #1

Squaring the deviations has the following advantages:

1.) Squaring makes each term non-negative, so that values above the mean do not cancel out values below the mean. (The raw deviations always sum to exactly zero, so their average would be useless as a measure of spread.)

2.) Squaring gives greater weight to larger deviations, which is appropriate since points farther from the mean often carry more significance.

3.) Because the deviations are squared, the units of the variance are not the same as the units of the data. The standard deviation is therefore reported as the square root of the variance, so that its units correspond to those of the data set.
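The three points above can be checked numerically. This is a minimal sketch using a small made-up data set (the values are illustrative, not from the question): the raw deviations sum to zero, the squared deviations give the variance, and the square root restores the original units.

```python
# Illustrative data set (hypothetical values chosen for clean arithmetic)
data = [2.0, 4.0, 4.0, 4.0, 5.0, 5.0, 7.0, 9.0]
n = len(data)
mean = sum(data) / n  # 5.0

# Point 1: raw deviations always cancel to zero, so they cannot measure spread.
raw_sum = sum(x - mean for x in data)  # 0.0 (up to rounding)

# Point 2: squared deviations are non-negative, and large ones dominate.
variance = sum((x - mean) ** 2 for x in data) / n  # population variance = 4.0

# Point 3: the square root brings the units back to those of the data.
std_dev = variance ** 0.5  # 2.0

print(raw_sum, variance, std_dev)
```

Note that this is the population variance (dividing by n); a sample variance would divide by n - 1 instead, but the role of the squaring step is the same either way.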
