Question

(1) A Chi-squared test is typically used to test for all of the following except which one?

(A) If a mathematical model accurately predicts our observed frequencies of data values.

(B) If a mathematical model accurately predicts the total number of observed data values.

(C) If a mathematical model accurately predicts the pattern of our observed data values.

(D) Whether two factors present in a population are independent of one another.

(E) Whether a series of populations experience the same frequency of some trait.

(2) The hypergeometric distribution is sometimes used to predict the number of successes in an experiment that involves sampling without replacement. Imagine we use this distribution to predict the numbers of successes, compare the predictions to our observations using a chi-squared test with 8 degrees of freedom, and obtain a test statistic of 18. Which of the following best describes our conclusion?

(A) The hypergeometric distribution does not accurately predict the number of successes ( p > 0.05 ).

(B) The hypergeometric distribution does not accurately predict the number of successes ( p < 0.05 ).

(C) The hypergeometric distribution accurately predicts the number of successes ( p > 0.05 ).

(D) The hypergeometric distribution accurately predicts the number of successes ( p < 0.05 ).

(E) The hypergeometric distribution accurately predicts the number of successes ( p < 0.025 ).

(3) After conducting a one-way ANOVA test, one method to determine which groups differ in their means is to use Bonferroni-corrected t tests. When doing this for 5 groups, which of the following is closest to the appropriate p value to use so that the overall probability of a type I error is only 0.05?

(A) 0.0005

(B) 0.001

(C) 0.005

(D) 0.01

(E) 0.05

(4) Which of the following is the best description of the one factor ANOVA procedure?

(A) We compare the observed variance of the group means and compare that value to that expected from sampling error. If it is higher than expected, then the means of the populations differ.

(B) We compare the mean observed variance within the groups to the value expected from sampling error. If it is higher, then the means of the populations differ.

(C) We compare the total observed variance to the range of the group means. If it is higher, then the means of the populations differ.

(D) We compare the largest within-group variance to the smallest. If it is higher, then the means of the populations differ.

(E) We compare the sample means. The one closest to the overall mean is significant.

(5) Which of the following would cause us to consider transforming our data prior to statistical testing?

(A) The means of the groups we are comparing in a one-way ANOVA differ.

(B) The means of the groups we are comparing in a two-way ANOVA differ.

(C) The variance of the residuals of our data changes as we move from one end of our X values to the other.

(D) The data values in an XY plot appear linear instead of curved.

(E) The variances of the groups we are comparing in a one-way ANOVA do not differ.

(6) A full statistical analysis of regression data would typically include all of the following except which one?

(A) A calculation of the predicted Y values using the observed X values.

(B) A calculation of the standard error of the slope.

(C) A plot of residuals to test linearity.

(D) A calculation of the coefficient of determination to determine how much of variance of Y values is explained by the variance of X values.

(E) An F test of F=MSerror/MSregression

(7) Why doesn't a correlation between one variable and another demonstrate that one is responsible for the other?

(A) Other unknown factors may cause the relationship between the variables.

(B) Post hoc ergo propter hoc is a well known logical fallacy.

(C) Linear relationships cannot demonstrate causation.

(D) The variables may need to be transformed if their variances are different.

(E) Regression equations cannot be used to make predictions outside the region for which data has been observed.

1: Option (B) is correct. A chi-squared test compares observed frequencies with the frequencies a model predicts (A and C), tests whether two factors in a population are independent (test of independence, D), and tests whether several populations share the same frequency of a trait (test of homogeneity, E). It is not used to predict the total number of observed data values; that total is simply the sample size, which the test takes as given.
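To illustrate what a chi-squared goodness-of-fit test does measure, here is a minimal sketch of the statistic, sum of (O − E)²/E over the categories. The observed and expected counts below are invented for illustration, not taken from the question.

```python
# Chi-squared goodness-of-fit statistic: sum((O - E)^2 / E).
# Observed/expected counts below are toy values for illustration only.
observed = [18, 22, 20]
expected = [20, 20, 20]  # e.g. a model predicting equal frequencies

chi_sq = sum((o - e) ** 2 / e for o, e in zip(observed, expected))
print(chi_sq)  # 0.4 for these toy counts
```

Note that the statistic compares category frequencies, not the overall total (here 60 in both lists), which is why option (B) is the exception.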

2: Option (B) is correct. With 8 degrees of freedom, the critical chi-squared value at α = 0.05 is about 15.51. Our statistic of 18 exceeds this, so p < 0.05 (in fact p ≈ 0.021) and we reject the null hypothesis that the model fits the observations. We therefore conclude that the hypergeometric distribution does not accurately predict the number of successes (p < 0.05).

3: Option (C) is correct. The Bonferroni correction divides the overall α by the number of tests performed. To determine which groups differ, we run a t test for every pair of groups; with 5 groups there are C(5, 2) = 10 pairwise comparisons, so the per-test p value is 0.05 / 10 = 0.005.
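The calculation above can be sketched as follows (assuming every pair of groups is compared):

```python
from math import comb

alpha = 0.05
groups = 5
n_tests = comb(groups, 2)          # 10 pairwise t tests among 5 groups
per_test_alpha = alpha / n_tests   # Bonferroni-corrected per-test threshold
print(n_tests, per_test_alpha)     # 10 0.005
```

Dividing by the number of groups (0.05 / 5 = 0.01) is a common mistake; the correction must count the tests actually run, which here is the number of pairs.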

4: Option (A) is correct. One-factor ANOVA compares the observed variance of the group means (between-group variance) to the variance expected from sampling error alone (within-group variance), via F = MS_between / MS_within. If the between-group variance is significantly higher than expected, we conclude the population means differ. The technique is not designed to test the equality of several population variances; its objective is to test the equality of several population means.
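The logic of option (A) can be sketched on toy data (values invented for illustration): partition the variability into between-group and within-group mean squares and form their ratio.

```python
# One-way ANOVA F statistic on invented data, illustrating the
# between-group vs within-group variance comparison.
groups = [[1, 2, 3], [2, 3, 4], [7, 8, 9]]

n_total = sum(len(g) for g in groups)
grand_mean = sum(x for g in groups for x in g) / n_total

# Between-group sum of squares: how far the group means sit from the grand mean.
ss_between = sum(len(g) * (sum(g) / len(g) - grand_mean) ** 2 for g in groups)
# Within-group sum of squares: scatter of observations around their own group mean.
ss_within = sum((x - sum(g) / len(g)) ** 2 for g in groups for x in g)

df_between = len(groups) - 1        # k - 1 = 2
df_within = n_total - len(groups)   # N - k = 6

f_stat = (ss_between / df_between) / (ss_within / df_within)
print(f_stat)  # 31.0 for these toy groups
```

A large F, as here, means the group means vary far more than sampling error alone would produce, so we would conclude the population means differ.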
