Most video games have several difficulty settings, from "Easy" to "Hard" (many of these have very colorful names). A video game designer wants to determine how to structure the difficulty levels for a new game so that the average time it takes the typical player to play through "Hard" mode is longer than the time to play through "Easy" mode. A sample (Group 1) of 19 typical players took an average of 6.6 hours to complete the game on "Hard", with a standard deviation of 1.7. Another sample (Group 2) of 18 typical players took an average of 2.9 hours to complete the game on "Easy", with a standard deviation of 1.2. Calculate the test statistic to test the hypothesis described above, assuming the two population standard deviations are not equal (Case 2). Take all calculations toward the answer to three decimal places.
Data given is:
Sample sizes, n1 = 19, n2 = 18
Sample means, m1 = 6.6, m2 = 2.9
Sample standard deviations, S1 = 1.7, S2 = 1.2
The hypotheses are:
H0: μ1 = μ2
Ha: μ1 > μ2
Calculating standard error, SE = ((S1^2)/n1 + (S2^2)/n2)^0.5 = ((1.7^2)/19 + (1.2^2)/18)^0.5 = 0.482
Calculating test statistic, t = (m1 - m2)/SE = (6.6 - 2.9)/0.482 = 3.7/0.482 = 7.676
Degrees of freedom (Welch–Satterthwaite, since the population variances are not assumed equal):
df = (S1^2/n1 + S2^2/n2)^2 / [ (S1^2/n1)^2/(n1 - 1) + (S2^2/n2)^2/(n2 - 1) ] = (0.232)^2 / [ (0.152)^2/18 + (0.080)^2/17 ] ≈ 32.419, so df ≈ 32
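As a quick check, here is a short Python sketch (variable names are my own, not part of the problem) that reproduces the standard error, the test statistic, and the Welch degrees of freedom:

```python
import math

# Sample statistics given in the problem
n1, n2 = 19, 18          # sample sizes
m1, m2 = 6.6, 2.9        # sample means (hours)
s1, s2 = 1.7, 1.2        # sample standard deviations

# Standard error for two independent samples with unequal variances
se = math.sqrt(s1**2 / n1 + s2**2 / n2)

# Test statistic for H0: mu1 = mu2 vs Ha: mu1 > mu2
t_stat = (m1 - m2) / se

# Welch-Satterthwaite degrees of freedom
v1, v2 = s1**2 / n1, s2**2 / n2
df = (v1 + v2)**2 / (v1**2 / (n1 - 1) + v2**2 / (n2 - 1))

# Full-precision results: se ~ 0.482, t ~ 7.680, df ~ 32.419
# (rounding SE to 0.482 before dividing, as in the hand calculation, gives t = 7.676)
print(round(se, 3), round(t_stat, 3), round(df, 3))
```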
The corresponding p-value for this right-tailed t-test (t ≈ 7.676, df ≈ 32) is:
p < 0.00001
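For reference, the one-sided p-value can be checked with SciPy; this is just an illustrative verification, not part of the original hand calculation:

```python
from scipy import stats

t_stat, df = 7.676, 32.419
# Survival function gives the right-tail probability P(T > t)
p_value = stats.t.sf(t_stat, df)
print(p_value)  # prints a value far below 0.00001
```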
Since the p-value is far below any common significance level (e.g., α = 0.05), we reject the null hypothesis: there is strong evidence that the mean time to complete the game on "Hard" is longer than on "Easy".