A group of students estimated the length of one minute without reference to a watch or clock, and the times (seconds) are listed below. Use a 0.10 significance level to test the claim that these times are from a population with a mean equal to 60 seconds. Does it appear that students are reasonably good at estimating one minute?
75  93  47  77  54  31  72  72  76  58  71  77  105  101  71
Identify the test statistic.
t =
(Round to three decimal places as needed.)
The P-value is __
(Round to four decimal places as needed.)
From the given sample:
Sample size n = 15
Sample mean xbar = 72 sec
Sample standard deviation s ≈ 19.4092 sec
The population standard deviation σ is not known, so we use a one-sample t test. (The t test is the appropriate choice whenever σ is unknown, provided the sample comes from a roughly normal population or n > 30; here n = 15, so we also rely on the data showing no extreme departure from normality.)
The hypotheses are
H0: μ= 60 sec
Ha: μ ≠ 60 sec
This is two-sided test.
Test statistic: t = (xbar − μ) / (s/√n) = (72 − 60)/(19.4092/√15) ≈ 2.395
(If s is first rounded to 19.41, the computation gives 2.394; carrying full precision, t = 2.395.)
P-value (two-tailed, df = n − 1 = 14) = 0.0312
Decision: Reject H0, because the P-value (0.0312) is less than the significance level 0.10.
Conclusion: At the 0.10 significance level, there is sufficient evidence to conclude that the population mean differs from 60 seconds. So it does not appear that students are reasonably good at estimating one minute.
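The calculation above can be checked with a short Python sketch. It uses only the standard library; the Simpson's-rule integration of the t density is my own helper for the P-value (in practice you would typically call `scipy.stats.ttest_1samp` instead):

```python
import math
from statistics import mean, stdev

# Estimated lengths of one minute (seconds), from the problem
times = [75, 93, 47, 77, 54, 31, 72, 72, 76, 58, 71, 77, 105, 101, 71]
n = len(times)
mu0 = 60  # hypothesized population mean

xbar = mean(times)                      # 72
s = stdev(times)                        # sample standard deviation, ~19.4092
t_stat = (xbar - mu0) / (s / math.sqrt(n))

def t_pdf(x, df):
    """Density of Student's t distribution with df degrees of freedom."""
    c = math.gamma((df + 1) / 2) / (math.sqrt(df * math.pi) * math.gamma(df / 2))
    return c * (1 + x * x / df) ** (-(df + 1) / 2)

def two_sided_p(t, df, steps=10000):
    """Two-tailed P-value: 1 minus the central area from -|t| to |t|,
    computed with composite Simpson's rule (steps must be even)."""
    a, b = -abs(t), abs(t)
    h = (b - a) / steps
    total = t_pdf(a, df) + t_pdf(b, df)
    for i in range(1, steps):
        total += (4 if i % 2 else 2) * t_pdf(a + i * h, df)
    return 1 - total * h / 3

p = two_sided_p(t_stat, n - 1)
print(round(t_stat, 3), round(p, 4))  # t ≈ 2.395, P-value ≈ 0.0312
```

Since 0.0312 < 0.10, the code confirms the decision to reject H0.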