A company conducts surveys each year about the use of various media, such as television and video viewing. A representative sample of adult Americans was surveyed in 2010, and the mean number of minutes per week spent watching "time-shifted" television (watching television shows that were recorded and played back at a later time) for the people in this sample was 572 minutes. An independently selected representative sample of adults was surveyed in 2011, and the mean time spent watching time-shifted television per week for this sample was 648 minutes. Suppose that the sample size in each year was 1000 and that the sample standard deviations were 60 minutes for the 2010 sample and 80 minutes for the 2011 sample. Estimate the difference in the mean time spent watching time-shifted television in 2010 and the mean time spent in 2011 using a 99% confidence interval. (Use μ2010 − μ2011. Round your answers to two decimal places.)
Given:
x̄1 = 572.00, x̄2 = 648.00
n1 = 1000, n2 = 1000
s1 = 60.00, s2 = 80.00 (sample standard deviations)

Standard error: SE = √(s1²/n1 + s2²/n2) = √(60²/1000 + 80²/1000) = √10 ≈ 3.162

Point estimate of the difference: x̄1 − x̄2 = 572 − 648 = −76.000

For a 99% CI, the critical value is z = 2.576

Margin of error: E = z × SE = 2.576 × 3.1623 ≈ 8.146

Lower bound: (x̄1 − x̄2) − E = −76 − 8.146 = −84.146
Upper bound: (x̄1 − x̄2) + E = −76 + 8.146 = −67.854

So the 99% confidence interval for the difference in population means is −84.15 < μ2010 − μ2011 < −67.85.
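The steps above can be checked with a short script; this is a minimal sketch using only the Python standard library, with z = 2.576 hard-coded as the standard-normal critical value for 99% confidence:

```python
import math

x1, x2 = 572.0, 648.0   # sample means (2010, 2011)
s1, s2 = 60.0, 80.0     # sample standard deviations
n1, n2 = 1000, 1000     # sample sizes

se = math.sqrt(s1**2 / n1 + s2**2 / n2)  # standard error of x1 - x2
z = 2.576                                # z critical value, 99% CI
E = z * se                               # margin of error

point = x1 - x2                          # point estimate of the difference
lower, upper = point - E, point + E
print(round(lower, 2), round(upper, 2))  # -84.15 -67.85
```

Because the samples are large (n = 1000 each), the large-sample z interval is appropriate even though the population standard deviations are unknown.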