Suppose a study is being designed to measure the effect, on systolic blood pressure, of lowering sodium in the diet. From a pilot study it is observed that the standard deviation of systolic blood pressure in a community with a high sodium diet is 12 mmHg, while that in a group with a low sodium diet is 10.3 mmHg. If α = 0.05 and β = 0.10, how large a sample from each community should be selected if we want to be able to detect a 2 mmHg difference in blood pressure between the two communities?
We have:
Power, 1−β = 0.90
Type I error rate, α = 0.05
Difference in means, μA − μB = 2 mmHg
Group 'A' standard deviation, σA = 12 mmHg
Group 'B' standard deviation, σB = 10.3 mmHg
Sampling ratio, κ = nA/nB = 1
The formula to be used (for a one-sided test at level α) is:
nA = (σA² + κ·σB²) × [(z(1−α) + z(1−β)) / (μA − μB)]², with nB = nA/κ,
where z(1−α) and z(1−β) are the standard normal quantiles corresponding to the significance level and the desired power.
Putting all the values into the formula, we get:
nA = (12² + 1 × 10.3²) × [(1.645 + 1.28) / 2]² = (144 + 106.09) × 2.139 ≈ 535
Sample size, nA = nB = 535
Therefore, 535 people from each community should be selected to be able to detect a 2 mmHg difference in mean systolic blood pressure between the two communities with 90% power.
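As a quick check, here is a minimal Python sketch of the same calculation (assuming SciPy is available for the normal quantiles; the function name and signature are illustrative, not from the original answer):

```python
from math import ceil
from scipy.stats import norm

def sample_size_two_means(sigma_a, sigma_b, diff, alpha=0.05, beta=0.10,
                          kappa=1.0, one_sided=True):
    """Per-group sample size for detecting a difference `diff` between two
    independent means, with sampling ratio kappa = nA/nB."""
    z_alpha = norm.ppf(1 - alpha) if one_sided else norm.ppf(1 - alpha / 2)
    z_beta = norm.ppf(1 - beta)
    n_a = (sigma_a**2 + kappa * sigma_b**2) * ((z_alpha + z_beta) / diff) ** 2
    return ceil(n_a), ceil(n_a / kappa)  # round up to whole subjects

# Values from the problem: sigma_A = 12, sigma_B = 10.3, difference = 2 mmHg
print(sample_size_two_means(12, 10.3, 2))
# ≈ 535–536 per group, depending on how the z-quantiles are rounded
```

Using the exact quantiles (1.6449 and 1.2816) gives 535.5, which rounds up to 536; the answer of 535 above comes from using the rounded values 1.645 and 1.28.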