How might you go about computing z scores for a set of raw scores?
Simply put, a z-score is the number of standard deviations a data point lies from the mean. More technically, it measures how many standard deviations above or below the population mean a raw score falls. A z-score is also known as a standard score, and it can be placed on a normal distribution curve. In practice, nearly all z-scores fall between -3 standard deviations (far left of the normal distribution curve) and +3 standard deviations (far right of the curve). To compute a z-score, you need to know the population mean μ and the population standard deviation σ.
To calculate a z-score, subtract the mean from the raw score and divide the result by the standard deviation: z = (raw score - mean) / SD.
For example, raw score =15, mean = 10, standard deviation = 4.
Therefore 15 minus 10 equals 5, and 5 divided by 4 equals 1.25. Thus the z-score is 1.25.
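The calculation above can be sketched in Python; the function names `z_score` and `z_scores` are illustrative, and the mean and standard deviation are assumed to be known population parameters, as in the worked example.

```python
def z_score(raw, mu, sigma):
    """Number of standard deviations `raw` lies from the mean `mu`."""
    return (raw - mu) / sigma

def z_scores(raws, mu, sigma):
    """Compute the z-score of every raw score in a collection."""
    return [z_score(x, mu, sigma) for x in raws]

# Worked example from the text: raw score 15, mean 10, SD 4.
print(z_score(15, 10, 4))          # 1.25

# A whole set of raw scores at once:
print(z_scores([6, 10, 15], 10, 4))  # [-1.0, 0.0, 1.25]
```

If the population parameters are unknown, the sample mean and sample standard deviation are commonly substituted instead.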