So, I am trying to perform a goodness of fit test on a set of data. The data consists of one thousand samples with values ranging from 3.9 to 15.5. The problem states that this data could have either a gamma or a normal distribution, and I am meant to figure out which of these distributions it is closer to based on the results of two goodness of fit tests (one assuming a gamma distribution and the other assuming a normal distribution).
The instructions state that, for the step I am on, I should assume that the data has a normal distribution. My main issue in solving this problem is that the formula used for the goodness of fit test requires an observed value and an expected value. If the thousand data points provided represent the observed values, how would I go about figuring out the expected values?
OTHER INFORMATION: I have already made a histogram of the data and found the mean and standard deviation. If I can use this information to figure out the expected values, I would love to know how.
You need to make a frequency distribution for the data. If one of the class intervals in your frequency distribution is (a, b), the observed frequency for that class is the number of the 1000 observations that fall between a and b. The expected frequency for that class is
1000*P(a < X < b), where X follows either the gamma or the normal distribution, depending on which fit you are testing (for this step, the normal distribution with your sample mean and standard deviation as its parameters). The expected frequencies for the other classes are computed the same way.
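As a sketch of this procedure in Python: the snippet below bins the data into class intervals, counts the observed frequencies, and computes each expected frequency as 1000*P(a < X < b) under a normal distribution fitted with the sample mean and standard deviation. The `rng.gamma(...)` line only generates stand-in data, since the actual 1000 observations are not shown; the bin count of 10 is also an arbitrary choice for illustration.

```python
import numpy as np
from scipy import stats

# Stand-in for the real 1000 observations (assumption: you would load
# your own data here instead).
rng = np.random.default_rng(0)
data = rng.gamma(shape=9.0, scale=1.0, size=1000)

n = len(data)
mean, sd = data.mean(), data.std(ddof=1)

# Frequency distribution: observed counts per class interval.
observed, edges = np.histogram(data, bins=10)

# Expected counts under the fitted normal: n * P(a < X < b) per class.
# Extend the outer edges to +/- infinity so the class probabilities
# cover the whole distribution and sum to 1.
edges_ext = edges.astype(float).copy()
edges_ext[0], edges_ext[-1] = -np.inf, np.inf
cdf = stats.norm.cdf(edges_ext, loc=mean, scale=sd)
expected = n * np.diff(cdf)

# Chi-square goodness-of-fit statistic: sum((O - E)^2 / E).
chi2 = np.sum((observed - expected) ** 2 / expected)
print(chi2)
```

Repeating the same computation with `stats.gamma` in place of `stats.norm` (fitting its parameters from the data) gives the second test statistic; the distribution with the smaller statistic fits the data more closely. Note that chi-square tests are usually considered reliable only when every expected frequency is at least about 5, so sparse outer classes may need to be merged.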