Frequentism is the philosophy that probabilities are statistical in the sense that they give the limiting frequency ratios of outcomes as the number of trials grows large. For tiny probabilities, such as exponentially small ones, would this require exponentially many trials?
I know, I know, you are thinking that if the probability of an outcome is exponentially small, then we won't expect it to occur in any trial when the number of trials is some reasonable, practical number. But this overlooks the situation where, say, the possible outcomes are long strings of characters taken from some alphabet in the computer-science sense, and the probability distribution is such that the probability of any particular string is exponentially small. In such a case, if someone were to specify a particular string in advance, and the number of trials were realistic, no one would expect that string to appear in any trial outcome. On the other hand, if you were to look at the trial outcomes, every one of them would correspond to a string that, had it been chosen a priori, would be considered practically improbable. In practice, what experimenters do in such a situation is run statistical randomness tests on the strings obtained, but does this take us out of the realm of frequentism?
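The tension described above can be made concrete with a small sketch. The alphabet, string length, and the simple chi-square frequency check below are all hypothetical choices for illustration, not part of any standard experiment:

```python
import math
import random

# Illustrative setup: draw a string of length n over a k-letter alphabet
# under the uniform distribution, so every particular string has
# probability k**-n, which is exponentially small in n.
random.seed(0)
k, n = 4, 100                      # hypothetical alphabet size and string length
alphabet = "ACGT"
s = "".join(random.choice(alphabet) for _ in range(n))

p = k ** -n                        # a priori probability of this exact string
print(f"P(this exact string) = {p:.3e}")   # astronomically small

# Yet the observed string passes a crude randomness check: each letter
# should appear roughly n/k times. A chi-square statistic of order
# k - 1 = 3 is consistent with the uniform model.
counts = [s.count(c) for c in alphabet]
chi2 = sum((c - n / k) ** 2 / (n / k) for c in counts)
print(f"chi-square = {chi2:.2f} (expected order ~ {k - 1})")
```

The point is that the randomness test makes a statement about statistics of the string (letter frequencies), not about the exponentially improbable string itself.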
Can frequentism be saved in such a case?
One way to address this question is to respond that frequentism is not the standard interpretation of probability in physics, so it doesn't have to be saved. See, for example, Section 3.3 of this Stanford Encyclopedia of Philosophy page for an account of the woes of frequentism.
Instead, some form of statistical hypothesis testing is the de facto standard. That is why one hears that an experiment is consistent or inconsistent with a given (probabilistic) physical theory at "the 3 sigma level", or "at 5 sigmas", etc., which is a concise way of reporting levels of statistical significance. If experimental results fall outside 5 sigmas, that is very strong, though not absolutely compelling, motivation to introduce a different probabilistic model whose predictions are more consistent with the statistics of the experimental results.
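A minimal sketch of what the sigma levels mean as significance levels: for a standard normal statistic, the two-sided tail probability beyond n sigmas is erfc(n/√2). (The function name below is my own; the conversion itself is standard.)

```python
import math

def two_sided_p_value(n_sigma: float) -> float:
    # Two-sided tail probability of a standard normal distribution
    # beyond n_sigma standard deviations.
    return math.erfc(n_sigma / math.sqrt(2))

for n in (3, 5):
    print(f"{n} sigma -> p = {two_sided_p_value(n):.2e}")
# 3 sigma -> p ~ 2.7e-03
# 5 sigma -> p ~ 5.7e-07
```

So "5 sigmas" reports a probability of roughly one in 1.7 million that results at least this extreme would occur under the hypothesized model.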
Bayesian analysis fits into this picture as a formulaic way to modify a given statistical hypothesis on the basis of experimental data, on the assumption that the statistical model is broadly correct and only its parameters have to be determined. If the structure of a probabilistic model is not in sympathy with the data, however, Bayesian rules will not suggest, for example, a different space-time structure that would work better.
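The point about updating parameters within a fixed model structure can be sketched with the simplest conjugate example, an unknown coin bias with a Beta prior (the prior and data below are hypothetical):

```python
# The *form* of the model (independent Bernoulli trials) is taken as
# correct; only the bias parameter is updated from data.
alpha, beta = 1.0, 1.0            # uniform Beta(1, 1) prior on the bias
heads, tails = 7, 3               # hypothetical experimental data

# Conjugate update: posterior is Beta(alpha + heads, beta + tails).
alpha_post, beta_post = alpha + heads, beta + tails
posterior_mean = alpha_post / (alpha_post + beta_post)
print(f"posterior mean bias = {posterior_mean:.3f}")  # 8/12 ~ 0.667
```

No amount of data fed through this update can change the Bernoulli structure itself; it can only move the posterior over the bias, which is exactly the limitation described above.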