Goodness of fit
The goodness of fit of a statistical model describes how well it fits a set of observations. Measures of goodness of fit typically summarize the discrepancy between observed values and the values expected under the model in question. Such measures can be used in statistical hypothesis testing, e.g. to test for normality of residuals, to test whether two samples are drawn from identical distributions (see Kolmogorov–Smirnov test), or whether outcome frequencies follow a specified distribution (see Pearson's chi-square test). In the analysis of variance, one of the components into which the variance is partitioned may be a lack-of-fit sum of squares.
Fit of distributions
In assessing whether a given distribution is suited to a data-set, the following tests and their underlying measures of fit can be used:
- Kolmogorov–Smirnov test
- Cramér–von Mises criterion
- Anderson–Darling test
- Shapiro–Wilk test
- Chi-squared test
- Akaike information criterion
- Hosmer–Lemeshow test
- Kuiper's test
Regression analysis
In regression analysis, more specifically regression validation, the following topics relate to goodness of fit:
- Coefficient of determination (R²)
- Lack-of-fit sum of squares
- Mallows's Cp criterion
- Prediction error
- Reduced chi-square
Categorical data
The following are examples that arise in the context of categorical data.
Pearson's chi-square test
Pearson's chi-square test uses a measure of goodness of fit which is the sum of differences between observed and expected outcome frequencies (that is, counts of observations), each squared and divided by the expectation:

\chi^2 = \sum_{i=1}^{k} \frac{(O_i - E_i)^2}{E_i}

where:
- O_i = an observed count for bin i
- E_i = an expected count for bin i, asserted by the null hypothesis

The expected frequency is calculated by:

E_i = N \left( F(Y_u) - F(Y_l) \right)

where:
- F = the cumulative distribution function for the probability distribution being tested
- Y_u = the upper limit for class i
- Y_l = the lower limit for class i
- N = the sample size

The resulting value can be compared with a chi-square distribution to determine the goodness of fit. The chi-square distribution has (k − c) degrees of freedom, where k is the number of non-empty bins and c is the number of estimated parameters (including location, scale, and shape parameters) for the distribution plus one. For example, for a 3-parameter Weibull distribution, c = 4.
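As a concrete illustration, the statistic can be computed directly. The sketch below is a minimal example, not part of the original article: the observed counts are invented, the null hypothesis is assumed to be equal-probability bins, and SciPy is used only for the chi-square tail probability.

```python
import numpy as np
from scipy.stats import chi2

# Hypothetical observed counts in k = 5 bins (invented example data)
observed = np.array([18, 22, 30, 17, 13])
n = observed.sum()

# Assumed null hypothesis: all bins equally likely, so E_i = n / k
expected = np.full(len(observed), n / len(observed))

# Pearson's statistic: sum over non-empty bins of (O_i - E_i)^2 / E_i
chi_sq = np.sum((observed - expected) ** 2 / expected)

# Degrees of freedom k - c, with c = 1 here since no parameters were estimated
dof = len(observed) - 1
p_value = chi2.sf(chi_sq, dof)  # upper-tail probability of the chi-square distribution

print(f"chi-square = {chi_sq:.3f}, dof = {dof}, p = {p_value:.4f}")
```

For the same inputs, scipy.stats.chisquare(observed, expected) returns the identical statistic and p-value.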
Binomial case
A binomial experiment is a sequence of independent trials in which the trials can result in one of two outcomes, success or failure. There are n trials each with probability of success, denoted by p. Provided that n p_i \gg 1 for every i (where i = 1, 2, ..., k), then

\chi^2 = \sum_{i=1}^{k} \frac{(N_i - n p_i)^2}{n p_i}

where N_i is the observed count in bin i and p_i is the hypothesized probability for bin i. This has approximately a chi-square distribution with k − 1 degrees of freedom. The fact that there are k − 1 degrees of freedom is a consequence of the restriction \sum N_i = n. There are k observed bin counts, but once any k − 1 of them are known, the remaining one is uniquely determined. In other words, only k − 1 bin counts are freely determined, hence k − 1 degrees of freedom.
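To make the statistic and the n p_i \gg 1 condition concrete, here is a hedged sketch for a sample of n trials falling into k bins with hypothesized probabilities p_i; the probabilities and counts are invented for illustration.

```python
import numpy as np
from scipy.stats import chi2

# Hypothesized bin probabilities under the null (invented; must sum to 1)
p = np.array([0.2, 0.3, 0.5])
# Invented observed bin counts N_i with sum(N_i) = n = 100
N = np.array([25, 28, 47])
n = N.sum()

# The chi-square approximation assumes n * p_i >> 1 in every bin
assert np.all(n * p > 1)

# Statistic: sum over bins of (N_i - n p_i)^2 / (n p_i)
chi_sq = np.sum((N - n * p) ** 2 / (n * p))

# k - 1 degrees of freedom: the constraint sum(N_i) = n fixes the last count
dof = len(N) - 1
print(f"chi-square = {chi_sq:.3f}, dof = {dof}, p = {chi2.sf(chi_sq, dof):.4f}")
```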
G-test
G-tests are likelihood-ratio tests of statistical significance that are increasingly being used in situations where Pearson's chi-square tests were previously recommended. The general formula for G is

G = 2 \sum_{i} O_i \ln\left(\frac{O_i}{E_i}\right)

where O_i and E_i are the same as for the chi-square test, \ln denotes the natural logarithm, and the sum is taken over all non-empty bins. Furthermore, the total observed count should be equal to the total expected count:

\sum_{i} O_i = \sum_{i} E_i = N

where N is the total number of observations. G-tests have been recommended at least since the 1981 edition of the popular statistics textbook by Robert R. Sokal and F. James Rohlf.
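The G statistic is straightforward to compute from the same quantities as Pearson's statistic. The sketch below uses invented counts under an assumed equal-probability null; SciPy's power_divergence with lambda_="log-likelihood" computes the same likelihood-ratio statistic and is shown as a cross-check.

```python
import numpy as np
from scipy.stats import power_divergence

# Invented observed counts; expected counts from an assumed equal-probability null.
# Totals must match: sum O_i = sum E_i = N.
observed = np.array([18, 22, 30, 17, 13])
expected = np.full(len(observed), observed.sum() / len(observed))

# G = 2 * sum over non-empty bins of O_i * ln(O_i / E_i)
G = 2.0 * np.sum(observed * np.log(observed / expected))

# Cross-check against SciPy's implementation of the likelihood-ratio (G) statistic
stat, p_value = power_divergence(observed, f_exp=expected, lambda_="log-likelihood")
print(f"G = {G:.3f} (scipy: {stat:.3f}), p = {p_value:.4f}")
```

Under the null hypothesis, G is compared with the same chi-square distribution (here with k − 1 degrees of freedom) as Pearson's statistic.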