Conventional statistical procedures are also called parametric tests. In a parametric test, a sample statistic is computed to estimate a population parameter. Because this estimation process involves a sample, a sampling distribution, and a population, certain parametric assumptions are required to ensure that all three components are compatible with one another.
For example, in Analysis of Variance (ANOVA) there are three assumptions:
- Observations are independent.
- The sample data have a normal distribution.
- Scores in different groups have homogeneous variances.
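As an illustration, the sketch below checks the last two assumptions on hypothetical group data; independence is a property of the study design (e.g., random assignment, no repeated measures) and cannot be tested from the numbers alone. The three groups, the seed, and the conventional 0.05 cutoff are assumptions made up for this example.

```python
# A minimal sketch of assumption checking, assuming three hypothetical
# treatment groups; the data and cutoff are illustrative, not prescriptive.
import numpy as np
from scipy import stats

rng = np.random.default_rng(seed=1)
g1 = rng.normal(loc=50, scale=10, size=30)   # hypothetical group scores
g2 = rng.normal(loc=55, scale=10, size=30)
g3 = rng.normal(loc=60, scale=10, size=30)

# Assumption 1 (independence) is a design issue and is not testable here.

# Assumption 2: normality within each group (Shapiro-Wilk).
for i, g in enumerate([g1, g2, g3], start=1):
    w, p = stats.shapiro(g)
    print(f"Group {i}: Shapiro-Wilk W = {w:.3f}, p = {p:.3f}")

# Assumption 3: homogeneity of variances (Levene's test).
stat, p = stats.levene(g1, g2, g3)
print(f"Levene's test: W = {stat:.3f}, p = {p:.3f}")
```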
Why are these assumptions important? Take ANOVA as an example. ANOVA is a procedure for comparing means in terms of variance, with reference to a normal distribution. The inventor of ANOVA, Sir R. A. Fisher (1935), clearly explained the relationship among the mean, the variance, and the normal distribution: "The normal distribution has only two characteristics, its mean and its variance. The mean determines the bias of our estimate, and the variance determines its precision" (p. 42). It is generally known that the smaller the variance, the more precise the estimate.
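Fisher's point about precision can be seen in a small simulation: sample means drawn from a low-variance population cluster far more tightly around the true mean than those drawn from a high-variance one. The population standard deviations, sample size, and seed below are arbitrary choices for the demonstration.

```python
# Simulating the precision point: the spread of sample means shrinks
# as the population variance shrinks.
import numpy as np

rng = np.random.default_rng(seed=2)
true_mean = 100
for sigma in (5, 20):   # two illustrative population SDs
    # 10,000 samples of size 25 each; take the mean of every sample
    sample_means = rng.normal(true_mean, sigma, size=(10_000, 25)).mean(axis=1)
    print(f"sigma = {sigma:2d}: SD of sample means = {sample_means.std():.2f}")
# Theory agrees: the SD of the sample mean is sigma / sqrt(n),
# i.e., 5/5 = 1.0 versus 20/5 = 4.0.
```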
Put another way, the purpose of ANOVA is to extract precise information out of bias, or, in plain words, to filter signal out of noise. When the data are skewed (non-normal), the means no longer reflect the central location, and thus the signal is biased. When the variances are unequal, the groups do not carry the same level of noise, and thus the comparison is invalid. More importantly, the purpose of a parametric test is to make inferences from the sample statistic to the population parameter through a sampling distribution.
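The "biased signal" under skewness is easy to demonstrate: with a right-skewed distribution, the mean is pulled toward the long tail and away from where most scores sit, while the median stays with the bulk of the data. The exponential distribution and its scale are assumptions chosen for the illustration.

```python
# With skewed data the mean drifts from the central location;
# the median remains a better marker of the bulk of the scores.
import numpy as np

rng = np.random.default_rng(seed=3)
skewed = rng.exponential(scale=10, size=100_000)   # right-skewed data
print(f"mean   = {skewed.mean():.2f}")      # ~10.0, dragged toward the tail
print(f"median = {np.median(skewed):.2f}")  # ~6.93 (10 * ln 2), the bulk's center
```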
When the assumptions are not met in the sample data, the statistic may not be a good estimate of the parameter. It is incorrect to say that because the population is assumed to be normal and equal in variance, the researcher therefore demands the same properties in the sample. Actually, the population is infinite and unknown; it may or may not possess those attributes. The assumptions are imposed on the data because those attributes are found in sampling distributions. However, the acquired data very often fail to meet these assumptions. There are several alternatives for rectifying this situation.
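As one example of such an alternative (not necessarily the one this article takes up next), a rank-based procedure such as the Kruskal-Wallis test compares groups without requiring normality. A minimal sketch, with hypothetical skewed groups:

```python
# One common fallback when normality fails: the Kruskal-Wallis test,
# a rank-based analogue of one-way ANOVA. Data here are illustrative.
import numpy as np
from scipy import stats

rng = np.random.default_rng(seed=4)
g1 = rng.exponential(scale=10, size=30)   # hypothetical right-skewed groups
g2 = rng.exponential(scale=12, size=30)
g3 = rng.exponential(scale=15, size=30)

h, p = stats.kruskal(g1, g2, g3)
print(f"Kruskal-Wallis H = {h:.3f}, p = {p:.3f}")
```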