Type I statistical error: definition


When posing a question to be studied, the null hypothesis is the hypothesis that there is no difference between the two populations being compared. Wrongly rejecting the null hypothesis, that is, declaring a statistically significant difference in the data when in fact none exists (a false positive), is called a type I error or alpha error. The probability of a type I error depends on the level of significance chosen by the investigator and on whether a true difference exists between the two experimental conditions. The smaller the chosen alpha value, the smaller the likelihood of making a type I error. The significance level (alpha) should be chosen before collecting data and is most typically .05 in biomedical research. An alpha of .05 means that, when the null hypothesis is true, there is a 5% chance of making a type I error.

A type I error exists when the null hypothesis is incorrectly rejected: a false alarm, or false positive, such as a positive pregnancy test in a patient who is not pregnant.

A level of significance of 5%, or 1 in 20, is arbitrarily set; it corresponds to a 5% chance of making a type I error. If p denotes the probability of obtaining the observed difference (or a larger one) when the null hypothesis is true, then p < 0.05 means there is less than a 5% chance that a difference this large arose by chance alone.
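
To make the 5% figure concrete, here is a minimal Monte Carlo sketch in Python (assuming NumPy and SciPy are available). Both samples are drawn from the same population, so the null hypothesis is true by construction, and the simulation counts how often a two-sample t-test nevertheless rejects at alpha = 0.05. The group size, number of trials, and choice of t-test are illustrative assumptions, not from the source.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
alpha = 0.05          # significance level chosen before "collecting" data
n_trials = 10_000
false_positives = 0

# Repeated experiments in which the null hypothesis is TRUE:
# both groups come from the identical distribution.
for _ in range(n_trials):
    group_a = rng.normal(loc=0.0, scale=1.0, size=30)
    group_b = rng.normal(loc=0.0, scale=1.0, size=30)
    _, p = stats.ttest_ind(group_a, group_b)
    if p < alpha:
        false_positives += 1  # type I error: rejected a true null

print(f"Observed type I error rate: {false_positives / n_trials:.3f}")
```

Running this typically prints a rate near 0.05, matching the chosen alpha.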

Why not always set a very small alpha value? The consequence of choosing an alpha of .01 rather than .05 is an increased risk of making a type II, or beta, error: a failure to reject the null hypothesis when it is in fact false. The smaller the alpha, the more likely one is to make a type II error. The power of a test is 1 - beta. The probability of making a type II error depends on four factors, each illustrated in the simulation following this list:
1) the size of alpha (as discussed above)
2) the variability within the population (more variability results in a greater likelihood of type II error)
3) the sample size (more subjects results in a lower chance of type II error)
4) the magnitude of the difference between the experimental conditions (smaller differences result in a higher likelihood of type II error)
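
The dependence of power (and hence beta = 1 - power) on these four factors can be demonstrated with a similar simulation sketch, again assuming NumPy and SciPy. The effect sizes, standard deviations, and sample sizes below are arbitrary illustrative values, and the two-sample t-test is an assumed choice of test.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

def power(effect_size, n, alpha=0.05, sd=1.0, n_trials=5_000):
    """Estimate power (1 - beta) of a two-sample t-test by simulation."""
    rejections = 0
    for _ in range(n_trials):
        group_a = rng.normal(0.0, sd, size=n)
        group_b = rng.normal(effect_size, sd, size=n)  # a true difference exists
        _, p = stats.ttest_ind(group_a, group_b)
        if p < alpha:
            rejections += 1  # correct rejection of a false null
    return rejections / n_trials

# Each line varies one of the four factors relative to the baseline:
print(power(0.5, n=30))              # baseline
print(power(0.5, n=30, alpha=0.01))  # 1) smaller alpha  -> lower power, more type II error
print(power(0.5, n=30, sd=2.0))      # 2) more variability -> lower power
print(power(0.5, n=100))             # 3) larger sample  -> higher power
print(power(0.2, n=30))              # 4) smaller true difference -> lower power
```

Printing the estimates side by side shows power falling when alpha shrinks, variability grows, or the true difference shrinks, and rising as the sample size increases.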
