The power of a statistical test is the probability that the test will reject a false null hypothesis (that it will not make a Type II error). As power increases, the chances of a Type II error decrease. **The probability of a Type II error is referred to as the false negative rate (β). Therefore power is equal to 1 − β.**

Power analysis can be conducted either before (a priori) or after (post hoc) data are collected. A priori power analysis is conducted prior to the research study, and is typically used to determine an appropriate sample size to achieve adequate power. Post hoc power analysis is conducted after a study has been completed, and uses the obtained sample size and effect size to determine what the power was in the study, assuming the effect size in the sample is equal to the effect size in the population.
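As a sketch of a priori power analysis, the required sample size for a two-sided, two-sample z-test (a normal approximation with known variance) can be computed in closed form: n per group = 2·((z₁₋α/₂ + z₁₋β)/d)², where d is the standardized effect size (Cohen's d). The function name below is illustrative, not from any particular library:

```python
from math import ceil
from statistics import NormalDist

def sample_size_per_group(effect_size, alpha=0.05, power=0.80):
    """A priori sample size for a two-sided, two-sample z-test.

    effect_size is the standardized mean difference (Cohen's d).
    The normal-approximation formula
        n = 2 * ((z_{1-alpha/2} + z_{1-beta}) / d) ** 2
    gives the n required *per group* to reach the target power.
    """
    nd = NormalDist()
    z_alpha = nd.inv_cdf(1 - alpha / 2)  # critical value of the two-sided test
    z_beta = nd.inv_cdf(power)           # z_{1-beta}
    return ceil(2 * ((z_alpha + z_beta) / effect_size) ** 2)

# A "medium" effect (d = 0.5) at alpha = 0.05 and 80% power:
print(sample_size_per_group(0.5))  # → 63 per group
```

The exact t-test answer is slightly larger (about 64 per group); the z-approximation is the usual back-of-the-envelope version.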

The power of a test is the probability that the test will find a statistically significant difference between two populations, as a function of the size of the true difference between those two populations. Note that power is the probability of finding a difference that does exist, as opposed to the likelihood of declaring a difference that does not exist.
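This "power as a function of the true difference" view can be made concrete for a two-sided, two-sample z-test with known standard deviation (an illustrative setup; the function name is not from any library):

```python
from math import sqrt
from statistics import NormalDist

def ztest_power(diff, n, sd=1.0, alpha=0.05):
    """Power of a two-sided, two-sample z-test (known sd, n per group)
    as a function of the true difference in population means."""
    nd = NormalDist()
    crit = nd.inv_cdf(1 - alpha / 2)
    shift = diff / (sd * sqrt(2.0 / n))  # standardized true difference
    # probability the test statistic lands in either rejection region
    return nd.cdf(-crit - shift) + 1 - nd.cdf(crit - shift)

for diff in (0.0, 0.25, 0.5, 1.0):
    print(f"true difference {diff}: power {ztest_power(diff, n=30):.3f}")
```

Note that when the true difference is zero, this "power" is simply the Type I error rate α (0.05 here): the probability of declaring a difference that does not exist.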

Power increases with:

- Sample size (n): a sample's reliability depends on its size (the smaller the sample, the larger the sampling error), so increasing the sample size increases statistical power.
- Effect size: the degree to which a specified alternative hypothesis deviates from the null hypothesis.
- The statistical significance criterion used in the test (the significance level, α): relaxing the criterion (e.g., α = 0.10 instead of 0.05) widens the rejection region and so increases power, at the cost of a higher Type I error rate.
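All three effects can be seen in a short Monte Carlo sketch. Assuming normal data with sd = 1 and a two-sided z-test (an illustrative setup, not a specific library's API), the estimated power rises when we enlarge the sample, the effect, or α:

```python
import random
from math import sqrt
from statistics import NormalDist

def simulated_power(n, effect_size, alpha=0.05, trials=2000, seed=0):
    """Estimate power by Monte Carlo: draw two normal samples (sd = 1)
    whose means differ by effect_size, run a two-sided z-test, and
    count how often the true difference is detected."""
    rng = random.Random(seed)
    crit = NormalDist().inv_cdf(1 - alpha / 2)
    hits = 0
    for _ in range(trials):
        a = [rng.gauss(0.0, 1.0) for _ in range(n)]
        b = [rng.gauss(effect_size, 1.0) for _ in range(n)]
        z = (sum(b) / n - sum(a) / n) / sqrt(2.0 / n)
        hits += abs(z) > crit
    return hits / trials

base = simulated_power(n=20, effect_size=0.5)
print("baseline:      ", base)
print("larger sample: ", simulated_power(n=80, effect_size=0.5))
print("larger effect: ", simulated_power(n=20, effect_size=1.0))
print("looser alpha:  ", simulated_power(n=20, effect_size=0.5, alpha=0.10))
```

Each of the last three estimates exceeds the baseline, matching the list above.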