Statistical tests can be divided into two groups: parametric and non-parametric. One of the most common questions students ask me is: what is the difference between parametric and non-parametric tests, and why is the distinction important?
Why the distinction is important
The distinction is important because if you use the wrong statistics test you could:
1) find a correlation (or cause-and-effect relationship) when there isn't one
2) fail to find a correlation (or cause-and-effect relationship) when there is one
3) find a strong correlation (or cause-and-effect relationship) when there is only a weak one, or vice versa
In other words, use the wrong test and the resulting statistics could be misleading. Choosing the right test is sometimes tricky, and the right choice can sometimes be a matter of judgement.
To oversimplify things, you could say that parametric tests make assumptions about the data being examined. Non-parametric tests make fewer assumptions (some tests assume symmetry, but don't worry about that for now). You want to use parametric tests when possible. The reason is that parametric tests have more 'power': they produce more accurate results when the assumptions they make are correct. Those assumptions can be false, however, which is why non-parametric tests are often considered more 'robust'.
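That extra 'power' can be seen in a rough simulation of my own (not from the original post, and assuming scipy and numpy are available): when the data really is normal and the groups really do differ, the parametric t-test detects the difference at least as often as its non-parametric counterpart, the Mann-Whitney U test.

```python
# Rough power comparison: t-test (parametric) vs Mann-Whitney U
# (non-parametric) when the parametric assumptions actually hold.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
n_sims, alpha = 2000, 0.05
t_hits = u_hits = 0  # how often each test finds the (real) difference

for _ in range(n_sims):
    # Two normally distributed groups with a genuine difference in means
    a = rng.normal(loc=0.0, scale=1.0, size=25)
    b = rng.normal(loc=0.6, scale=1.0, size=25)
    if stats.ttest_ind(a, b).pvalue < alpha:
        t_hits += 1
    if stats.mannwhitneyu(a, b).pvalue < alpha:
        u_hits += 1

print(f"t-test power:       {t_hits / n_sims:.2f}")
print(f"Mann-Whitney power: {u_hits / n_sims:.2f}")
```

The two powers come out close here because the Mann-Whitney U test is nearly as efficient as the t-test on normal data; the gap widens in other settings, but the parametric test never needs more data to achieve the same power when its assumptions hold.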
Parametric tests are generally appropriate when the data is normally distributed (for an explanation of the normal distribution, see: https://dtpsychology.wordpress.com/2013/03/23/115/). For this reason, parametric tests are sometimes referred to as 'distribution tests', while non-parametric tests are often called 'distribution-free tests'. Parametric tests are usually appropriate when examining interval (cardinal) or ratio data. Non-parametric tests are usually appropriate when examining ordinal or nominal data (for explanations of the types of data, see: https://dtpsychology.wordpress.com/2013/03/21/the-four-levels-of-measurement-noir-understanding-the-differences-between-types-of-data/). Non-parametric tests do not require numerical data: individual values can be ranked, which means that outliers are less likely to have a disproportionate influence on the results. However, ignoring the size of the differences between values is not always a good idea and may reduce the accuracy of the test results.
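A small illustration of my own of the ranking point (the data here is made up for the demo): a single extreme pair of values can make the parametric Pearson correlation look very strong, while the rank-based Spearman correlation, which only sees each value's position in the ordering, stays modest.

```python
# One outlier pair (100, 100) added to otherwise unrelated data.
import numpy as np
from scipy import stats

x = np.array([1, 2, 3, 4, 5, 6, 7, 8, 9, 100.0])
y = np.array([5, 2, 8, 1, 9, 3, 7, 4, 6, 100.0])

pearson, _ = stats.pearsonr(x, y)    # uses the raw values
spearman, _ = stats.spearmanr(x, y)  # uses the ranks only

print(f"Pearson:  {pearson:.3f}")   # ~0.99: dominated by the single outlier
print(f"Spearman: {spearman:.3f}")  # ~0.39: the outlier is just rank 10
```

Once the values are converted to ranks, the outlier is simply "the largest value" and cannot pull the statistic around, which is exactly the robustness described above; the price is that the genuine sizes of the differences are thrown away.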
Summary: The following are my own definitions:
Parametric (distribution) tests are statistical analysis tests that are generally appropriate to use when the data being examined is interval or ratio, and is based on a large sample and/or produces an identifiable Gaussian function (bell-shaped curve) indicating a normal distribution.
Non-parametric (distribution-free) tests are statistical analysis tests that are less powerful than parametric tests but generally appropriate to use when the data being examined is ordinal or nominal, and is based on a small sample or does not have a clear Gaussian shape.
If you understand those definitions, then you understand the difference between parametric and non-parametric. If the parametric assumptions are met, you use a parametric test. If they're not met, you use a non-parametric test. If the assumptions are partially met, then it's a judgement call. In general, try to avoid non-parametric tests when possible (because they're less powerful). Hope that helps.
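The decision rule above can be sketched in code. This is a minimal sketch of my own, not a fixed recipe: it checks normality with the Shapiro-Wilk test and then picks an independent-samples t-test (parametric) or a Mann-Whitney U test (non-parametric); the function name and threshold are illustrative choices.

```python
# Sketch: let a normality check decide between a parametric and a
# non-parametric comparison of two independent groups.
import numpy as np
from scipy import stats

def compare_groups(a, b, alpha=0.05):
    """Compare two independent samples, choosing the test from normality."""
    a, b = np.asarray(a, dtype=float), np.asarray(b, dtype=float)
    normal = (stats.shapiro(a).pvalue > alpha and
              stats.shapiro(b).pvalue > alpha)
    if normal:
        # Parametric assumptions look plausible: independent-samples t-test
        return "t-test", stats.ttest_ind(a, b).pvalue
    # Otherwise fall back to the distribution-free Mann-Whitney U test
    return "mann-whitney", stats.mannwhitneyu(a, b).pvalue

rng = np.random.default_rng(0)
name, p = compare_groups(rng.normal(0, 1, 30), rng.normal(0.5, 1, 30))
print(name, round(p, 4))
```

In practice the 'partially met' cases from the paragraph above still need human judgement (sample size, skew, outliers), so a script like this is a starting point rather than a substitute for looking at the data.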