Many college programs require statistics. A key concept presented in a typical statistics class is the normal distribution of data, or bell curve. Understanding how to interpret a set of data that falls in a normal distribution makes understanding scientific studies possible. A good grasp of the bell curve, the mean, standard deviations and their relationship to percentiles will make you conversant in the language of scientific research.
Normal Distribution and the Bell Curve
When many types of naturally occurring data, such as height, intelligence quotients and blood pressure, are plotted on a histogram, with the scores on the horizontal axis and the frequency of each score on the vertical axis, the data falls into a bell-shaped pattern called a bell curve. This pattern, known as a normal distribution, lends itself to statistical analysis.
The Mean and Median
The mean, or average, of all the scores falls at the approximate middle of the bell curve. The mean represents the 50th percentile: half of all scores fall above it, and half fall below. In normally distributed data, the median also falls at the center of the bell curve, coinciding with the mean, as does the mode, the score with the most occurrences.
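A short sketch can illustrate this. Assuming scores on an IQ-style scale (mean 100, standard deviation 15, values chosen purely for illustration), simulated normal data shows the mean and median landing together at the center of the curve:

```python
import random
import statistics

# Simulate normally distributed scores (mean 100, SD 15, like an IQ scale)
# and check that the mean and median nearly coincide at the center.
random.seed(0)
scores = [random.gauss(100, 15) for _ in range(100_000)]

mean = statistics.mean(scores)
median = statistics.median(scores)
print(f"mean   = {mean:.2f}")
print(f"median = {median:.2f}")
# Both land very close to 100, the center of the bell curve.
```

With enough samples, the two summaries agree to within a fraction of a point, which is exactly what the symmetry of the bell curve predicts.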
Standard Deviations and Variance
How far from the mean is a given measure? In normally distributed sets of data, a measure can be described as lying a certain number of standard deviations from the mean. The standard deviation, the square root of the variance, measures how dispersed, or spread out, the data is around the mean. If the measures have a lot of variance, the bell curve is wide and flat; if they have little variance, it is narrow and tall. The more standard deviations a score lies from the mean, the less likely that score is to occur in nature.
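Expressing a measure as "a certain number of standard deviations from the mean" is the z-score. A minimal sketch, using a hypothetical sample of test scores (the values are made up for illustration):

```python
import statistics

# Hypothetical sample of test scores.
scores = [82, 91, 77, 95, 88, 84, 90, 79, 86, 93]

mean = statistics.mean(scores)
sd = statistics.stdev(scores)  # square root of the sample variance

# A z-score says how many standard deviations a score lies from the mean.
for s in (77, 86, 95):
    z = (s - mean) / sd
    print(f"score {s}: {z:+.2f} standard deviations from the mean")
```

A negative z-score falls below the mean, a positive one above it; a score near the mean has a z-score near zero.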
Percentiles and the Empirical Rule
When looking at a bell curve, 68% of the measures lie within one standard deviation of the mean, and 95% lie within two standard deviations. A whopping 99.7% of the measures fall within three standard deviations of the mean. These percentages, termed the empirical rule, are the foundation of statistical analysis of naturally occurring phenomena. If a medical researcher, for instance, finds that a group that took a certain medication to control cholesterol now has cholesterol measures two standard deviations from the mean, that result would be unlikely to have occurred by chance.
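The empirical rule's percentages are not arbitrary; they follow from the normal distribution itself. The fraction of values within k standard deviations of the mean is erf(k / √2), which Python's standard library can evaluate directly:

```python
import math

# Fraction of a normal distribution lying within k standard deviations
# of the mean: erf(k / sqrt(2)).
for k in (1, 2, 3):
    fraction = math.erf(k / math.sqrt(2))
    print(f"within {k} standard deviation(s): {fraction:.1%}")
# within 1 standard deviation(s): 68.3%
# within 2 standard deviation(s): 95.4%
# within 3 standard deviation(s): 99.7%
```

Rounded, these are the familiar 68-95-99.7 figures of the empirical rule.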