How to Use Stats to Stand Out at the Science Fair

Analyzing your results properly is an easy way to stand out.

Winning the science fair means standing out from the competition.

Don’t get us wrong, creating an awesome baking soda volcano might turn a few heads. But you need to do something a bit more robust than that if you want to take the top prize, whether at your school or for the Google Science Fair.

As well as having a sensible, well-designed experiment, one of the most important things when you’re trying to draw a firm conclusion is analyzing your results accurately. This might not be your favorite part of doing science, but it means running some basic statistics to see whether any differences you observe are statistically significant or could simply be due to chance.

Don’t worry, though: performing statistical tests isn’t really difficult, and it’s one of the best ways to make your project stand out to the judges.

Why Use Statistics?

If you pick any variable – for example, height, spelling test scores or the number of successfully germinated seeds – there will always be some variation by chance alone. There is generally a distribution of results around some central value. This makes it a little bit difficult to really know whether or not an apparent difference between two results is actually important, or just due to this intrinsic variation. That’s what you use statistics for.

Statistical tests like the t-test and Pearson’s correlation coefficient give you the tools to separate out the effects of random chance from genuine effects beyond those expected by chance. For example, if you want to know if boys are taller than girls, you wouldn’t just compare the averages (more on that in a moment), you’d need to look at how the differences within a group compare to the differences between the groups.

Basic Statistical Measures

To use statistical tests for your science project, you’ll need to know a couple of basic things first. The first is pretty simple: the concept of a “mean,” which is what most people are talking about when they say “average.” This is simply the sum of a set of values divided by the number of values. So if you have five test scores: 20, 13, 18, 22 and 16, the mean is:

\begin{aligned} \text{mean} &= μ = \frac{20 + 13 + 18 + 22 + 16}{5} \\ &= 17.8 \end{aligned}
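As a quick check, the same calculation takes only a couple of lines of Python, using the five scores from the example:

```python
# Mean of the five test scores from the example above
scores = [20, 13, 18, 22, 16]
mean = sum(scores) / len(scores)
print(mean)  # 17.8
```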

The other important concept is the standard deviation. This is a measure of the spread of values around the mean, and it’s used as part of many statistical tests. The formula for standard deviation is:

σ = \sqrt{\frac{1}{N} \sum(x_i - μ)^2}

This might look scary, but it’s pretty easy to calculate: start by working out the mean μ, then subtract this value from each individual result (the x_i in the equation) and square the answer. Now sum all of these squared values, divide by the number of results (N), and finally take the square root.
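That recipe translates directly into code. This is a sketch reusing the five example scores; the `std_dev` name is just an illustrative choice:

```python
import math

def std_dev(values):
    """Population standard deviation: the square root of the
    mean squared deviation from the mean, as in the formula above."""
    mu = sum(values) / len(values)                   # the mean, μ
    squared_devs = [(x - mu) ** 2 for x in values]   # (x_i - μ)²
    return math.sqrt(sum(squared_devs) / len(values))

scores = [20, 13, 18, 22, 16]
print(std_dev(scores))  # about 3.12
```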

Testing for a Difference: The t-Test

If you want to test for a difference in a certain variable between two groups – for example, the average height of boys vs. girls, or the test scores of students who’ve taken a recap course vs. those who haven’t – the t-test is one of the most commonly used statistical tests. It assumes that your data are normally distributed (like a bell curve – they probably are, so you don’t need to worry about this too much), that the variances (the squares of the standard deviations) of the two groups are the same, and that the observations are independent of each other.

To perform a t-test, you use the formula:

t = \frac{μ_1 - μ_2}{\sqrt{\frac{s_p^2}{n_1}+\frac{s_p^2}{n_2}}}

Now, all you need to know is what each of the symbols means. The μ values are the means of the two samples, the n values are the numbers of results in each group, and s_p is the pooled standard deviation, which combines the spread of both samples. Its square has a separate formula:

s_p^2 = \frac{(n_1 - 1)s_1^2 + (n_2 - 1)s_2^2}{n_1+n_2 - 2}

Here s_1^2 and s_2^2 are the sample variances: calculated like the standard deviation squared, but dividing by n − 1 instead of N, which gives a better estimate when you’re working with a sample rather than a whole population.

It’s generally easier to calculate this in pieces, starting with the s_p^2 value, and then put the result into the equation for t. The final step is looking up your value of t in a table (see Resources) for the appropriate significance level, which is usually 0.05 (i.e., 95 percent confidence). If you’re testing for a difference in both directions, i.e. higher and lower, either use a table for a “two-sided” test or look up the 0.975 column in a one-sided table. Check the row for your number of degrees of freedom (your total sample size minus 2): if your t value (ignoring any minus sign) is higher than the value in the table, you have found a significant difference.
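Putting the pieces together, here is a sketch of the whole calculation in Python. The height data are made up purely for illustration, and the sample variances divide by n − 1 (the usual convention for samples), so they differ slightly from the population standard deviation described earlier:

```python
import math

def sample_variance(values):
    """Unbiased sample variance: divide by n - 1 rather than n."""
    n = len(values)
    mu = sum(values) / n
    return sum((x - mu) ** 2 for x in values) / (n - 1)

def t_statistic(group1, group2):
    """Two-sample t statistic using a pooled variance (equal-variance t-test)."""
    n1, n2 = len(group1), len(group2)
    m1 = sum(group1) / n1
    m2 = sum(group2) / n2
    # Pooled variance: s_p^2 = ((n1-1)s1^2 + (n2-1)s2^2) / (n1 + n2 - 2)
    sp2 = ((n1 - 1) * sample_variance(group1)
           + (n2 - 1) * sample_variance(group2)) / (n1 + n2 - 2)
    # t = (mean difference) / sqrt(s_p^2/n1 + s_p^2/n2)
    return (m1 - m2) / math.sqrt(sp2 / n1 + sp2 / n2)

# Hypothetical example data: heights (cm) of two small groups
boys = [152, 158, 149, 160, 155]
girls = [148, 151, 146, 153, 150]
t = t_statistic(boys, girls)
df = len(boys) + len(girls) - 2  # degrees of freedom for the table lookup
print(round(t, 3), df)
```

Compare the printed t value against the critical value in your table for 8 degrees of freedom to decide whether the difference counts as significant.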

Of course, this is really just the beginning: What do you do with the result once you’ve found it? The next part of this article goes into depth on interpreting your results.
