If you want to win your science fair, statistically analyzing your data is a great way to stand out from the competition, but when you get the result – say *P* = 0.04 – what does it actually *mean*? You can do all the math from the first part of this post, but if you don’t truly understand the numbers statistical tests return, you still don’t really know what your experiment found.

For example: Can you reject the “null hypothesis” based on your result? What does that even mean? Is it possible your finding is due to chance? What does a correlation tell you about the relationship between two variables? These are the types of questions you’ll need to answer to get the interpretation of your science fair results right.

## The Null Hypothesis

Whenever you do statistics, you’re pitting the “null hypothesis” against your “experimental hypothesis.” The null hypothesis is always basically the same: There is no relationship between the things you’re testing. In scientific experiments, you assume the null hypothesis is true until you have sufficient evidence to refute it. In other words, you don't assume you'll get a certain result from your experiments — you assume your hypothesis isn't true until the scientific results tell you otherwise.

Confused? Here's an example. Say you're doing a science project to find out if dogs are right- or left-handed. Your null hypothesis might be that dogs have no dominant paw. From there, your results will tell you whether you have enough evidence to reject that null hypothesis, or whether you can't rule out the possibility that dogs have no paw preference at all.

But how can you tell the difference between real results and what might happen by pure chance? Statistics, of course!

Determining what evidence is “sufficient” is the job of statistical tests, and because you’re testing the null hypothesis, it’s best to define exactly what it is for your experiment. You should really do this before you start your work, but even if you’ve focused on your experimental hypothesis (the relationship you suspect might actually exist) it’s easy to put together a null hypothesis after the fact.

## P Values and Statistical Significance

If your experiment gives you sufficient cause to reject the null hypothesis, this is called a “statistically significant” result. But, as with most things in science, there is a very specific definition of what this actually means, and you should be clear about it when you’re looking at your science fair results. The definition comes down to the meaning of the *P* value you get from your statistical test.

The *P* value is often misinterpreted to mean “the probability that the result is due to chance,” and although this is close to the meaning it is *not actually true*. The *P* value instead tells you the probability that, if the null hypothesis were true, you would obtain a result at least as extreme as yours through random statistical noise alone. For example, if you were testing whether a coin was unevenly weighted (with a null hypothesis that it is a fair coin), a result of 45 heads to 55 tails would be fairly likely from flipping a fair coin, simply because of ordinary statistical variation, and this likelihood is exactly what the *P* value quantifies.

The “significance level” is a cut-off value for *P* – anything below it is considered sufficiently unlikely for you to reject the null hypothesis. This is usually chosen as *P* = 0.05 (so there would only be a 5% chance of obtaining your results in a world where the null hypothesis is true), but ultimately this is just a convention. In some circumstances, a significance level of *P* = 0.10 is perfectly fine, and in others, scientists “raise the bar” and set a stricter cut-off of *P* = 0.01. It’s usually best to just stick to *P* = 0.05, but understand that the convention varies.
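To make this concrete, here’s a minimal sketch (plain Python, no statistics library) that computes the exact two-sided *P* value for the coin example: 45 heads in 100 flips, tested against a fair-coin null hypothesis. The function name and numbers are just for illustration.

```python
from math import comb

def two_sided_p(heads: int, flips: int) -> float:
    """Exact two-sided binomial test P value against a fair coin (p = 0.5)."""
    # Probability of exactly k heads if the null hypothesis (fair coin) is true
    def prob(k):
        return comb(flips, k) / 2**flips
    observed = prob(heads)
    # Add up the probabilities of every outcome at least as extreme
    # (i.e., at most as likely) as the one we actually observed.
    return sum(prob(k) for k in range(flips + 1) if prob(k) <= observed)

p = two_sided_p(45, 100)
print(f"P = {p:.3f}")  # well above 0.05, so we cannot reject the null
```

The result comes out around 0.37 – far above the 0.05 cut-off – so 45 heads in 100 flips gives no reason at all to reject the fair-coin hypothesis, just as described above.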

## Interpreting Correlations

If you’re testing for a difference between two groups, understanding the meaning of statistical significance is enough, but if your test involves correlations between two variables (for example, the amount of light a plant receives and how tall it grows, or the number of previous attempts and your score at a game), things are a little bit different. Tests for correlations return values between −1 and +1, and understanding these and what either type of correlation implies for causality is essential to interpreting your results.

Firstly, the correlation score is easy to understand if you consider the extreme cases. Any positive correlation value means that both variables increase *together*, and a value of +1 is a *perfect* positive correlation, where the graph of one variable against the other is a straight line. In the same way, any negative correlation value means that when one variable increases, the other decreases, and a value of −1 is a perfect negative correlation. Finally, a value of 0 means there is no correlation at all. Of course, most results will fall somewhere in between (like 0.65), with values further from zero – whether positive or negative – meaning a stronger correlation.
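As an illustration, here’s a short Python sketch that computes the Pearson correlation coefficient (the most common correlation measure) from scratch; the light and height numbers are invented for the example.

```python
from math import sqrt

def pearson_r(xs, ys):
    """Pearson correlation coefficient: always between -1 and +1."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical data: hours of light per day vs. plant height in cm
light = [4, 6, 8, 10, 12, 14]
height = [7.0, 9.5, 11.0, 14.5, 15.0, 18.5]

r = pearson_r(light, height)
print(f"r = {r:.2f}")  # close to +1: taller plants got more light
```

Plot the points and you’ll see why the value lands near +1: they lie almost exactly on a rising straight line. Flip the sign of either variable and the coefficient flips sign too.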

However, a key caveat is that *correlation does not imply causation*. In other words, just because two things are correlated doesn’t mean that one causes the other, and you shouldn’t be tempted to draw such a conclusion in your writeup on the basis of a correlation alone. A good example is a correlation between yellow teeth and lung cancer: It isn’t that yellow teeth *cause* lung cancer; it’s that smoking causes both yellow teeth and lung cancer. In the same way, your results could be due to another factor you haven’t considered, so it’s always risky to make causal claims without very strong evidence beyond a simple correlation.

With these points in mind, whatever your science fair project, you should be able to do the statistics you need to *and* explain exactly what they show. You might not win, but what you’ve learned gives you the tools you need to really get the judges’ attention.