The t-test was developed by William Sealy Gosset in 1908, publishing under the pseudonym "Student," as a way to tell whether the difference between two sets of data is statistically significant. Generally one set of data is the "control," the data to which no new treatment has been applied. The other set is the "treatment," or "experimental," data.
Find the mean of the first set of data. To do this, add all the values together and divide by the number of values you have.
Subtract the mean from each value. Some of the results will be negative. Square each of these differences, then add them all together. This total is known as the sum of squares.
Divide the sum of squares by the number of values minus one. This is called the variance of the first set of values.
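The three steps so far (mean, sum of squares, sample variance) can be sketched in Python; the sample values below are made up purely for illustration:

```python
def mean(values):
    # Add all the values together and divide by how many there are.
    return sum(values) / len(values)

def sum_of_squares(values):
    # Subtract the mean from each value, square the result, and add them up.
    m = mean(values)
    return sum((v - m) ** 2 for v in values)

def variance(values):
    # Divide the sum of squares by the number of values minus one.
    return sum_of_squares(values) / (len(values) - 1)

control = [4.0, 5.0, 6.0, 7.0]  # hypothetical control-group data
print(mean(control))            # 5.5
print(sum_of_squares(control))  # 5.0
print(variance(control))        # 1.666...
```

Dividing by n − 1 rather than n gives the sample variance, which is the quantity these steps describe.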
Repeat the above steps with the second set of data.
Subtract the control group mean from the experimental group mean. Save this calculation.
Divide the variance of each set of data by the number of values. Add the two resulting numbers together.
Calculate the square root of the number you found in the above step.
Divide the difference between the two means by the square root you found in the previous step. This is your t value.
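Putting all the steps together, the full t-value calculation can be sketched as follows; again, the two data sets are invented for illustration:

```python
import math

def t_value(control, experimental):
    def mean(xs):
        return sum(xs) / len(xs)

    def variance(xs):
        m = mean(xs)
        return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

    # Subtract the control group mean from the experimental group mean.
    diff = mean(experimental) - mean(control)
    # Divide each variance by its number of values, add, and take the square root.
    se = math.sqrt(variance(control) / len(control)
                   + variance(experimental) / len(experimental))
    # Divide the difference of the means by that square root.
    return diff / se

control = [4.0, 5.0, 6.0, 7.0]    # hypothetical control data
treatment = [6.0, 7.0, 8.0, 9.0]  # hypothetical treatment data
print(t_value(control, treatment))  # ≈ 2.19
```

This follows the steps above exactly, which corresponds to the unequal-variances (Welch) form of the two-sample t statistic.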
If you are given the standard deviation rather than the raw data, the variance is simply the standard deviation squared.