How to Calculate a T-Score

By Eric Bank; Updated April 24, 2017
T-scores help you understand the relationships between two groups of data.

You use a t-score—a statistic that follows a Student's t-distribution—when you want to estimate a population parameter, such as the mean, or compare two groups of data. The t-score gives good results from small samples as long as the sample data form a bell-shaped curve when graphed. You can also use the t-score on skewed data if the sample contains at least 16 observations. Your confidence in the inferred relationships between the two groups increases as the sample size grows.

Basic Concepts

Typically, you use t-scores to test a hypothesis stating that a sample statistic, such as the mean, differs from that of a population or another sample. The t-score depends on the sample size, the sample mean and the sample standard deviation, which measures how much the data vary from the mean. A low standard deviation indicates that the data cluster closely around the mean, whereas a high standard deviation indicates that the data are spread out over a larger range of values. Once you calculate a t-score, you can use it to decide whether to reject your null hypothesis or fail to reject it.
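To make the idea of spread concrete, here is a minimal Python sketch showing two samples with the same mean but very different standard deviations; the data sets are invented for illustration:

```python
from statistics import mean, stdev

# Two invented samples with the same mean but different spread.
tight = [298, 299, 300, 301, 302]   # values cluster near the mean
loose = [250, 275, 300, 325, 350]   # values spread over a wider range

print(mean(tight), round(stdev(tight), 2))  # 300 1.58
print(mean(loose), round(stdev(loose), 2))  # 300 39.53
```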

Three Types of T-Tests

Three different types of t-tests compare statistics from two groups of data. A one-sample t-test compares a sample to the overall population from which it was drawn. For example, you can use a sample of the weights of 50 Boston Terriers to test a claim about the mean weight of all dogs of this breed. An independent t-test compares two unrelated samples, and a dependent t-test compares matched or related samples. The actual value of the t-score is not important in itself. Rather, you use it to determine whether the difference between the two groups is statistically significant. If the difference is significant, you reject the premise, known as the null hypothesis, that there is no difference between the two groups.
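If you work in Python, the scipy library provides ready-made versions of all three tests. The sketch below runs each one on randomly generated weights that are invented purely for illustration:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Hypothetical Boston Terrier weights in pounds, invented for this example.
terriers = rng.normal(loc=17, scale=2, size=50)

# One-sample t-test: does the sample mean differ from a claimed mean of 16 lb?
print(stats.ttest_1samp(terriers, popmean=16))

# Independent t-test: compare two unrelated samples.
other_breed = rng.normal(loc=18, scale=2, size=50)
print(stats.ttest_ind(terriers, other_breed))

# Dependent (paired) t-test: compare matched before-and-after measurements.
after_diet = terriers + rng.normal(loc=-0.5, scale=1, size=50)
print(stats.ttest_rel(terriers, after_diet))
```

Each call returns the t-statistic together with its p-value, so you rarely need to compute the score by hand.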

The T-Score Formula

The t-score is a fraction. The numerator is the sample's mean minus the mean of the group you are comparing the sample to, and the denominator is the sample's standard deviation divided by the square root of the sample size. In symbols, t = (x̄ – μ) / (s / √n), where x̄ is the sample mean, μ is the comparison mean, s is the sample standard deviation and n is the sample size. For example, suppose you have a 15-observation sample of how long an electric water softener's chemicals last before you need to replenish them. The average interval for the population is expected to be 300 days, but you find that your sample has a mean interval of 290 days and a standard deviation of 50 days. The t-score is then (290–300)/(50/√15), or –0.775.
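The arithmetic is easy to check. This short Python sketch plugs the water-softener numbers into the formula:

```python
from math import sqrt

# Values from the water-softener example above.
sample_mean = 290      # mean replenishment interval in the sample, days
population_mean = 300  # expected population mean, days
std_dev = 50           # sample standard deviation, days
n = 15                 # number of observations

t_score = (sample_mean - population_mean) / (std_dev / sqrt(n))
print(round(t_score, 3))  # -0.775
```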

Hypothesis Testing

You can use a t-score to test the hypothesis that the mean of a sample is statistically different from that of another group—another sample or the overall population—at a given level of confidence. For example, you might perform a one-sample study to test, at the 95 percent confidence level, the null hypothesis that the population mean replenishment time is 300 days. To complete this study, you use the t-score to find a p-value. If the p-value is less than the significance level, which equals 1 minus the confidence level—in this case (1–0.95) or 0.05—then the result is statistically significant and you reject the null hypothesis. Otherwise, you fail to reject it.
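Continuing the example, this sketch uses scipy to turn the t-score into a two-sided p-value with n - 1 = 14 degrees of freedom and compares it to the 0.05 significance level:

```python
from scipy import stats

t_score = -0.775  # from the water-softener example
df = 14           # degrees of freedom: sample size minus 1

# Two-sided p-value: probability of a t-statistic at least this extreme.
p_value = 2 * stats.t.sf(abs(t_score), df)
print(round(p_value, 2))  # about 0.45

alpha = 0.05  # significance level for 95 percent confidence
print("reject the null hypothesis" if p_value < alpha else
      "fail to reject the null hypothesis")
```

Because 0.45 is far above 0.05, this sample gives no evidence that the true mean interval differs from 300 days.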

About the Author

Based in Chicago, Eric Bank has been writing business-related articles since 1985, and science articles since 2010. His articles have appeared in "PC Magazine" and on numerous websites. He holds a B.S. in biology and an M.B.A. from New York University. He also holds an M.S. in finance from DePaul University.