# How to Calculate Standard Errors

The standard error measures how precisely a sample mean estimates the population mean. It is the standard deviation divided by the square root of the sample size. The sample may include data from scientific measurements, test scores, temperatures or a series of random numbers. The standard deviation indicates how far the sample values deviate from the sample mean. The standard error is inversely related to the square root of the sample size: the larger the sample, the smaller the standard error.

Compute the mean of your data sample. The mean is the average of the sample values. For example, if weather observations in a four-day period during the year are 52, 60, 55 and 65 degrees Fahrenheit, then the mean is 58 degrees Fahrenheit: (52 + 60 + 55 + 65)/4.

Calculate the sum of the squared deviations (or differences) of each sample value from the mean. Note that multiplying negative numbers by themselves (or squaring the numbers) yields positive numbers. In the example, the squared deviations are (58 - 52)^2, (58 - 60)^2, (58 - 55)^2 and (58 - 65)^2, or 36, 4, 9 and 49, respectively. Therefore, the sum of the squared deviations is 98 (36 + 4 + 9 + 49).

Find the standard deviation. Divide the sum of the squared deviations by the sample size minus one; then, take the square root of the result. In the example, the sample size is four. Therefore, the standard deviation is the square root of [98 / (4 - 1)], which is about 5.72.

Compute the standard error, which is the standard deviation divided by the square root of the sample size. To conclude the example, the standard error is 5.72 divided by the square root of 4, or 5.72 divided by 2, or 2.86.
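The four steps above can be sketched in Python using only the standard library; the variable names are chosen here for illustration, and the data are the four temperature observations from the example.

```python
import math

# Four temperature observations in degrees Fahrenheit (from the example)
data = [52, 60, 55, 65]
n = len(data)

# Step 1: compute the mean
mean = sum(data) / n                      # 58.0

# Step 2: sum of squared deviations from the mean
ss = sum((x - mean) ** 2 for x in data)   # 36 + 4 + 9 + 49 = 98.0

# Step 3: sample standard deviation (divide by n - 1, then take the square root)
std_dev = math.sqrt(ss / (n - 1))         # about 5.72

# Step 4: standard error = standard deviation / sqrt(sample size)
std_err = std_dev / math.sqrt(n)          # about 2.86

print(round(std_dev, 2), round(std_err, 2))
```

Note that Step 3 divides by n - 1 rather than n (the sample standard deviation), matching the hand calculation above; Python's built-in `statistics.stdev` makes the same choice.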
