The Difference Between Systematic & Random Errors


No matter how careful we are when conducting experiments, there will almost certainly be uncertainty in our results. No experimental apparatus is perfect, and avoiding error altogether is practically impossible because our world is full of countless idiosyncrasies and unpredictable factors. To counteract this issue, scientists do their best to categorize errors and quantify any uncertainty in measurements they make. Systematic and random errors are a key part of learning to design better experiments, and finding out how to quantify and minimize these two types of error can lead to more concrete and reliable results.

TL;DR (Too Long; Didn't Read)

Random errors are unavoidable and result from the inevitable variation when taking measurements or attempting to record quantities in the world. These errors fluctuate from measurement to measurement, but they generally cluster around the true value. Understanding this type of measurement error is crucial to accurately reporting scientific findings.

Systematic errors usually result from uncalibrated equipment, environmental influence, or models that rely on specific parameters that may cause systematic bias. Every measurement you take will be wrong by the same amount – an offset error.

What Is Random Error?

Random error describes measurement errors that fluctuate due to the unpredictability or uncertainty inherent in your measuring process. Environmental factors and simple variation in experimental processes can result in chance differences between results; this is the source of random error.

A scientist measuring an insect, for example, might try to position the insect at the zero point of a ruler or measuring instrument to read the value at the other end. The ruler itself will probably only measure down to the nearest millimeter, and reading this with precision can be difficult. You may underestimate the true size of the insect or overestimate it, based on how well you read the scale and your judgment as to where the head of the insect stops. The insect might also move ever so slightly from the zero position without you realizing. Repeating the measurement multiple times yields many different results because of this, but they would likely cluster around the true value.

This observational error is unavoidable; even with the most precise machines, instrumental error will always result in some unknown impact on measured values.

Similarly, taking measurements of a quantity that changes from moment to moment leads to random error. Wind speed, for example, may pick up and fall off at different points in time. If you take a measurement one minute, it probably won’t be exactly the same a minute later. Again, repeated measurements will lead to results that fluctuate but cluster around the true value. With time-dependent variables, however, the true value itself also fluctuates unpredictably, so the best we can do is represent the general trend and accept some inaccuracy.
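The clustering behavior described above is easy to see in a quick simulation. This is a minimal sketch, not a real measurement: the "true" insect length of 12 mm and the Gaussian noise spread of 0.5 mm are hypothetical values chosen for illustration.

```python
import random

random.seed(42)  # fixed seed so the sketch is reproducible

TRUE_LENGTH = 12.0  # hypothetical true insect length in mm

# Each measurement picks up a small random offset (misreading the scale,
# the insect shifting slightly), modeled here as Gaussian noise.
measurements = [TRUE_LENGTH + random.gauss(0, 0.5) for _ in range(10)]

average = sum(measurements) / len(measurements)
print(f"individual readings fluctuate: {min(measurements):.2f} to {max(measurements):.2f} mm")
print(f"but their average clusters near the true value: {average:.2f} mm")
```

Individual readings scatter above and below 12 mm, yet the average of even ten readings lands close to the true value, which is exactly the behavior that makes random error manageable.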

What Is Systematic Error?

A systematic error is an additive source of error that results from a persistent issue, and it leads to a consistent error in your measurements. For example, if your measuring tape has been stretched out, your results will always be lower than the true value. Similarly, if you’re using scales that haven’t been set to zero beforehand, there will be a systematic error resulting from the mistake in the calibration (e.g., if a true weight of 0 reads as 5 grams, 10 grams will read as 15 and 15 grams will read as 20).
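The zero-offset example above can be sketched in a few lines of code. The 5-gram offset is taken directly from the example in the text; the function name is just illustrative.

```python
# A scale that was never zeroed reads every weight 5 grams too high.
OFFSET = 5.0  # grams of systematic error from the miscalibrated zero

def miscalibrated_reading(true_weight):
    """Every reading is shifted by the same constant offset."""
    return true_weight + OFFSET

for true_weight in (0, 10, 15):
    print(f"true: {true_weight:>2} g  ->  reads: {miscalibrated_reading(true_weight):.0f} g")

# Unlike random error, averaging repeated readings cannot remove this bias;
# the offset has to be found and subtracted (i.e., the scale re-zeroed).
```

Note that no amount of repetition helps here: every reading carries the same +5 g, so the average of a thousand readings is just as wrong as a single one.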

Some types of systematic error include instrumental error, environmental error, and predicted/theoretical error. These sources of systematic error all contribute some set quantity of uncertainty to every measurement, and the magnitude of error will depend on the source of the systematic error.

Random vs Systematic Error

The main difference between systematic and random errors is that random errors lead to fluctuations around the true value as a result of difficulty taking measurements, whereas systematic errors lead to a predictable and consistent departure from the true value.

Random errors are essentially unavoidable, while systematic errors are not. Scientists can’t take perfect measurements, no matter how skilled they are. If the quantity you’re measuring varies from moment to moment, you can’t make it stop changing while you take the measurement, and no matter how detailed your scale, reading it accurately still poses a challenge. The good news is that repeating your measurement multiple times and taking the average effectively minimizes this issue. The random error in the average shrinks in proportion to the square root of the sample size (the number of data points you have), so we can reduce such errors by taking as many data samples as is reasonable for a specific situation.
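The square-root relationship above can be checked numerically. This is a minimal sketch under assumed values: a hypothetical true value of 100 measured with Gaussian noise of spread 2, averaged over samples of increasing size.

```python
import math
import random
import statistics

random.seed(0)  # fixed seed for reproducibility

TRUE_VALUE = 100.0
NOISE_SD = 2.0  # spread of the random error on a single measurement

def mean_of_sample(n):
    """Average n noisy measurements of the same true value."""
    return statistics.fmean(TRUE_VALUE + random.gauss(0, NOISE_SD) for _ in range(n))

for n in (4, 100, 2500):
    predicted_se = NOISE_SD / math.sqrt(n)  # standard error shrinks as 1/sqrt(n)
    print(f"N = {n:>4}: sample mean = {mean_of_sample(n):8.3f}, "
          f"expected error of the mean = {predicted_se:.3f}")
```

Going from 4 measurements to 100 cuts the expected error of the mean by a factor of 5, and going to 2,500 cuts it by a factor of 25: each extra decimal place of precision costs a hundred times more data.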

Systematic errors may be difficult to spot. This is because everything you measure will be wrong by the same (or a similar) amount and you may not realize there is an issue at all. However, unlike random errors they can often be avoided altogether. If we set up experiments carefully and analyze results rigorously, systematic errors become much less likely.

Reporting Random Error

Since random error is an unavoidable aspect of any scientific results, it is important to be able to accurately report the random error for any given experiment. We do this with statistical values like mean, standard deviation, and standard error. There are multiple ways to represent the distribution of a data set, but these three metrics are widely applicable to almost any data set.

Mean

The mean of a data set is simply the sum of all recorded values divided by the number of measurements:

\mu = \frac{\sum_{i=1}^Na_i}{N}

where the a_i are the recorded values and N is the size of the sample.

Standard Deviation

The standard deviation describes the general distribution of the data (i.e., how spread out the results are):

\sigma = \sqrt{\frac{\sum_{i=1}^{N}{(a_i-\mu)^2}}{N}}

Standard Error

Standard error is often how the error for the mean value of a data set is reported as a final result. It estimates how much the sample mean would be expected to vary if the whole experiment were repeated. The formula is based on sample size and standard deviation:

\text{Standard Error} = \frac{\sigma}{\sqrt{N}}
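The three statistics above can be computed directly from their formulas. The data set here is hypothetical (six repeated length measurements in mm), and the code uses the population form of the standard deviation (dividing by N) to match the formula given above.

```python
import math

# Hypothetical repeated measurements, e.g. insect lengths in mm
data = [12.1, 11.8, 12.4, 12.0, 11.9, 12.2]
N = len(data)

mean = sum(data) / N                               # mu: sum of values over N
variance = sum((a - mean) ** 2 for a in data) / N  # population form, matching the formula above
std_dev = math.sqrt(variance)                      # sigma: spread of the results
std_error = std_dev / math.sqrt(N)                 # sigma / sqrt(N)

print(f"mean = {mean:.3f} mm")
print(f"standard deviation = {std_dev:.3f} mm")
print(f"standard error = {std_error:.3f} mm")
```

A result from this data set would typically be reported as mean ± standard error; note that the standard error is smaller than the standard deviation by a factor of the square root of N, reflecting that the mean is pinned down more tightly than any single measurement.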
