The relative average deviation (RAD) of a data set is a percentage that tells you how much, on average, each measurement differs from the arithmetic mean of the data. It's related to standard deviation in that it tells you how wide or narrow a curve plotted from the data points would be, but because it's a percentage, it gives you an immediate idea of the relative amount of that deviation. You can use it to gauge the width of a curve plotted from the data without actually having to draw a graph. You can also use it to compare observations of a parameter to the best known value of that parameter as a way to gauge the accuracy of an experimental method or measurement tool.
TL;DR (Too Long; Didn't Read)
The relative average deviation of a data set is defined as the mean deviation divided by the arithmetic mean, multiplied by 100.
Calculating Relative Average Deviation (RAD)
The elements of relative average deviation include the arithmetic mean (m) of a data set, the absolute value of the individual deviation of each of those measurements from the mean (|di - m|) and the average of those deviations (∆dav). Once you've calculated the mean of the deviations, you divide it by the arithmetic mean and multiply by 100 to get a percentage. In mathematical terms, the relative average deviation is:
RAD = (∆dav ÷ m) × 100 percent
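If you prefer to see the formula as code, here is a minimal Python sketch of the same calculation; the function name relative_average_deviation is just an illustrative choice, not part of any standard library.

```python
def relative_average_deviation(data):
    """Return the relative average deviation of a data set, as a percentage."""
    m = sum(data) / len(data)                             # arithmetic mean
    avg_dev = sum(abs(x - m) for x in data) / len(data)   # average of the absolute deviations
    return (avg_dev / m) * 100                            # express as a percentage of the mean
```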
Suppose you have the following data set: 5.7, 5.4, 5.5, 5.8, 5.5 and 5.2. You get the arithmetic mean by summing the data and dividing by the number of measurements: 33.1 ÷ 6 ≈ 5.52. Sum the individual deviations from the mean:
|5.7 − 5.52| + |5.4 − 5.52| + |5.5 − 5.52| + |5.8 − 5.52| + |5.5 − 5.52| + |5.2 − 5.52| = 0.18 + 0.12 + 0.02 + 0.28 + 0.02 + 0.32 = 0.94
Divide this number by the number of measurements to find the average deviation: 0.94 ÷ 6 ≈ 0.157. Divide that by the arithmetic mean and multiply by 100 to produce the relative average deviation, which in this case is (0.157 ÷ 5.52) × 100 ≈ 2.8 percent.
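Running the sketch above on the same data set should give roughly the same answer; any small difference comes from rounding the mean to 5.52 in the hand calculation.

```python
data = [5.7, 5.4, 5.5, 5.8, 5.5, 5.2]
print(round(relative_average_deviation(data), 1))  # prints 2.8
```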
Low RADs signify narrower curves than high RADs.
An Example of Using RAD to Test Reliability
Although it's useful for determining the deviation of a data set from its own arithmetic mean, the RAD can also gauge the reliability of new tools and experimental methods by comparing them to ones you know to be reliable. For example, suppose you are testing a new instrument for measuring temperature. You take a series of readings with the new instrument while simultaneously taking readings with an instrument you know to be reliable. If you calculate the absolute value of the deviation of each reading made by the test instrument from the corresponding reading made by the reliable one, sum these deviations, divide by the number of readings to get the average deviation, then divide by the mean of the reliable readings and multiply by 100, you'll get the relative average deviation. It's a percentage that, at a glance, tells you whether or not the new instrument is acceptably accurate.
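As a rough sketch of that comparison in Python, the paired readings below are made-up numbers used purely for illustration.

```python
# Hypothetical paired temperature readings (illustrative numbers only)
reliable = [20.1, 21.4, 22.0, 23.3, 24.8]   # trusted instrument
test     = [20.3, 21.2, 22.4, 23.1, 25.1]   # instrument under evaluation

# Absolute deviation of each test reading from the corresponding reliable reading
deviations = [abs(t - r) for t, r in zip(test, reliable)]

# Average deviation, expressed as a percentage of the mean reliable reading
avg_dev = sum(deviations) / len(deviations)
reference = sum(reliable) / len(reliable)
rad = (avg_dev / reference) * 100
print(f"RAD: {rad:.1f} percent")   # prints "RAD: 1.2 percent" for these made-up readings
```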
About the Author
Chris Deziel holds a Bachelor's degree in physics and a Master's degree in Humanities. He has taught science, math and English at the university level, both in his native Canada and in Japan. He began writing online in 2010, offering information on scientific, cultural and practical topics. His writing covers science, math and home improvement and design, as well as religion and the oriental healing arts.