How to Find a Relative Average Deviation

The relative average deviation (RAD) of a data set is a percentage that tells you how much, on average, each measurement differs from the arithmetic mean of the data. It's related to standard deviation in that it indicates how wide or narrow a curve plotted from the data points would be, but because it's a percentage, it gives you an immediate sense of the relative size of that deviation. You can use it to gauge the width of a curve plotted from the data without actually drawing a graph. You can also use it to compare observations of a parameter against the best-known value of that parameter as a way to gauge the accuracy of an experimental method or measurement tool.

The relative average deviation of a data set is defined as the mean deviation divided by the arithmetic mean, multiplied by 100.

The elements of the relative average deviation are the arithmetic mean (m) of the data set, the absolute value of each individual measurement's deviation from that mean (|dᵢ − m|) and the average of those deviations (∆d_av). Once you've calculated the average of the deviations, you divide it by the mean and multiply the result by 100 to get a percentage. In mathematical terms, the relative average deviation is:
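RAD = (∆d_av ÷ m) × 100 percent, where ∆d_av is the average of the absolute deviations |dᵢ − m| and m is the arithmetic mean. As a short sketch of the calculation in plain Python (the sample readings below are hypothetical):

```python
def relative_average_deviation(data):
    """Return the relative average deviation (RAD) of a data set, as a percentage."""
    n = len(data)
    mean = sum(data) / n                         # arithmetic mean, m
    deviations = [abs(x - mean) for x in data]   # |di - m| for each measurement
    mean_deviation = sum(deviations) / n         # average deviation, delta d_av
    return (mean_deviation / mean) * 100         # RAD as a percentage

# Hypothetical example: five repeated readings of the same quantity
readings = [24.9, 25.1, 25.0, 24.8, 25.2]
print(round(relative_average_deviation(readings), 2))
```

For the five readings above, the mean is 25.0, the average deviation is 0.12, and the RAD works out to 0.48 percent, indicating that the measurements cluster tightly around the mean.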