Random error

Random error is an error that occurs because the selected sample is an imperfect representation of the overall population [1]. As stated in Statistics for Business and Financial Economics, random error is the difference between the value derived by taking a random sample and the value that would have been obtained by taking a census [2].

As stated in Marketing, random error represents how accurately the chosen sample's true average (mean) value reflects the population's true average (mean) value. For example, we might take a random sample of beer drinkers in Chicago and find that 16 percent regularly drink Coors beer. The next day we might repeat the same sampling procedure and discover that 14 percent regularly drink Coors beer. The difference is due to random error [3].
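The day-to-day fluctuation described above can be reproduced with a short simulation. This is a hypothetical sketch: the population rate, sample size, and brand are assumed for illustration and are not taken from the cited study.

```python
import random

random.seed(1)

# Assumed for illustration: 15% of the population regularly drinks the brand.
population_rate = 0.15
sample_size = 500

def sample_proportion():
    # Draw one random sample and report the observed share of drinkers.
    hits = sum(random.random() < population_rate for _ in range(sample_size))
    return hits / sample_size

day1 = sample_proportion()
day2 = sample_proportion()
print(day1, day2)  # the two estimates differ purely by random error
```

Both samples are drawn from the same population by the same procedure, yet the two estimates differ; that gap is the random (sampling) error.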

Random error always exists and is impossible to eliminate entirely. It can, however, be minimized, and the amount of random error can be predicted using statistics; this predicted amount is often called precision [4].

Two types of measurement error

As stated in Key Concepts in Measurement, there are two types of measurement error [5]:

  • Random error
  • Systematic error

Random error is a class of errors that is not correlated with the construct, other measures, or anything else under study. Random errors distribute symmetrically around the true value, with some observed scores being greater than the true score and others being less than the true score. Systematic error exists when measures concentrate around alternative values instead of the true value.
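The distinction above can be illustrated with a small simulation. The true value, noise level, and bias below are assumed purely for demonstration.

```python
import random
import statistics

random.seed(0)
true_value = 100.0

# Random error: symmetric noise scattered around the true value.
random_only = [true_value + random.gauss(0, 2) for _ in range(10_000)]

# Systematic error: the same noise plus a constant bias of +5,
# so the measures concentrate around an alternative value.
with_bias = [true_value + 5 + random.gauss(0, 2) for _ in range(10_000)]

print(statistics.mean(random_only))  # close to 100: random errors cancel out
print(statistics.mean(with_bias))    # concentrates near 105, not 100
```

Averaging many readings drives the random component toward zero, but no amount of averaging removes the systematic offset.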

Sources of error

Common sources of random errors are problems estimating a quantity that lies between the graduations (the lines) on an instrument and the inability to read an instrument because the reading fluctuates during the measurement [6].

Sources of error [7]:

  • Observer - unpredictable
  • Method of measurement - unreliable experimental procedure
  • Instrument:
  1. may be faulty or unreliable
  2. out of adjustment, not zeroed

According to Bakke S., the random error in experimental results is due to lack of observer precision, perhaps in misreading an analogue scale due to parallax. This will result in a spread of results, even in the most carefully designed of experiments. Due to the random nature of these errors, there is an equal chance that they will be above or below the ‘true’ value. To mitigate such errors, it is correct technique to take many readings and find the mean, even in the simplest of experiments. Because it is impossible to know the ‘true’ value, the best estimate is the mean of the repeat readings.

The random error (also called the mean deviation) is then a measure of the spread of the repeat readings:

Random error, ∆ran = R/N

  • R = range (maximum - minimum)
  • N = number of repeat readings

Random error is reduced by increasing the number of readings, N: as N increases, ∆ran decreases [8].
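The estimate and its random error can be computed directly from the formula above. The five readings are hypothetical values chosen for illustration.

```python
def random_error(readings):
    """Best estimate (mean) and random error Δran = R/N of repeat readings,
    where R is the range (maximum - minimum) and N the number of readings."""
    n = len(readings)
    r = max(readings) - min(readings)
    return sum(readings) / n, r / n

# Five repeat readings of the same quantity (hypothetical values):
readings = [9.8, 10.1, 9.9, 10.2, 10.0]
mean, delta = random_error(readings)
print(f"{mean:.2f} ± {delta:.2f}")  # 10.00 ± 0.08
```

With more repeat readings, N grows while the range R stabilizes, so Δran = R/N shrinks, matching the statement that increasing N reduces the random error.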

Reducing random variation

As stated in Sources of Error, the major strategies for reducing the role of random error are [9]:

  • Increase sample size – a larger sample, other things being equal, will yield more precise estimates of population parameters;
  • Improve sampling procedures – a more refined sampling strategy, e.g. stratified random sampling combined with appropriate analytic techniques, can often reduce sampling variability compared to simple random sampling;
  • Reduce measurement variability – use strict measurement protocols, better instrumentation, or averages of multiple measurements;
  • Use more statistically efficient analytic methods – statistical procedures vary in their efficiency, i.e. in the degree of precision obtainable from a given sample size.
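The first strategy, increasing sample size, can be demonstrated with a simulation. The population mean and standard deviation below are assumed for illustration; the spread of repeated sample means should shrink roughly as 1/√N.

```python
import random
import statistics

random.seed(42)

def estimate_spread(sample_size, trials=2000):
    """Standard deviation of the sample mean across many repeated samples
    from an assumed population with mean 50 and standard deviation 10."""
    means = [
        statistics.mean(random.gauss(50, 10) for _ in range(sample_size))
        for _ in range(trials)
    ]
    return statistics.stdev(means)

small = estimate_spread(25)   # expected spread ≈ 10/√25  = 2.0
large = estimate_spread(400)  # expected spread ≈ 10/√400 = 0.5
print(small, large)
```

Quadrupling the precision required a sixteenfold increase in sample size, which is why the more efficient sampling and analysis strategies in the list above also matter.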

Test-retest reliability

According to Maruyama G. and Ryan C., the source of random error that is the focus of concerns about reliability is expected to vary, rather than remain constant, from one occasion to another. Specific mental mistakes, slips of the pen, and the like would not recur if the test were repeated after some time delay. Therefore, the correlation between scores on the same measure administered on two separate occasions - a test-retest correlation - provides an estimate of the measure's reliability. The two occasions should be far enough apart so that respondents cannot remember specific responses from the test to the retest, but close enough together so that change in the true score is expected to be minimal. A completely unreliable measure, in which all the variation in scores stems from random errors, would show a complete lack of correlation between test and retest. A perfectly reliable measure, in which no random error whatever affected the score, would produce scores that correlate perfectly over a short period of time [10].



  1. Lamb C., Hair J., McDaniel C. 2012 p. 330
  2. Lee C., Lee J., Lee A. 2000 p. 16
  3. Lamb C., Hair J., McDaniel C. 2012 p. 330
  4. Bolus N., Brady A. 2011 p. 44
  5. Perron B., Gillespie D. 2015
  6. Carlson G. 2002 p. 3
  7. Bakke S. 2019 p. 1
  8. Bakke S. 2019 p. 3
  9. Schoenbach V. 2001 p. 290
  10. Maruyama G., Ryan C. 2014 p. 195

Author: Paulina Wolnik