## 1. Concepts

Once mistakes have been removed and systematic errors compensated, the only errors left are random. They tend to be small and are as likely to be positive as negative. If enough measurements are made (under the same conditions, with the same equipment), these errors tend to cancel out.

Random errors behave according to the laws of probability. An example is flipping a coin. Disregarding any weight differential between the faces, there's a 50-50 chance heads will come up. It's possible that if we flip it twice it will come up heads both times. Flipping it three times, we could get zero, one, two, or three heads. The more we flip it, the closer we'll get to heads coming up 50% of the time. Theoretically, if you flip the coin an infinite number of times, keeping everything else the same, half the time it will come up heads. Don't believe me? Give it a try, I'll wait.
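That convergence is easy to demonstrate without infinite patience. The sketch below (a minimal simulation, with an arbitrary fixed seed so the run is repeatable) counts heads in batches of simulated fair coin flips; the fraction of heads drifts toward 0.5 as the number of flips grows.

```python
import random

def heads_fraction(flips, seed=1):
    """Simulate `flips` fair coin tosses and return the fraction that come up heads."""
    rng = random.Random(seed)  # fixed seed so the experiment is repeatable
    heads = sum(rng.random() < 0.5 for _ in range(flips))
    return heads / flips

# Small samples wander; large samples settle near 0.5.
for n in (10, 100, 10_000, 1_000_000):
    print(f"{n:>9} flips: {heads_fraction(n):.4f} heads")
```

With only 10 flips the fraction can land anywhere; by a million flips it sits within a fraction of a percent of 0.5.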

Anyway, the more times something is measured, the more reliable the result. Unfortunately, we don't have the luxury of time to make an infinite number of measurements (nor a client willing to pay for them). How many times to measure depends on the overall accuracy needed and the equipment used.
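A small simulation shows the trade-off between measurement count and accuracy. The sketch below uses invented numbers: a hypothetical true distance of 100.000 m and an assumed standard deviation of 0.010 m for a single measurement. Averaging n measurements shrinks the spread of the result by roughly the square root of n, which is why each extra repetition buys less improvement than the last.

```python
import random
import statistics

TRUE_VALUE = 100.000   # hypothetical true distance, metres (assumed for illustration)
SIGMA = 0.010          # assumed standard deviation of one measurement, metres

def mean_of_n(n, rng):
    """Average n simulated measurements, each carrying a random error."""
    return statistics.fmean(rng.gauss(TRUE_VALUE, SIGMA) for _ in range(n))

rng = random.Random(42)  # fixed seed so the run is repeatable
spreads = {}
for n in (1, 4, 16, 64):
    means = [mean_of_n(n, rng) for _ in range(2000)]
    spreads[n] = statistics.stdev(means)
    print(f"n={n:>2}: spread of the mean ~ {spreads[n]:.4f} m  "
          f"(theory: {SIGMA / n ** 0.5:.4f} m)")
```

Going from 1 to 4 measurements halves the spread; going from 16 to 64 halves it again, at four times the field effort.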

Statistics is used to analyze and predict random error behavior. It allows us to determine the precision of a measurement set and to predict its accuracy. It can also help plan how measurements should be made to meet an expected accuracy level.

Random error behavior and analysis can get complicated very quickly, particularly when mixed-quality measurements are combined in a network. Our objective is to understand basic terminology and analysis tools. If you are interested in more advanced aspects of random error behavior, there are several very good textbooks on the subject.