1. Error sources

Errors originate from one of three places:

  • Natural - the conditions under which measurements are made.
  • Instrumental - the equipment with which we make the measurements.
  • Personal - the knowledge and skill of the people making the measurements.

We can control some of the errors within these sources; others we can’t. For example, while we can’t change the outside temperature, we can compensate for it when measuring distances electronically.


2. Error types

While the source tells us where an error comes from, it doesn’t tell us how it behaves; the error type does. There are technically three error types. “Technically,” because some people don’t consider mistakes a type of error, but we’ll include them here since they do affect measurements when present.

a. Mistake

A mistake is usually the result of carelessness or misunderstanding. Mistakes are generally isolated and stick out in a measurement set. These are common for people learning to use equipment for the first time - something gets forgotten, a wrong button is pushed, digits are transposed, etc.

b. Systematic

A systematic error is one which conforms to some mathematical or physical principle. Because of this behavior a systematic error can be compensated, provided we have enough information. For example, a steel tape changes length based on temperature. We can correct for this as long as we know the steel’s physical characteristics, a calibration temperature and length, and the temperature at measurement.
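
The steel tape correction above can be sketched in a few lines of Python. The coefficient below is the commonly tabulated thermal expansion coefficient for steel tapes (about 0.00000645 per degree Fahrenheit); the 68 °F calibration temperature and the specific numbers are illustrative assumptions.

```python
K_STEEL = 0.00000645  # thermal expansion coefficient of steel (per deg F)

def temperature_correction(measured_length, field_temp, calib_temp=68.0):
    """Correction (same units as measured_length) to add to a taped distance.

    calib_temp is the temperature at which the tape is its nominal
    length (68 deg F is a common calibration standard for US tapes).
    """
    return K_STEEL * (field_temp - calib_temp) * measured_length

# A 100 ft tape used at 98 deg F has expanded, so the recorded
# distance is slightly short of the true distance.
corr = temperature_correction(100.0, 98.0)   # ~ +0.019 ft per tape length
true_length = 100.0 + corr
```

Because the error follows a known physical law, the same function corrects any tape length at any field temperature.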

In some cases we can compensate a systematic error procedurally. If the centered plate bubble runs two divisions when the instrument is rotated 180°, we bring it back one division. We'll discuss the reason more thoroughly in later sections on instruments.

c. Random

Random errors are those left when mistakes are eliminated and systematic errors are compensated. They are the only errors which prevent our knowing the true value of what we’re measuring.

Random errors tend to be small and as likely to be positive as negative. Repeating measurements multiple times gives random errors a chance to cancel. Statistics are used to model and analyze random error effects.

An example of how random errors tend to cancel is flipping a coin. Disregarding the apparent weight differential between the faces, there’s a 50-50 chance heads will come up. It’s possible that if we flip it twice it will come up heads both times. Flipping it three times we could get none, one, two, or three heads. The more we flip it, the more likely we’ll approach 50-50 on heads coming up. Theoretically, if you flip the coin an infinite number of times, keeping everything else the same, half the time it will come up heads. Don’t believe me? Give it a try, I’ll wait.
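
A quick simulation makes the coin-flip argument concrete: as the number of flips grows, the fraction of heads tends toward 0.5. This is only an illustrative sketch; the seed and flip counts are arbitrary.

```python
import random

def heads_fraction(n_flips, seed=1):
    """Simulate n_flips fair coin flips; return the fraction of heads."""
    rng = random.Random(seed)  # seeded so the run is repeatable
    heads = sum(rng.random() < 0.5 for _ in range(n_flips))
    return heads / n_flips

# With few flips the fraction wanders; with many it settles near 0.5.
for n in (10, 1_000, 100_000):
    print(n, round(heads_fraction(n), 3))
```

Any single short run will wander away from 50-50, which is exactly the point: random effects only average out over many repetitions.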


3. Putting It Together

Anyway, let’s revisit our targets, Figure C-1, and see if we can make some observations on error sources and types.

 img4
Figure C-1
Accuracy and Precision

 

  • Target (a) shooter was both precise and accurate. This means the shooter has a pretty good handle on all the errors.
  • Target (b) indicates the presence of a systematic error: shots are consistently low and left. To compensate, the shooter can either adjust the gun sights (mechanical) or aim high and right (procedural).
  • Target (c) most likely results from a minimally experienced shooter (error source: personal). On average the shooter’s results are accurate, but they’re achieved inefficiently. As the shooter gains experience, efficiency increases and the pattern should begin to look more like (a).
  • Target (d) can result from all sorts of error sources and types: inexperienced shooter (personal), windy conditions (natural), gun sights off (instrumental), throw in a possible mistake or two, and so on. 

4. Minimizing Errors

Now that we know where errors come from and how they behave, how do we deal with them in our measurements?

a. Mistakes

The only way to know if a mistake exists is to repeat the measurement. Having just two measurements which differ greatly only indicates that there is a mistake, it doesn’t tell us which measurement has the mistake. To find out, we need at least a third measurement. Hopefully the third one will be close to one of the first two. If it is, we’ve found our culprit and would toss the wrong measurement. If it’s not, then we have a dilemma on our hands - evidently we’re doing something wrong (think Target (c) or (d)).
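
The third-measurement check described above can be sketched as a small routine. The function name and tolerance values are hypothetical, chosen only for illustration.

```python
def find_mistake(m1, m2, m3, tolerance):
    """Return the measurement presumed mistaken, or None if no clear culprit."""
    readings = [m1, m2, m3]
    for i, candidate in enumerate(readings):
        others = [r for j, r in enumerate(readings) if j != i]
        # two readings agree within tolerance but the candidate is far from both
        if (abs(others[0] - others[1]) <= tolerance
                and all(abs(candidate - r) > tolerance for r in others)):
            return candidate
    return None  # either all agree, or all disagree (Target (c)/(d) territory)

# Two taped distances differ by about a foot; a third reading settles it:
find_mistake(328.41, 327.39, 328.43, tolerance=0.05)  # flags 327.39
```

When the function returns `None` with three widely scattered readings, no single measurement can be blamed, and the whole procedure needs another look.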

How much of a difference is acceptable before a mistake is presumed? It depends on a number of things such as what’s being measured, what type of equipment is used, etc. Pacing a distance would have a higher mistake threshold than measuring the distance with a total station. In the old days, measuring distances with a 100 ft steel tape required the crew to keep a tally of full tape lengths. A common mistake was "dropping a tape" which showed up when the forward and back distances differed by about 100 feet. An obvious tally mistake. But what if the distances differed by about one foot?

We need to identify and eliminate mistakes because, as we’ll see in a bit, if we include those measurements, the mistake is spread into all the measurements, degrading our good ones.

b. Systematic Errors

As we saw when introducing this error type, we can rid ourselves of these errors either mathematically or, in many cases, procedurally.

When you input the temperature and pressure in your total station, you are providing the information it needs to mathematically compensate for atmospheric effects on distance measurement. You may also have it set to compensate for earth curvature and refraction on longer distances. How about that prism offset?

In some cases, we don’t need to know the amount of systematic error if we can just get rid of it. Equipment maladjustment is generally a systematic error. Being surveyors, and a clever group at that, we use specific measurement procedures which allow those maladjustments to cancel.

Example

A collimation error in a level means that the line of sight is not truly horizontal when the bubble is centered, Figure C-2. That means each rod reading will be too high or too low.

We can collimate the level to determine the error amount.

Once we have that, we can either adjust the level to eliminate the collimation error or we can mathematically apply it to correct a reading.

Or, because the error is a function of distance, we can balance our backsight (BS) and foresight (FS) distances and the error will go away:

 

img61a
Figure C-2
Compensating Systematic Error Procedurally

 As you learn equipment use, you will be taught specific procedures which help compensate systematic errors.
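
The balanced-sight idea from Figure C-2 can be checked numerically. Everything below is an illustrative assumption: a line of sight inclined so each reading is off by e per unit of sight distance, plus made-up rod readings and distances.

```python
def reading(true_reading, distance, e):
    """Rod reading contaminated by a collimation error of e per unit distance."""
    return true_reading + e * distance

e = 0.0002                    # assumed collimation error: 0.0002 ft per ft of sight
true_bs, true_fs = 4.213, 6.178   # hypothetical true rod readings

# Balanced sights: the error term e*d is identical in BS and FS, so it
# subtracts out of the height difference.
dh_balanced = reading(true_bs, 150, e) - reading(true_fs, 150, e)

# Unbalanced sights: a residual error of e * (BS dist - FS dist) remains.
dh_unbalanced = reading(true_bs, 150, e) - reading(true_fs, 250, e)

print(round(dh_balanced, 4))    # -1.965, the true height difference
print(round(dh_unbalanced, 4))  # -1.985, off by e * (150 - 250) = -0.02
```

Notice that we never needed to know e to get the right answer with balanced sights; the procedure itself cancels the systematic error.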

c. Random Errors

Using appropriate equipment under favorable conditions, working with an experienced field crew, and repeating measurements are the best ways to minimize random errors. This is why we spend so much time familiarizing ourselves with our equipment and measuring so many times.


An example of angle measurement standards is the precise traverse theodolite and angle specifications in the FGCS Standards and Specifications for Geodetic Control Networks. These are summarized in Tables C-1 and C-2.

Table C-1
img7

Table C-2
img8

Both tables demonstrate that to achieve higher accuracy, finer resolution equipment is needed along with additional measurements. The general relationship of random error magnitude and number of measurements is shown in Figure C-3.

img124

The first few repetitions of a measurement produce the largest reduction in random error; adding more measurements reduces it further.

Eventually a point of diminishing returns is reached: additional measurements don't appreciably lower the error. It's up to the surveyor to determine when that point is reached based on the job and equipment available.

The relationship is non-linear, so doubling the number of measurements doesn’t cut the error in half. And the curve never reaches zero because some random error is always present.

Figure C-3
Repeated Measurements and Error
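
The shape of the curve in Figure C-3 follows from a standard statistical fact: the standard error of the mean of n independent measurements is the single-measurement standard deviation divided by the square root of n. A short sketch (the 0.01 standard deviation is an arbitrary assumption):

```python
import math

sigma = 0.01  # assumed standard deviation of a single measurement

# Standard error of the mean falls as 1/sqrt(n): doubling n shrinks
# the error by only about 29%, not 50%.
for n in (1, 2, 4, 8, 16, 32):
    print(n, round(sigma / math.sqrt(n), 5))

# Going from 1 to 4 measurements halves the error; going from 16 to 32
# buys far less in absolute terms - the point of diminishing returns.
```

This is also why the curve never reaches zero: sigma/√n shrinks with every added measurement but is positive for any finite n.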
 

 

Because random errors are “small” and tend to cancel with repeated measurements we analyze them statistically. This is where we encounter terms like “standard deviation”, “95% confidence interval”, “least squares”, “rejection limit from the mean”, etc. We’ll look at this basic analysis in the next chapter.