# Why does percentage error decrease as units of measurement increase?

Asked by Sapphire_Frenzy (90) November 18th, 2012

I have calculated the percentage error for the measurements made in a Hooke’s Law experiment, and the results are as follows:

| Extension (mm) | Percentage error (%) |
|----------------|----------------------|
| 22             | 2.27                 |
| 44             | 1.14                 |
| 66             | 0.75                 |
| 88             | 0.57                 |
| 109            | 0.46                 |
| 131            | 0.38                 |

A question I have to answer is ‘What do you notice about the percentage error as the measurement increases?’

Evidently, the percentage error decreases, but I don’t completely understand why this happens (I can’t put it into words, anyway).

My question is: why does the percentage error decrease as the measurement increases?

(Note: This is NOT a question I have to answer on my work, it’s just something I’m intrigued about)
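A quick numerical check of the table above (a sketch in Python; it assumes all the percentage errors come from a single fixed reading uncertainty, which is not stated in the original post):

```python
# Multiply each extension by its fractional error to recover the
# implied absolute error. If the percentage errors all come from one
# fixed reading uncertainty, this product should be roughly constant.
extensions = [22, 44, 66, 88, 109, 131]            # mm
pct_errors = [2.27, 1.14, 0.75, 0.57, 0.46, 0.38]  # %

for ext, pct in zip(extensions, pct_errors):
    abs_error = ext * pct / 100  # mm
    print(f"{ext:>3} mm -> absolute error ~ {abs_error:.2f} mm")
```

Every row comes out at roughly 0.5 mm, which is consistent with the same measuring instrument being used for every reading: the absolute error stays put while the measured length grows, so the percentage shrinks.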


Well, you can think of it this way: the longer the thing you measure, the smaller the error is as a fraction of the whole length. If you look at the formula, it’s an inverse proportionality.

For instance, if you use a mm-marked ruler and measure something that is 20.0 mm, your uncertainty is 0.5 mm (half the smallest division on the ruler), so you end up with 20.0 ± 0.5 mm. If you then measured something that is 30.0 mm, you would get 30.0 ± 0.5 mm.

The percentage error is |measured − actual| / actual length × 100% = |error| / actual length × 100%.

So for the 20.0 mm: 0.5 / 20.0 × 100% = 2.50%,
while for the 30.0 mm: 0.5 / 30.0 × 100% ≈ 1.67%.
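The ruler example above can be sketched in a few lines of Python (the 0.5 mm uncertainty and the two lengths are taken straight from the answer):

```python
# A fixed 0.5 mm reading uncertainty gives a smaller percentage error
# on the longer measurement, even though the absolute error is the same.
uncertainty = 0.5  # mm, half the smallest ruler division

for length in (20.0, 30.0):  # measured lengths in mm
    pct = uncertainty / length * 100
    print(f"{length} mm -> {pct:.2f}% error")
```

The absolute error never changes; only the denominator grows, which is exactly why the asker’s table trends downward.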

misserg (26)

@misserg‘s answer is correct. It’s simpler to understand with bigger numbers: if the error is 2 millimeters whether you measure 20 mm or 100 mm, then 2 out of 100 (2% error) is much smaller than 2 out of 20 (10% error).

JLeslie (53205)

As a machinist who deals with this stuff for a living, I wholeheartedly concur with @misserg‘s answer.

Think of it this way: if you or I lose a \$10 bill, it’s a major thing, but if Warren Buffett loses \$10, it’s practically negligible. Same absolute amount, but a far smaller percentage.

jerv (31025)
