The Electrician's Guide to the 16th Edition IEE Regulations


Chapter 8 - Inspection and Testing
  8.1 - Introduction
  8.2 - Inspection
  8.3 - Testing sequence
  8.4 - Continuity tests
  8.5 - Insulation tests
  8.6 - Earth testing
  8.7 - Test instrument requirements
  8.8 - Supporting paperwork

8.7.2 - Accuracy and resolution

The sub-section above indicated the levels of accuracy and resolution required of the instruments used to test an electrical installation. This sub-section explains the meaning of these two terms.

Accuracy

This term describes how closely the instrument is able to produce and display a correct result, and is usually expressed as a percentage. For example, if a voltage has a true level of 100 V and is measured by a voltmeter as 97 V, this is an error of -3 V. Expressed in terms of the true voltage, it is an error of three volts in one hundred volts, or three per cent (3%). In this case the reading is low, so the true error would be -3%. Had the error been +3% the reading would have been 103 V. In most cases we do not know if the reading is high or low, so the error is expressed as a percentage which may be positive or negative. Thus, if the voltmeter gave a reading of 100 V but was known to have an accuracy of 4%, the actual voltage could lie anywhere in the range from:

100 + (4/100) x 100 V to 100 - (4/100) x 100 V
or 100 + 4 V to 100 - 4 V
which is between 104 V and 96 V
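The arithmetic above can be sketched as a short Python helper (the function name is hypothetical, chosen for illustration):

```python
def reading_range(reading, accuracy_pct):
    """Return the (low, high) band within which the true value may lie,
    given a displayed reading and the instrument accuracy in per cent."""
    margin = reading * accuracy_pct / 100.0
    return (reading - margin, reading + margin)

# A voltmeter reading 100 V with a 4% accuracy:
low, high = reading_range(100.0, 4.0)
print(low, high)  # 96.0 104.0
```

Note that this simple form applies the percentage to the reading itself, which matches the example only because the reading here equals the full-scale value; the full-scale point is taken up below.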

The values given in the Guide are called basic instrument accuracies, which indicate the possible error of the instrument itself. In practice, there are many factors which affect the value to be measured, and which will further reduce the accuracy. These are divided into two types.

Instrument errors are largely due to the fact that the true error is not constant, varying from point to point over the instrument range. Other factors, such as battery voltage, ambient temperature, operator's competence, and the position in which the instrument is held or placed (such as vertical or horizontal) will also affect the reading.

Field errors concern external influences which may also reduce accuracy, and may include capacitance in the test object, external magnetic fields due to cables and equipment, variations in mains voltage during the test period, test lead resistance, contact resistance, mains pickup, thermocouple effects, and so on.

It is important to appreciate that percentage accuracy is taken in terms of the full scale reading of an analogue instrument, or the highest possible reading of a digital type. Thus, in a multi-range instrument, it is related to the scale employed, not to the reading taken. For example, if an ohmmeter with a known error of 5% is on its 100 Ohms scale and reads 8 Ohms, the true reading will lie between

8 + (5/100) x 100 Ohms = 8 + 5 Ohms = 13 Ohms and
8 - (5/100) x 100 Ohms = 8 - 5 Ohms = 3 Ohms
and not between
8 + (5/100) x 8 Ohms and 8 - (5/100) x 8 Ohms, which would be 8.4 Ohms and 7.6 Ohms

Thus it can be seen that the highest accuracy will result from using the lowest possible scale on a multi-range instrument.
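The full-scale rule can be demonstrated in a few lines of Python (a hypothetical helper, not from the Guide), repeating the 8 Ohm example on two different scales:

```python
def error_band(reading, full_scale, accuracy_pct):
    """Band of possible true values. The error is taken as a percentage
    of the instrument's full-scale value, not of the reading itself."""
    margin = full_scale * accuracy_pct / 100.0
    return (reading - margin, reading + margin)

# A 5% instrument reading 8 Ohms on its 100 Ohm scale:
print(error_band(8, 100, 5))  # (3.0, 13.0)

# The same reading taken on a 10 Ohm scale gives a much tighter band:
print(error_band(8, 10, 5))   # (7.5, 8.5)
```

The second call shows why the lowest scale that accommodates the reading gives the best accuracy: the fixed percentage is applied to a smaller full-scale value.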

Resolution

This term deals with the ability of an instrument to display a reading to the required degree of accuracy. For example, if we were measuring the earth-fault loop impedance of a socket outlet circuit protected by a 30 A miniature circuit breaker type 2, we would need to ensure that the impedance value was not more than 1.14 Ohms {Table 5.2}. If this were done using a digital meter with three digits and a lowest range of 99.9 Ohms, we could obtain a reading of 1.1 Ohms or another of 1.2 Ohms, but not 1.14 Ohms. This would indicate that the instrument resolution was to the nearest 0.1 Ohm, which is usually not close enough for electrical installation measurements. If the same three-digit instrument had a lower scale of 9.99 Ohms, it would be capable of reading 1.14 Ohms and would have a resolution of 0.01 Ohms.
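The effect of resolution can be sketched in Python by rounding a true value to the number of decimal places a given range can display (the function name is an assumption for illustration):

```python
def displayed(true_value, decimal_places):
    """Reading as shown by a digital meter whose range limits the
    display to a given number of decimal places."""
    return round(true_value, decimal_places)

# On a 99.9 Ohm range (one decimal place), 1.14 Ohms cannot be resolved:
print(displayed(1.14, 1))  # 1.1

# On a 9.99 Ohm range (two decimal places), it is displayed fully:
print(displayed(1.14, 2))  # 1.14
```

The first result illustrates the point in the text: a 0.1 Ohm resolution hides the difference between, say, 1.14 Ohms and 1.10 Ohms, which can matter when a limit such as 1.14 Ohms must not be exceeded.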




Extracted from The Electrician's Guide Fifth Edition by John Whitfield, published by EPA Press.
