Measuring Instruments

Measurement of process parameters is an important aspect of the control of a process. Examples of process parameters which are measured for control are pressure, temperature, flow, mass, length, and level.

Measurement is the process by which one can convert physical parameters into meaningful numbers. It is the process of determining a quantity, degree, or capacity by comparison (direct or indirect) with the accepted standards of the system of units being used. It is a process of comparing an unknown quantity with an accepted standard quantity. An instrument is a device or mechanism for determining the value or magnitude of a quantity or variable. It is used to determine the present value of a quantity under observation. The measuring instrument should not influence the quantity which is being measured.

The measuring process is one in which the property of an object or system under consideration is compared with an accepted standard unit, a standard defined for that particular property. A measuring instrument can determine the magnitude or value of the quantity to be measured. The measured quantity can be voltage, current, power, energy, etc.

Measurement can be done by the direct method of measurement or the indirect method of measurement. In the direct method of measurement, the value of a quantity is obtained directly by comparing the unknown with the standard. Direct methods are common for the measurement of physical quantities such as length, mass, and time. They involve no mathematical calculation to arrive at the result, for example, measurement of length by a graduated scale. The method is not very accurate since it depends on human judgement. In the indirect method of measurement, several parameters (to which the quantity to be measured is related) are measured directly, and the value is then determined by a mathematical relationship, for example, measurement of density by measuring mass and geometrical dimensions.
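As an illustration of the indirect method, the short Python sketch below (with made-up sample values) derives density from a directly measured mass and the geometrical dimensions of a cylindrical sample.

import math

def density_of_cylinder(mass_kg, diameter_m, height_m):
    # Indirect measurement: density is computed from the directly measured
    # mass and geometrical dimensions of a cylindrical sample
    volume_m3 = math.pi * (diameter_m / 2) ** 2 * height_m
    return mass_kg / volume_m3  # kg per cubic metre

# Example with illustrative values for a small metal cylinder
print(density_of_cylinder(mass_kg=0.616, diameter_m=0.02, height_m=0.25))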



A measuring instrument is a device which is used for comparing the unknown quantity with the unit of measurement or standard quantity. It can be defined as a machine or system which is designed to maintain a functional relationship between prescribed properties of physical variables and can include a means of communication to a human observer.

Measuring instruments are technical devices which are specially developed for the purpose of measuring specific quantities. A general property of measuring instruments is that their accuracy is known. Measuring instruments are divided into material measures, measuring transducers, indicating instruments, recording instruments, and measuring systems.

Measuring instruments can be either (i) analog instruments, or (ii) digital instruments. In an analog instrument, the measured parameter value is displayed by a moveable pointer. The pointer moves continuously with the variable parameter / analog signal being measured. The reading is prone to parallax error during reading. In a digital instrument, the measured parameter value is displayed in decimal (digital) form and the reading can be read directly as numbers. Hence, parallax error is eliminated. The concept used for the digital signal in a digital instrument is binary logic ‘0’ and ‘1’.

The key functional element of the instrument is the sensor, which has the function of converting the physical variable input into a signal variable output. Signal variables have the property that they can be manipulated in a transmission system, such as an electrical or mechanical circuit. Because of this property, the signal variable can be transmitted to an output or recording device which can be remote from the sensor. In electrical circuits, voltage is a common signal variable. In mechanical systems, displacement or force is normally used as the signal variable. If the signal output from the sensor is small, it needs to be amplified. In several cases, it is also necessary to provide a digital output for connection with a computer-based data acquisition system.

The signal output from the sensor ‘S’ can be displayed, recorded, or used as an input signal to some secondary device or system. In a basic instrument, the signal is transmitted to a ‘display or recording device’ where the measurement can be read by a human observer. The observed output is the measurement ‘M’. There are several types of display devices, ranging from simple scales and dial gauges to sophisticated computer display systems. The signal can also be used directly by some larger system of which the instrument is a part. Fig 1 shows signal processing in instruments.

Fig 1 Signal processing in instruments
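The following minimal Python sketch (function names and numerical values are illustrative assumptions, not taken from any particular instrument) shows the idea of the signal chain of Fig 1: a sensor converts the physical variable into a signal variable, the signal is amplified, and a display stage converts it into the observed measurement M.

def sensor(physical_value, sensitivity=0.01):
    # Convert the physical variable (e.g., pressure in bar) into a
    # signal variable (e.g., volts); the sensitivity value is illustrative
    return sensitivity * physical_value

def amplifier(signal_volts, gain=100.0):
    # Amplify the small sensor signal before transmission to the display
    return gain * signal_volts

def display(signal_volts, scale=1.0):
    # Convert the transmitted signal into the indicated measurement M
    return scale * signal_volts

pressure_bar = 0.75  # physical variable input
m = display(amplifier(sensor(pressure_bar)), scale=1.0 / (100.0 * 0.01))
print(m)             # observed measurement M, approximately 0.75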

A measuring system exists to provide information about the physical value of some variable being measured. In simple cases, the system can consist of only a single unit which gives an output reading or signal according to the magnitude of the unknown variable applied to it. However, in more complex measurement situations, a measuring system consists of several separate elements as shown in Fig 2. These components can be contained within one or more boxes, and the boxes holding individual measurement elements can be either close together or physically separate.

Fig 2 Elements of a measuring system

Measuring instruments are classified both as per the quantity measured by the instrument and as per the principle of operation. There are three normal principles of operation namely (i) electromagnetic, which utilizes the magnetic effects of electric currents, (ii) electrostatic, which utilizes the forces between electrically-charged conductors, and (iii) electro-thermic, which utilizes the heating effect of electric currents. The essential requirements of measuring instruments are that (i) they are not to alter the circuit conditions, and (ii) they are to consume a very small quantity of power.

Measuring instruments can be divided into two categories namely (i) absolute instruments, and (ii) secondary instruments. Absolute instruments give the quantity to be measured in terms of the instrument constant and its deflection. In case of secondary instruments, the deflection gives the magnitude of the electrical quantity to be measured directly. These instruments need to be calibrated by comparison with another standard instrument before being put into use.

Secondary instruments can be classified into three types namely (i) indicating instruments, (ii) recording instruments, and (iii) integrating instruments. Indicating instruments indicate the magnitude of an electrical quantity at the time when it is being measured. The indications are given by a pointer moving over a graduated dial. Recording instruments keep a continuous record of the variations of the magnitude of an electrical quantity to be observed over a defined period of time. Integrating instruments measure the total quantity, either of electricity or of electrical energy, supplied over a period of time, e.g., energy meters. Fig 3a shows types of measuring instruments.

Fig 3 Types of measuring instruments and damping characteristics

Indicating instruments consist essentially of a pointer which moves over a calibrated scale and which is attached to a moving system pivoted in bearings. The moving system is subjected to three torques namely (i) a deflecting (or operating) torque, (ii) a controlling (or restoring) torque, and (iii) a damping torque. The deflecting torque is produced by utilizing one of the effects of current or voltage, namely the magnetic, heating, chemical, electrostatic, or electromagnetic induction effect, and it causes the moving system of the instrument to move from its zero position. The method of producing this torque depends upon the type of instrument.

In case of the controlling torque, the position taken up by the moving system is somewhat indefinite under the influence of the deflecting torque, unless a controlling torque exists to oppose the deflecting torque. The controlling torque increases with increase in the deflection of the moving system. Under the influence of the controlling torque, the pointer returns to its zero position on removing the source producing the deflecting torque. Without the controlling torque, the pointer swings to its maximum position and does not return to zero after removing the source. The controlling torque is produced either by spring control or by gravity control.

In case of spring control, when the pointer is deflected, one spring unwinds itself while the other is twisted. This twist in the spring produces the restoring (controlling) torque, which is proportional to the angle of deflection of the moving system. In spring control, normally two springs are attached at either end of the spindle. The spindle is placed in jewelled bearings, so that the frictional force between the pivot and the spindle is minimum. The two springs are wound in opposite directions to compensate for the temperature error. The springs are made of phosphor bronze. When a current is supplied, the pointer deflects because of the rotation of the spindle. While the spindle rotates, the springs attached to the spindle oppose the movement of the pointer. The torque produced by the springs is directly proportional to the pointer deflection.

In case of gravity-controlled instruments, a small adjustable weight is attached to the spindle of the moving system such that the deflecting torque produced by the instrument acts against the action of gravity. Hence, a controlling torque is obtained. This weight is called the control weight. Another adjustable weight is also attached to the moving system for zero adjustment and balancing purposes. This weight is called the ‘balance weight’.
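A minimal Python sketch of how the steady deflection follows from the torque balance is given below; all constants are assumed for illustration. The pointer settles where the deflecting torque equals the controlling torque, which for spring control is proportional to the deflection angle and for gravity control is proportional to the sine of the deflection angle.

import math

def spring_deflection(deflecting_torque, spring_constant):
    # Spring control: Tc = K * theta, so theta = Td / K (radians)
    return deflecting_torque / spring_constant

def gravity_deflection(deflecting_torque, control_mass, arm_length, g=9.81):
    # Gravity control: Tc = m * g * l * sin(theta), so theta = asin(Td / (m * g * l))
    return math.asin(deflecting_torque / (control_mass * g * arm_length))

td = 2.0e-4  # assumed deflecting torque in newton-metre
print(spring_deflection(td, spring_constant=4.0e-4))                 # radians
print(gravity_deflection(td, control_mass=0.005, arm_length=0.02))   # radians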

In case of the damping torque, it is seen that the moving system of the instrument tends to move under the action of the deflecting torque. But on account of the controlling torque, it tries to occupy a position of rest where the two torques are equal and opposite. However, because of the inertia of the moving system, the pointer does not come to rest immediately but oscillates about its final deflected position and takes an appreciable time to come to the steady state. For overcoming this difficulty, a damping torque is to be developed by using a damping device attached to the moving system. The damping torque is proportional to the speed of rotation of the moving system. The damping torque is produced by (i) air friction damping, (ii) fluid friction damping, (iii) eddy current damping, and (iv) electromagnetic damping.

In case of air friction damping, a piston is mechanically connected to the spindle through a connecting rod. The pointer, which is fixed to the spindle, moves over a calibrated dial. When the pointer oscillates in the clockwise direction, the piston moves into the cylinder and the air inside the cylinder gets compressed. The air pushes the piston upwards and the pointer tends to move in the anti-clockwise direction. If the pointer oscillates in the anti-clockwise direction, the piston moves away and the pressure of the air inside the cylinder gets reduced. The external pressure becomes more than the internal pressure. Hence, the piston moves downwards and the pointer tends to move in the clockwise direction.

In case of eddy current damping, an aluminum circular disc is fixed to the spindle. This disc is made to move in the magnetic field produced by a permanent magnet. When the disc oscillates, it cuts the magnetic flux produced by the damping magnet. An electromotive force (emf) is induced in the circular disc by Faraday’s law. Eddy currents are established in the disc since it has several closed paths. By Lenz’s law, the current-carrying disc produces a force in a direction opposite to the oscillating force. The damping force can be varied by varying the projection of the magnet over the circular disc.

Depending upon the degree of damping introduced in the moving system, the instrument can have any one of the conditions as shown in Fig 3b. In case of under damped condition, the response is oscillatory. In case of over damped condition, the response is sluggish and it rises very slowly from its zero position to final position. The instrument is in critically damped condition, when the response settles quickly without any oscillation.
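The behaviour in Fig 3b can be reproduced with a simple numerical sketch; the Python code below (illustrative parameters, explicit Euler-type integration) treats the moving system as a second-order system and compares the step response for the under damped, critically damped, and over damped cases.

def pointer_response(damping_ratio, natural_freq=10.0, t_end=2.0, dt=0.001):
    # Integrate theta'' + 2*zeta*wn*theta' + wn^2*theta = wn^2 (unit step input)
    theta, omega = 0.0, 0.0
    history = []
    for _ in range(int(t_end / dt)):
        accel = natural_freq ** 2 * (1.0 - theta) - 2.0 * damping_ratio * natural_freq * omega
        omega += accel * dt
        theta += omega * dt
        history.append(theta)
    return history

for zeta, label in [(0.2, "under damped"), (1.0, "critically damped"), (3.0, "over damped")]:
    response = pointer_response(zeta)
    print(label, "peak:", round(max(response), 3), "final:", round(response[-1], 3))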

The measuring instrument should not influence the quantity which is being measured. There are two basic sets of characteristics of an instrument which determine the suitability and performance of the instrument for a specific measuring job. These are (i) static characteristics, and (ii) dynamic characteristics.

Static characteristics – These characteristics of an instrument are those characteristics which do not vary with time and are normally considered to check whether the given instrument is fit to be used for measurement. The static characteristics are checked by the process of calibration. These characteristics are normally considered for those instruments which are used to measure a stable process condition. These characteristics are described below.

Accuracy – Accuracy of an instrument is a measure of how close the output reading of the instrument is to the correct value. It is defined as the degree of closeness with which the instrument reading approaches the true value of the quantity being measured. It is the ability of an instrument or a measuring system to respond to the true value of a measured variable under process conditions. It is the degree of exactness (closeness) of a measurement compared to the expected (desired) value. It is a desirable quality in measurement. Accuracy can be expressed in three ways namely (i) point accuracy, (ii) accuracy as a percentage of the scale range, and (iii) accuracy as a percentage of the true value.

In practice, it is more normal to quote the inaccuracy value rather than the accuracy value for an instrument. Inaccuracy is the extent to which a reading can be wrong, and is frequently quoted as a percentage of the full-scale reading of the instrument. For this reason, it is an important system design rule that instruments are chosen such that their range is appropriate to the spread of values being measured, in order that the best possible accuracy is maintained in instrument readings. Hence, if pressures with expected values between 0 bar and 1 bar are being measured, then an instrument with a range of 0 bar to 10 bar is not to be used. The term measurement uncertainty is frequently used in place of inaccuracy.
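The effect of range selection on the worst-case error can be seen from the small Python sketch below, which assumes a hypothetical instrument with an inaccuracy of +/- 1 % of the full-scale reading.

def worst_case_error(full_scale, inaccuracy_percent_fs):
    # Maximum possible error when inaccuracy is quoted as a percentage of full scale
    return full_scale * inaccuracy_percent_fs / 100.0

# Measuring pressures expected between 0 bar and 1 bar with +/- 1 % FS instruments
print(worst_case_error(full_scale=1.0, inaccuracy_percent_fs=1.0))    # 0.01 bar
print(worst_case_error(full_scale=10.0, inaccuracy_percent_fs=1.0))   # 0.1 bar, ten times worse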

Threshold – If the input to an instrument is gradually increased from zero, the input has to reach a certain minimum level before the change in the instrument output reading is of a large enough magnitude to be detectable. This minimum level of input is known as the threshold of the instrument. Manufacturers vary in the way they specify threshold for instruments. Some give absolute values, whereas others give threshold as a percentage of full-scale readings.

Resolution – It is the least increment in the value of the input or output which can be detected, caused, or otherwise discriminated by the instrument. It is the smallest change in a measured variable to which the instrument responds. When an instrument is showing a particular output reading, there is a lower limit on the magnitude of the change in the input measured quantity which produces an observable change in the instrument output. Like threshold, resolution is sometimes specified as an absolute value and sometimes as a percentage of full-scale deflection. One of the major factors influencing the resolution of an instrument is how finely its output scale is divided into subdivisions.

Precision – Precision is the degree of exactness for which an instrument is designed or intended to perform. It is a measure of the consistency or repeatability of measurements, i.e., successive readings do not differ, or there is consistency of the instrument output for a given value of input. A very precise reading is not necessarily an accurate reading.

Precision is a term which describes an instrument’s degree of freedom from random errors. If a large number of readings are taken of the same quantity by a high precision instrument, then the spread of readings is very small. Precision is frequently, though incorrectly, confused with accuracy. High precision does not imply anything about measurement accuracy. A high precision instrument can have a low accuracy. Low accuracy measurements from a high precision instrument are normally caused by a bias in the measurements, which is removable by recalibration of the instrument.
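The distinction can be illustrated with the hypothetical readings in the Python sketch below: the small spread indicates high precision, while the constant offset (bias) from the reference value indicates low accuracy, which recalibration can remove.

from statistics import mean, stdev

readings = [20.48, 20.52, 20.50, 20.49, 20.51]   # hypothetical repeated readings
true_value = 20.00                               # assumed reference value

bias = mean(readings) - true_value    # systematic offset, hence low accuracy
spread = stdev(readings)              # small spread, hence high precision
print("bias:", round(bias, 3), "spread:", round(spread, 3))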

Expected value – It is the design value, i.e., the ‘most probable value’ shown by the calculations, which the instrument is expected to measure.

Repeatability – Repeatability of a measuring instrument describes the closeness of output readings when the same input is applied repetitively over a short period of time, with the same measurement conditions, same instrument and observer, same location and same conditions of use maintained throughout.

Reproducibility – Reproducibility describes the closeness of output readings for the same input when there are changes in the method of measurement, observer, measuring instrument, location, conditions of use, and time of measurement. Reproducibility of an instrument is defined as the degree of the closeness with which a given quantity can be repeatedly measured. High value of reproducibility means low value of drift. Drift is of three types namely (i) zero drift, (ii) span drift, and (iii) zonal drift. Perfect reproducibility means that the instrument has no drift.

The terms repeatability and reproducibility hence describe the spread of output readings for the same input. This spread is referred to as repeatability if the measurement conditions are constant, and as reproducibility if the measurement conditions vary.

Linearity – It is normally desirable that the output reading of an instrument is linearly proportional to the quantity being measured. The non-linearity is then defined as the maximum deviation of any of the output readings from this ideal straight line. Non-linearity is normally expressed as a percentage of the full-scale reading.

Sensitivity – It is a desirable quality in measurement. It is the ratio of the change in output (response) of the instrument to a change in the input or measured variable, and is defined by delta output / delta input. All instruments have some sensitivity to disturbance. All calibrations and specifications of an instrument are only valid under controlled conditions of temperature, pressure, etc. These standard ambient conditions are normally defined in the instrument specification. As variations occur in the ambient temperature or pressure etc., certain static instrument characteristics change, and the sensitivity to disturbance is a measure of the magnitude of this change. Such environmental changes affect instruments in two main ways, known as zero drift and sensitivity drift.

Sensitivity of measurement – The sensitivity of measurement is a measure of the change in instrument output which occurs when the quantity being measured changes by a given quantity. Hence, sensitivity is the ratio of the change in the output reading to the change in the measured quantity. The sensitivity of measurement is hence the slope of the straight line drawn between the output reading and the measured quantity.
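A small sketch of this slope calculation, with assumed calibration data, is given below.

# Illustrative calibration data: output reading against measured quantity
inputs = [0.0, 1.0, 2.0, 3.0, 4.0]        # e.g., pressure in bar
outputs = [0.0, 5.1, 10.0, 15.2, 20.1]    # e.g., scale deflection in degrees

sensitivity = (outputs[-1] - outputs[0]) / (inputs[-1] - inputs[0])
print(sensitivity, "degrees per bar")     # change in output per unit change in input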

Bias – It is a constant error which occurs in the instrument when the pointer does not start from the zero of the scale. It is also sometimes known as zero drift. Bias or zero drift describes the effect where the zero reading of an instrument is modified by a change in ambient conditions. This causes a constant error which exists over the full range of measurement of the instrument. Zero drift is normally removable by calibration. In the case of a bathroom scale, a thumb-wheel is normally provided which can be turned until the reading is zero with the scale unloaded, hence removing the bias. Bias is also normally found in instruments like voltmeters, which are affected by ambient temperature changes. Typical units by which such zero drift is measured are volts / deg C. This is frequently called the ‘zero drift coefficient’ related to temperature changes. If the characteristic of an instrument is sensitive to several environmental parameters, then it has several zero drift coefficients, one for each environmental parameter. A typical change in the output characteristic of a pressure gauge subject to zero drift is shown in Fig 4(a).

Fig 4 Effect of disturbances

Sensitivity drift – It is also known as scale factor drift. It defines the quantity by which the sensitivity of measurement of an instrument varies as ambient conditions change. It is quantified by the ‘sensitivity drift coefficient’, which defines how much drift there is for a unit change in each environmental parameter to which the instrument characteristic is sensitive. Several components within an instrument are affected by environmental fluctuations, such as temperature changes, e.g., the modulus of elasticity of a spring is temperature dependent. Fig 4(b) shows the effect which sensitivity drift can have on the output characteristic of an instrument. Sensitivity drift is measured in units of the form (angular degree / bar) / deg C. If an instrument suffers both zero drift and sensitivity drift at the same time, then the typical modification of the output characteristic is shown in Fig 4(c).
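The combined effect of the two drift coefficients can be sketched as below; the coefficient values and readings are assumptions chosen only to show how the zero shift and the changed slope enter the output.

def drifted_reading(true_input, zero_drift_coeff, sens_drift_coeff,
                    delta_temp, nominal_sensitivity=1.0):
    # Zero drift adds a constant offset, sensitivity drift changes the slope
    zero_shift = zero_drift_coeff * delta_temp
    sensitivity = nominal_sensitivity + sens_drift_coeff * delta_temp
    return zero_shift + sensitivity * true_input

# Assumed: 20 deg C rise, zero drift 0.02 units/deg C, sensitivity drift 0.001 per deg C
print(drifted_reading(5.0, 0.02, 0.001, delta_temp=20.0))   # 0.4 + 1.02 * 5 = 5.5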

True value – True value is the error-free value of the measured variable. It is given as the difference between the instrument reading and the static error. Mathematically, true value = obtained instrument reading – static error, and % error = [(standard reference value – obtained reading) / standard reference value] * 100.

Hysteresis – Fig 5 shows the output characteristic of an instrument which shows hysteresis. If the input measured quantity to the instrument is steadily increased from a negative value, the output reading varies in the manner shown in curve (a). If the input variable is then steadily decreased, the output varies in the manner shown in curve (b). The non-coincidence between these loading and unloading curves is known as hysteresis. Two quantities are defined, maximum input hysteresis and maximum output hysteresis. These are normally expressed as a percentage of the full-scale input or output reading respectively.

Fig 5 Instrument characteristic with hysteresis

Hysteresis is most commonly found in instruments which contain springs. It is also evident when friction forces in a system have different magnitudes depending on the direction of movement, such as in the pendulum-scale mass-measuring device. Hysteresis can also occur in instruments which contain electrical windings formed round an iron core, because of the magnetic hysteresis in the iron. This occurs in devices like the variable inductance displacement transducer, the linear variable differential transformer (LVDT), and the rotary differential transformer.

Dead zone / band / space – It is defined as the range of different input values over which there is no change in the output value. It is that range of possible values for which the instrument does not give a reading even though there are changes in the parameter being measured. Any instrument which shows hysteresis also displays dead space, as shown in Fig 5. However, some instruments which do not suffer from any substantial hysteresis can still show a dead space in their output characteristics. Backlash in gears is a typical cause of dead space. Backlash is normally experienced in gear-sets used to convert translational motion into rotational motion.

Nominal value – It is the value of input and output which has been stated by the manufacturer in the user manual.

Range – Range is the difference between the maximum and minimum values for which the instrument can be used for the measurement. The instrument range is stated by the manufacturer of the instrument.

Dynamic characteristics – These characteristics of the instrument are concerned with the measurement of quantities which vary with time. These characteristics are those which change within a period of time which is normally very short in nature. The different dynamic characteristics are described below.

Speed of response – It is the rapidity with which an instrument responds to changes in the measured quantity.

Fidelity – It is the degree to which an instrument indicates the measured variable without dynamic error.

Lag – It is the retardation or delay in the response of the instrument to changes in the measured quantity.

Error in measurement

Measurement is the process of comparing an unknown quantity with an accepted standard quantity. It involves connecting a measuring instrument into the system under consideration and observing the resulting response on the instrument. The measurement thus obtained is a quantitative measure of the so-called ‘true value’ (since it is very difficult to define the true value, the term ‘expected value’ is used). Any measurement is affected by several variables and hence the results rarely reflect the expected value. For example, connecting a measuring instrument into the circuit under consideration always disturbs (changes) the circuit, causing the measurement to differ from the expected value. Some factors which influence the measurements are related to the measuring instruments themselves. Other factors are related to the person using the instrument. The degree to which a measurement nears the expected value is expressed in terms of the error of measurement.

Error is the deviation of the value obtained from measurement from the desired standard value. Mathematically, error = obtained value – standard reference value.

Error can be expressed either as an absolute error or as a percentage error. Absolute error can be defined as the difference between the expected value of the variable and the measured value of the variable, or e = Yn – Xn, where e is the absolute error, Yn is the expected value, and Xn is the measured value. Hence, % error = (absolute error / expected value) * 100 = [(Yn – Xn) / Yn] * 100. It is more frequently expressed as an accuracy rather than an error. Hence, accuracy A = 100 % – % error.
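These relationships can be checked with the short Python sketch below, using hypothetical values of an expected reading of 50.0 V and a measured reading of 49.1 V.

def measurement_error(expected_value, measured_value):
    # Absolute error e = Yn - Xn, percentage error, and accuracy
    e = expected_value - measured_value
    percent_error = (e / expected_value) * 100.0
    accuracy = 100.0 - percent_error
    return e, percent_error, accuracy

print(measurement_error(expected_value=50.0, measured_value=49.1))
# approximately (0.9, 1.8, 98.2), i.e., 1.8 % error and 98.2 % accuracy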

Error is defined as the difference between the true value (expected value) of the measurand and the measured value indicated by the instrument. Error can be expressed either as absolute error or as a percentage of error. Absolute errors are defined as the difference between the expected value of the variable and the measured value of the variable. Errors are normally categorized under the three major categories namely (i) gross error, (ii) systematic error, and (iii) random error.

Gross error – This error is mainly because of human mistakes in reading or in using instruments, or errors in recording observations. Errors can also occur because of incorrect adjustment of instruments and computational mistakes. These errors cannot be treated mathematically. The complete elimination of gross errors is not possible, but one can minimize them. Some errors are easily detected while others can be elusive. One of the basic gross errors which occurs frequently is the improper use of an instrument. This error can be minimized by taking proper care in reading and recording of the measurement parameter. In general, indicating instruments change with ambient conditions to some extent when connected into a complete circuit.

Systematic error – A constant uniform deviation of the operation of an instrument is known as a systematic error. It is because of problems with instruments, environmental effects, or observational errors. There are two types of systematic errors namely (i) static error, and (ii) dynamic error.

The static error of a measuring instrument is the numerical difference between the true value of a quantity and its value as obtained by measurement. Static error occurs because of shortcomings of the instrument, such as defective or worn parts, ageing, or effects of the environment on the instrument. Static error is caused by limitations of the measuring device or the physical laws governing its behaviour. This error is sometimes referred to as bias. Static error influences all measurements of a quantity alike. Static errors are categorized as (i) instrument errors, (ii) environmental errors, and (iii) observational errors.

Instrument errors are because of friction in the bearings of the different moving components, irregular spring tension, stretching of the spring or reduction in tension because of improper handling, overloading of the instrument, improper calibration, or faulty instruments. Instrumental errors are inherent in measuring instruments because of their mechanical structure. Instrumental errors can be avoided by (i) selecting a suitable instrument for the particular measurement application, (ii) applying correction factors after determining the quantity of instrumental error, and (iii) calibrating the instrument against a standard.

Environmental errors are because of conditions which are external to the measuring device, including conditions in the area surrounding the instrument, such as the effects of changes in temperature, humidity, barometric pressure, or changes in the magnetic or electrostatic field. Instruments can show errors if used under these conditions. Subjecting instruments to harsh environments, such as high temperature, pressure, or humidity, or strong electrostatic or electromagnetic fields, can have detrimental effects, thereby causing error. These errors can be avoided by (i) air conditioning, (ii) hermetical sealing of certain components in the instruments, and (iii) using magnetic shields.

Observational errors are those errors which are introduced by the observer. The two most common observational errors are probably the parallax error introduced in reading a meter scale, and the error of estimation when obtaining a reading from a meter scale. These errors are caused by the habits of individual observers. For example, an observer can always introduce an error by consistently holding his head too far to the left while reading a needle and scale.

Dynamic error is the difference between the true value of a quantity changing with time and the value indicated by the instrument. The dynamic error is caused by the instrument not responding fast enough to follow the changes in the measured variable.

Random error – This type of error is normally because of the accumulation of a large number of small effects and can be of real concern only in measurements needing a high degree of accuracy. The cause of such error is unknown or not determined in the ordinary process of making measurement.

Random error is an indeterminate error. This error takes place because of the causes which cannot be directly established because of random variations in the parameter or the system of measurement. Hence, there is no control over them. Their random nature causes both high and low values to average out. Multiple trials help to minimize their effects. Random errors can be analyzed statistically.

A statistical analysis of measurement data is common practice since it allows an analytical determination of the uncertainty of the final test result. The outcome of a certain measurement method can be predicted on the basis of sample data without having detailed information on all the disturbing factors. To make statistical methods and interpretations meaningful, a large number of measurements are normally required. Also, systematic errors are to be small compared with residual or random errors, since statistical treatment of data cannot remove a fixed bias contained in all the measurements. During the statistical analysis, analysis is normally done for (i) arithmetic mean, (ii) deviation from the mean, (iii) average deviation, (iv) standard deviation, (v) probability of errors consisting of normal distribution of errors, (vi) range of a variable, and (vii) probable error.
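A minimal sketch of such an analysis on a set of assumed repeated readings is given below; the probable error uses the factor 0.6745 of the standard deviation, which applies to a normal distribution of errors.

from statistics import mean, stdev

readings = [101.2, 101.7, 101.3, 101.0, 101.5, 101.3, 101.2, 101.4]  # assumed data

m = mean(readings)
deviations = [r - m for r in readings]                   # deviation from the mean
avg_deviation = sum(abs(d) for d in deviations) / len(readings)
std_deviation = stdev(readings)                          # sample standard deviation
probable_error = 0.6745 * std_deviation                  # normal distribution of errors
value_range = max(readings) - min(readings)              # range of the variable

print("mean:", round(m, 3), "average deviation:", round(avg_deviation, 3))
print("standard deviation:", round(std_deviation, 3), "probable error:", round(probable_error, 3))
print("range:", round(value_range, 3))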

Measurement standards

Standard classifications – Electrical measurement standards are precise resistors, capacitors, inductors, voltage sources, and current sources, which can be used for comparison purposes when measuring electrical quantities. For example, resistance can be accurately measured by means of a Wheatstone bridge which uses a standard resistor. Similarly, standard capacitors and inductors can be used in bridge (or other) methods to accurately measure capacitance and inductance.

Measurement standards are classified in four levels namely (i) international standards, (ii) primary standards, (iii) secondary standards, and (iv) working standards.

International standards are defined by international agreements, and are maintained at the International Bureau of Weights and Measures in France. These are as accurate as it is scientifically possible to achieve. These can be used for comparison with primary standards, but are otherwise unavailable for any application.

Primary standards are maintained at institutions in various countries around the world, such as the National Bureau of Standards in Washington. They are also constructed for the greatest possible accuracy, and their main function is checking the accuracy of secondary standards.

Secondary standards are used in industry as references for calibrating high-accuracy equipment and components, and for verifying the accuracy of working standards. Secondary standards are periodically checked at the institutions which maintain primary standards.

Working standards are the standard resistors, capacitors, and inductors normally found in a measurement laboratory. Working standard resistors are normally constructed of manganin or a similar material, which has a very low temperature coefficient. They are normally available in resistance values ranging from 0.01 ohm to 1 mega-ohm, with typical accuracies of +/- 0.01 % to +/- 0.1 %. A working standard capacitor can be of the air-dielectric type, or it can be constructed of silvered mica. Available capacitance values are 0.001 micro-farad to 1 micro-farad with a typical accuracy of +/- 0.02 %. Working standard inductors are available in values ranging from 100 micro-henry to 10 henry with typical accuracies of +/- 0.1 %. Calibrators provide standard voltages and currents for calibrating voltmeters and ammeters.

