Calibration

Calibration or instrument calibration is one of the primary processes used to maintain instrument accuracy. It is the operation that, under specified conditions, in a first step establishes a relation between the quantity values (with measurement uncertainties) provided by measurement standards and the corresponding indications (with their associated measurement uncertainties), and, in a second step, uses this information to establish a relation for obtaining a measurement result from an indication.

Calibration is the process of configuring an instrument to provide a result for a sample within an acceptable range. Eliminating or minimizing factors that cause inaccurate measurements is a fundamental aspect of instrumentation design.

Although the exact procedure may vary from product to product, the calibration process generally involves using the instrument to test samples of one or more known values called “calibrators.” The results are used to establish a relationship between the measurement technique used by the instrument and the known values. The process in essence “teaches” the instrument to produce results that are more accurate than they would otherwise be. The instrument can then provide more accurate results when samples of unknown value are tested in the normal use of the product.
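To make the two-step definition above and this “teaching” step concrete, the sketch below shows a minimal two-point linear calibration in Python. The calibrator values, the raw indications, and the assumption of a linear response are all hypothetical and chosen only for illustration.

    # A minimal sketch of a two-point linear calibration.
    # Calibrator values, raw indications, and the linear model are hypothetical.

    def fit_two_point(cal_values, raw_indications):
        """Fit result = slope * indication + offset from two calibrators."""
        (v1, v2), (r1, r2) = cal_values, raw_indications
        slope = (v2 - v1) / (r2 - r1)
        offset = v1 - slope * r1
        return slope, offset

    def apply_calibration(indication, slope, offset):
        """Convert a raw indication into a calibrated measurement result."""
        return slope * indication + offset

    # Step 1: "teach" the instrument with two calibrators of known value.
    cal_values = (50.0, 200.0)        # known calibrator values
    raw_indications = (48.1, 205.7)   # what the uncalibrated instrument reported
    slope, offset = fit_two_point(cal_values, raw_indications)

    # Step 2: use the fitted relation on a sample of unknown value.
    print(apply_calibration(120.0, slope, offset))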

Calibrations are performed using only a few calibrators to establish the correlation at specific points within the instrument’s operating range. While it might be desirable to use a large number of calibrators to establish the calibration relationship, or “curve”, the time and labor associated with preparing and testing a large number of calibrators might outweigh the resulting level of performance.
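As a rough, hypothetical illustration of this tradeoff, the sketch below compares a two-point calibration with a piecewise-linear curve built from five calibrators against an assumed, slightly nonlinear instrument response; the extra calibrators reduce the residual error at an intermediate point, at the cost of preparing and testing more samples.

    # Hypothetical illustration: more calibrators give a finer calibration
    # "curve", at the cost of preparing and testing more samples.
    import numpy as np

    # Assumed true instrument response (slightly nonlinear) for demonstration.
    def raw_indication(true_value):
        return 0.95 * true_value + 0.0004 * true_value**2 + 2.0

    two_point_values = np.array([50.0, 200.0])
    multi_point_values = np.array([25.0, 50.0, 100.0, 150.0, 200.0])

    # Calibration "curve": map a raw indication back to a known value.
    def calibrate(indication, cal_values):
        cal_indications = raw_indication(cal_values)
        return np.interp(indication, cal_indications, cal_values)

    sample = 125.0                      # unknown sample's true value
    reading = raw_indication(sample)    # what the instrument reports
    print(calibrate(reading, two_point_values) - sample)    # residual error
    print(calibrate(reading, multi_point_values) - sample)  # smaller residual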

From a practical standpoint, a tradeoff must be made between the desired level of product performance and the effort associated with accomplishing the calibration. The instrument will provide the best performance when the intermediate points provided in the manufacturer’s performance specifications are used for calibration; the specified process essentially eliminates, or “zeroes out”, the inherent instrument error at these points.

The importance of calibration

Ideally, a product would produce test results that exactly match the sample value, with no error at any point within the calibrated range; plotted against the sample values, these ideal results fall on a straight line. Without calibration, however, an actual product may produce test results that differ from the sample value, with a potentially large error.

Calibrating the product can improve this situation significantly. During calibration, the product is “taught”, using the known values of Calibrators 1 and 2, what result it should provide. The process eliminates the errors at these two points, in effect moving the uncalibrated response curve much closer to the ideal. The error is reduced to zero at the calibration points, and the residual error at any other point within the operating range is within the manufacturer’s published linearity or accuracy specification.
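A small numerical sketch of this before/after picture, again with made-up numbers and an assumed raw response: after a two-point calibration the error at the two calibrator values drops to essentially zero, while a smaller residual error remains at an intermediate point.

    # Hypothetical before/after comparison for a two-point calibration.
    def uncalibrated_reading(true_value):
        # Assumed raw response with gain and offset errors plus slight curvature.
        return 1.04 * true_value + 3.0 - 0.0002 * true_value**2

    cal1, cal2 = 50.0, 200.0                      # calibrator values
    r1, r2 = uncalibrated_reading(cal1), uncalibrated_reading(cal2)
    slope = (cal2 - cal1) / (r2 - r1)
    offset = cal1 - slope * r1

    def calibrated_result(true_value):
        return slope * uncalibrated_reading(true_value) + offset

    for point in (cal1, 125.0, cal2):
        before = uncalibrated_reading(point) - point
        after = calibrated_result(point) - point
        print(f"value {point:6.1f}: error before {before:+.2f}, after {after:+.2f}")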

Proper instrument calibration is important to prevent potential error sources from degrading the result. Several factors arising during and after a calibration can affect its outcome. Among these are:

Using the wrong calibrator values: it is important to follow the instructions for use closely during the calibration process. Disregarding the instructions and selecting the wrong calibrator values will “teach” the instrument incorrectly and produce significant errors over the entire operating range. While many instruments have software diagnostics that alert the operator if the calibrators are tested in the incorrect order (e.g. Calibrator 2 before Calibrator 1), the instrument may accept one or more calibrators of the wrong value without detecting the operator error.
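The kind of software diagnostic mentioned above might look roughly like the following sketch. The calibrator names, indication windows, and error handling are assumptions for illustration, not the behavior of any particular instrument.

    # Hypothetical sketch of a calibration-order and plausibility check.
    EXPECTED = {                      # assumed nominal indication windows
        "Calibrator 1": (40.0, 60.0),
        "Calibrator 2": (180.0, 220.0),
    }

    def check_calibration_run(readings):
        """readings: list of (calibrator_name, indication) in the order tested."""
        expected_order = list(EXPECTED)
        tested_order = [name for name, _ in readings]
        if tested_order != expected_order:
            raise ValueError(f"Calibrators tested out of order: {tested_order}")
        for name, indication in readings:
            low, high = EXPECTED[name]
            if not (low <= indication <= high):
                # A wrong-value calibrator may still land inside the window, so
                # this check reduces, but does not eliminate, operator error.
                raise ValueError(f"{name} indication {indication} outside {low}-{high}")

    check_calibration_run([("Calibrator 1", 48.1), ("Calibrator 2", 205.7)])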

Calibrator formulation tolerance: it is important to use calibrators that are formulated to tight tolerance specifications by a reputable manufacturer. There is a tolerance associated with formulating a calibrator/control due to normal variations in the instrumentation and quality control processes. This tolerance can affect the mean value obtained when using the calibrator.
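A brief worked example of how formulation tolerance can propagate into results, using made-up numbers and an idealized instrument whose raw indication equals the true value of the material tested: if a calibrator labelled 200 is actually formulated at 198, the fitted calibration is biased and every subsequent result inherits part of that bias.

    # Hypothetical effect of calibrator formulation tolerance.
    # Assume, for simplicity, an instrument whose raw indication equals the
    # true value of the material tested; all numbers are illustrative.

    def two_point_fit(taught_values, indications):
        (v1, v2), (r1, r2) = taught_values, indications
        slope = (v2 - v1) / (r2 - r1)
        return slope, v1 - slope * r1

    labelled = (50.0, 200.0)     # values printed on the calibrator labels
    actual = (50.0, 198.0)       # Calibrator 2 formulated 1% low of its label

    # The instrument "sees" the actual material but is taught the labelled values.
    slope, offset = two_point_fit(labelled, actual)

    sample_true = 120.0
    result = slope * sample_true + offset
    print(f"reported {result:.2f} for a true value of {sample_true}")  # ~120.95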

Sample preparation technique: as in normal testing, good sample preparation technique is essential to obtaining the best performance from the calibration process. Conditions such as pipetting different sample volumes, allowing air bubbles in the samples, or preparing the samples too early so that evaporation occurs can all increase the variation in the results obtained from the calibrators tested during calibration.

Ambient temperature effects: even when a calibration is performed properly, environmental factors such as the ambient temperature can introduce errors that may not be readily evident when testing samples of unknown value. Components used in an instrument, such as its electronics, may be affected by changes in operating temperature. If an instrument is calibrated at one temperature and then operated at a significantly different temperature, the temperature-induced error can degrade the accuracy of its results. It is therefore important to periodically calibrate an instrument at a temperature close to that at which it will be operated.
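A simple, hypothetical model of such a temperature-induced error, assuming a small gain drift with ambient temperature rather than data from any real instrument: an instrument calibrated at 20 °C but operated at 35 °C picks up a gain error that the earlier calibration cannot remove.

    # Hypothetical model of gain drift with ambient temperature.
    GAIN_TEMPCO = 0.0005      # assumed +0.05 % gain change per degree C

    def indication(true_value, ambient_c, cal_temp_c=20.0):
        gain = 1.0 + GAIN_TEMPCO * (ambient_c - cal_temp_c)
        return gain * true_value   # calibration removed other errors at cal_temp_c

    true_value = 150.0
    print(indication(true_value, 20.0) - true_value)   # 0.0 at calibration temperature
    print(indication(true_value, 35.0) - true_value)   # ~1.1 error at 35 degrees C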
