ISO/IEC 17025: Uncertainty of Measurement Explained and Applied

The latest revision of ISO/IEC 17025 reinforces a principle that underpins all technically competent laboratory work:

A measurement result without a quantified uncertainty is incomplete.

Uncertainty of Measurement (MU) is the quantified doubt associated with a measurement result. No matter how capable the instrument or how experienced the operator, variability is inherent. What distinguishes a competent laboratory is not the absence of variability but the ability to understand it, calculate it and transparently report it.


What Uncertainty of Measurement Really Means

Measurement uncertainty expresses the range within which the true value of the measurand is believed to lie, at a stated level of confidence.

It does not imply the measurement is wrong. It demonstrates that the laboratory understands the factors influencing the result and has quantified their effect.

Typical contributors include:

  • Instrument resolution and calibration uncertainty.
  • Reference standards.
  • Environmental conditions (temperature, humidity, vibration).
  • Operator technique.
  • Method repeatability and reproducibility.
  • Sampling variation.

International guides such as the GUM (Guide to the Expression of Uncertainty in Measurement) emphasise that uncertainty must be evaluated because customers rely on laboratory data for critical safety, regulatory and commercial decisions. Both underestimating and overestimating uncertainty introduce risk.


The Two Fundamental Types of Uncertainty

Aligned with GUM methodology and national guidance such as NPL GPG11, uncertainty components fall into two categories:

Type A (Statistical) Derived from repeated observations and analysed using statistical methods.

Examples include:

  • Standard deviation from repeat measurements.
  • Repeatability studies.

Type B (Non-Statistical) Derived from scientific judgement and external information sources.

Examples include:

  • Calibration certificates.
  • Manufacturer specifications.
  • Environmental data.
  • Reference material certificates.
  • Historical performance data.

Both types must be expressed as standard uncertainties (1σ) before they can be combined.
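As a minimal sketch (the readings and certificate value are hypothetical), both types can be reduced to 1σ standard uncertainties before combination:

```python
import math
import statistics

# Type A: standard uncertainty of the mean from repeat measurements
readings = [10.01, 10.03, 9.99, 10.02, 10.00]   # hypothetical values in mm
s = statistics.stdev(readings)                   # sample standard deviation
u_type_a = s / math.sqrt(len(readings))          # standard uncertainty of the mean

# Type B: a calibration certificate stating ±0.02 mm at k = 2
u_type_b = 0.02 / 2                              # divide by k to recover 1 sigma

print(round(u_type_a, 4), u_type_b)
```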


How to Calculate Measurement Uncertainty: A Structured 7-Step Workflow

A practical implementation model follows seven disciplined steps.

1. Define the Measurand

Clearly define:

  • What is being measured.
  • Under what conditions.
  • Using which method.

Ambiguity at this stage propagates directly into uncertainty.

2. Identify All Significant Contributors

List every factor that could influence the result:

  • Instrument.
  • Environment.
  • Method.
  • Operator.
  • Reference standards.
  • Sampling.

Exclude only those contributors that are demonstrated to be negligible.

3. Quantify Each Contributor

Assign a numerical standard uncertainty to each source.

  • Type A: Use statistical analysis (e.g. standard deviation of repeated measurements).
  • Type B: Convert tolerances, ranges or certificate values into standard deviation equivalents.

Example: If a calibration certificate states ±0.02 mm at k = 2, then the standard uncertainty is:

u = 0.02 mm / 2 = 0.01 mm

4. Convert All Contributors to Standard Uncertainty (1σ)

All contributors must be expressed on the same basis, typically as standard deviations. This may require:

  • Dividing rectangular distributions by √3.
  • Dividing triangular distributions by √6.
  • Adjusting stated confidence intervals to 1σ equivalents.
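These divisor conventions can be captured in a small helper (a sketch; the distribution names and example half-widths are illustrative):

```python
import math

def standard_uncertainty(half_width: float, distribution: str) -> float:
    """Convert a ± half-width to a 1-sigma standard uncertainty."""
    divisors = {
        "rectangular": math.sqrt(3),  # equal probability across the interval
        "triangular": math.sqrt(6),   # values cluster toward the centre
        "normal_k2": 2.0,             # quoted at k = 2 (~95% confidence)
        "normal_k3": 3.0,             # quoted at k = 3 (~99.7% confidence)
    }
    return half_width / divisors[distribution]

print(standard_uncertainty(0.05, "rectangular"))  # ≈ 0.0289
print(standard_uncertainty(0.02, "normal_k2"))    # 0.01
```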

5. Combine the Uncertainties

Use the Root Sum Square (RSS) method for independent contributors:

Uc = √(u₁² + u₂² + … + uₙ²)

This produces the combined standard uncertainty, Uc.

This formula assumes contributors are independent. Where correlation exists, appropriate covariance terms must be included in the combination.
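For independent contributors, the combination (and the expansion in the next step) reduces to a few lines; the 1σ inputs below are hypothetical values in mm:

```python
import math

def combine_rss(*standard_uncertainties: float) -> float:
    """Root Sum Square combination of independent standard uncertainties."""
    return math.sqrt(sum(u ** 2 for u in standard_uncertainties))

u_c = combine_rss(0.010, 0.015, 0.008)  # combined standard uncertainty (1 sigma)
U = 2 * u_c                             # expanded uncertainty at k = 2 (~95%)
print(round(u_c, 4), round(U, 4))       # ≈ 0.0197  0.0394
```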

6. Apply the Coverage Factor

To express uncertainty at a higher confidence level, apply a coverage factor k:

U = k ⋅ Uc

Where typically:

  • k ≈ 2 for approximately 95% confidence
  • k ≈ 3 for approximately 99.7% confidence

The result is the expanded uncertainty (U).

7. Report the Result Transparently

A compliant report should state:

  • The measurement result.
  • The expanded uncertainty.
  • The coverage factor used (k).
  • The associated confidence level.

Example:

10.00 mm ± 0.05 mm (k = 2, approximately 95% confidence)

Transparent reporting is essential for defensibility and audit compliance.
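A reporting statement in the form shown above can be generated consistently, for instance (the values are hypothetical):

```python
result, expanded_u, k, confidence = 10.00, 0.05, 2, 95  # hypothetical values

statement = (f"{result:.2f} mm ± {expanded_u:.2f} mm "
             f"(k = {k}, approximately {confidence}% confidence)")
print(statement)  # 10.00 mm ± 0.05 mm (k = 2, approximately 95% confidence)
```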


Practical Rules for Everyday Calculations

For routine calculations, simplified propagation rules apply:

  • Addition/Subtraction: Add absolute uncertainties.
  • Multiplication/Division: Add relative (percentage) uncertainties.
  • Multiplication by a Constant: Multiply the uncertainty by the same constant.

These rules help maintain proportionality when propagating uncertainty through derived calculations.
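The three rules can be sketched numerically (all values are hypothetical):

```python
# Addition/subtraction: absolute uncertainties add
length, u_length = 10.00, 0.05   # mm
offset, u_offset = 2.00, 0.02    # mm
u_sum = u_length + u_offset      # ≈ 0.07 mm

# Multiplication/division: relative uncertainties add
voltage, u_voltage = 5.00, 0.05  # 1% relative
current, u_current = 2.00, 0.04  # 2% relative
power = voltage * current
u_power = power * (u_voltage / voltage + u_current / current)  # ≈ 0.30 W (3%)

# Multiplication by a constant: uncertainty scales with the constant
u_doubled = 2 * u_length         # 0.10 mm

print(round(u_sum, 2), round(u_power, 2), u_doubled)
```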


Decision Rules, Guard Banding & Risk

Under ISO/IEC 17025, uncertainty is inseparable from decision rules. When providing statements of conformity:

  • Guard bands must reflect measurement uncertainty.
  • False acceptance and false rejection risks must be understood.
  • Customer risk exposure must be transparent.
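A minimal guarded-acceptance decision rule might look like the sketch below; the tolerance and U are hypothetical, and a real decision rule should follow guidance such as ILAC-G8 or the customer agreement:

```python
def conforms(measured: float, lower: float, upper: float, expanded_u: float) -> bool:
    """Guarded acceptance: pass only if the result, widened by the expanded
    uncertainty U, lies entirely inside the specification limits."""
    return (measured - expanded_u) >= lower and (measured + expanded_u) <= upper

# hypothetical tolerance 10.00 ± 0.10 mm with U = 0.05 mm (k = 2)
print(conforms(10.02, 9.90, 10.10, 0.05))  # True  — comfortably inside
print(conforms(10.08, 9.90, 10.10, 0.05))  # False — falls in the guard band
```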

Accreditation bodies scrutinise whether laboratories correctly integrate uncertainty into conformity assessment.

This is where MU becomes strategic.


A Non-Obvious Insight: Measurement Uncertainty Is a Risk Control Mechanism

Modern laboratory governance treats uncertainty as:

  • A risk management input.
  • A technical credibility indicator.
  • A protection against legal and commercial exposure.
  • A determinant of Calibration and Measurement Capability (CMC).

An underestimated MU may mask process instability. An overestimated MU may erode competitiveness.

The balance requires technical judgement, statistical rigour and disciplined review.


Common Audit Findings

Recurring non-conformities include:

  • Outdated or copy-paste uncertainty budgets.
  • Incomplete identification of contributors.
  • Failure to convert Type B inputs correctly to standard uncertainties.
  • Reporting expanded uncertainty without stating k or the confidence level.
  • No linkage between MU and conformity decisions.

These issues are procedural, not mathematical, and entirely preventable with strong technical leadership.


Bringing It All Together

Uncertainty of Measurement is not administrative overhead. It is engineering integrity expressed numerically.

When calculated correctly and applied strategically it:

  • Reduces enterprise risk exposure.
  • Supports defensible decisions.
  • Enhances audit resilience.
  • Aligns with ISO/IEC 17025 expectations.

In a data-driven industrial environment, credibility belongs to laboratories that understand not only what they measure but how certain they are about it.

For transparency: all reflections are my own and draw on years of cross-sector experience, not on any single engagement, employer or client.

James Gamble

23/02/2026
