
Measurement Uncertainty Calculation: A Plain-English Guide

Published 9 May 2026

A measurement without an uncertainty statement is not a measurement — it is a guess with extra decimal places. Whether the value is "in tolerance" depends entirely on how confident you are that the true value lies near the reading you wrote down.

This guide walks through what measurement uncertainty is, where it comes from, and how to calculate it using the method defined in the Guide to the Expression of Uncertainty in Measurement (JCGM 100:2008) — commonly called the GUM — and applied in UK practice via UKAS M3003. It is written for UK quality managers and metrologists who need a defensible number for a certificate, an audit, or a tolerance decision under ISO 9001 Clause 7.1.5. Pair it with the interactive Measurement Uncertainty Calculator, which applies the divisors below automatically.

What Measurement Uncertainty Actually Is

Measurement uncertainty is a non-negative parameter characterising the dispersion of the quantity values that could reasonably be attributed to the measurand, based on the information used (the formal definition from VIM 2.26 / the GUM). In plain English: how wide is the band of values that could reasonably be the truth, given everything you know about the measurement.

Two distinctions matter from the start. Uncertainty is not error — error is the difference between a single measurement and the (unknowable) true value, whereas uncertainty is what you can know about how dispersed the possible values are. Uncertainty is not tolerance — tolerance is what the part is allowed to be, uncertainty is how well you can verify whether it meets that tolerance. The two interact through guard bands and decision rules; if your uncertainty is large relative to the tolerance, your pass/fail call is unreliable.

A complete measurement result has the shape:

y = 25.000 mm ± 0.004 mm (k=2, ~95% confidence)

Three pieces: the measured value, the expanded uncertainty, and the coverage factor that explains what the expanded uncertainty means. Drop any one of them and the result is incomplete.

Where Uncertainty Comes From: Type A and Type B

The GUM splits uncertainty contributions into two evaluation methods.

Type A is evaluated by statistical analysis of repeated observations. If you measure the same gauge block 10 times with the same micrometer, the standard deviation of those readings — divided by the square root of n for the mean — is a Type A standard uncertainty. It captures the random scatter the measurement actually exhibits in your hands, on your bench, today.
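As a minimal sketch of that Type A evaluation (the readings below are illustrative, not taken from the worked example later in this guide):

```python
import math
import statistics

# Ten repeat readings of the same gauge block, in mm (illustrative values)
readings = [25.0012, 25.0009, 25.0011, 25.0013, 25.0008,
            25.0010, 25.0012, 25.0007, 25.0011, 25.0010]

s = statistics.stdev(readings)         # sample standard deviation: the Type A standard uncertainty
u_mean = s / math.sqrt(len(readings))  # standard uncertainty of the mean of the 10 readings
```

Use `s` if you report a single reading, `u_mean` if you report the average of the series.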

Type B is evaluated by any other means: the manufacturer accuracy spec, the upstream calibration certificate on your reference standard, the instrument's digital resolution, room temperature drift away from 20 °C, documented judgement about between-calibration drift.

The GUM is explicit that Type A and Type B are not a hierarchy. Once each contribution is expressed as a standard uncertainty (one standard deviation), they combine the same way. Most working calibration budgets are Type B-dominated, because routine instruments are read once or twice — not enough repetition for a meaningful Type A.

How Do You Calculate Measurement Uncertainty?

The GUM method is a five-step procedure. The interactive Measurement Uncertainty Calculator does the arithmetic; understanding the steps tells you what to type in.

Step 1 — Define the measurand and the measurement model

Write down what you are measuring (the measurand, y) and how it depends on the inputs you can quantify (x₁, x₂, ... xₙ). For a simple direct measurement — gauge block length under a micrometer — the model is y = x, where x is the indicated reading. For a derived quantity like density, the model is y = m / V, and the inputs are mass and volume.

Step 2 — Identify every input contribution

List everything that affects the result. The standard set for a typical instrument verification includes:

  • Repeatability — the scatter in repeat readings (Type A)
  • Instrument resolution — the smallest division on the readout (Type B, rectangular)
  • Instrument accuracy — the manufacturer's stated accuracy spec (Type B)
  • Reference standard uncertainty — from the upstream calibration certificate (Type B, usually expanded at k=2)
  • Reference standard drift — between calibration and use (Type B)
  • Environmental effects — temperature, humidity, pressure where they matter (Type B)
  • Operator effects — only if reproducibility studies justify a separate term

The list is a starting point, not a quota: if a contribution does not apply to your measurement, the GUM does not require you to invent one. If a known effect is negligible, document it and move on.

Step 3 — Convert each contribution to a standard uncertainty

A standard uncertainty is the contribution expressed as one standard deviation. The conversion depends on what you started with and the probability distribution that best describes it:

  • Rectangular distribution (a half-width limit, e.g. resolution or a max accuracy spec): divide by √3
  • Triangular distribution (a half-width with central tendency): divide by √6
  • U-shape distribution (a half-width with edge tendency, e.g. temperature cycling): divide by √2
  • Normal distribution at one standard deviation: use as-is
  • Expanded uncertainty from an upstream certificate at k=2: divide by 2

Type A repeatability is already a standard deviation, so it goes in as-is (or as the standard deviation of the mean, σ/√n, if you are reporting an average).
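The divisor table above can be sketched in a few lines of Python; the distribution names are this guide's labels, not a standard API:

```python
import math

# Divisors from the distribution list above
DIVISORS = {
    "rectangular": math.sqrt(3),   # half-width limit, e.g. resolution or accuracy spec
    "triangular":  math.sqrt(6),   # half-width with central tendency
    "u_shape":     math.sqrt(2),   # half-width with edge tendency, e.g. temperature cycling
    "normal_1sd":  1.0,            # already one standard deviation
    "expanded_k2": 2.0,            # expanded uncertainty quoted at k=2
}

def standard_uncertainty(value, distribution):
    """Scale a quoted value down to one standard deviation."""
    return value / DIVISORS[distribution]

# e.g. a 0.003 mm accuracy spec treated as a rectangular half-width:
u = standard_uncertainty(0.003, "rectangular")  # ≈ 0.00173 mm
```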

Step 4 — Combine in root-sum-square

The combined standard uncertainty u_c(y) is the root-sum-square of the scaled contributions, weighted by sensitivity coefficients c_i = ∂y / ∂x_i:

u_c(y) = sqrt( Σ ( c_i · u(x_i) )² )

For a direct measurement where the inputs are already in the same units as the result, every sensitivity coefficient is 1 and the formula reduces to the familiar root-sum-square of the standard uncertainties. For derived measurements, the sensitivity coefficients matter — they tell you how much a unit change in each input shifts the result.
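For a derived quantity the sensitivity coefficients are the partial derivatives of the measurement model. A minimal sketch for the density model y = m / V, with illustrative input values and standard uncertainties:

```python
import math

def combined_standard_uncertainty(contribs):
    """Root-sum-square of sensitivity-weighted standard uncertainties.

    contribs: list of (sensitivity_coefficient, standard_uncertainty) pairs.
    """
    return math.sqrt(sum((c * u) ** 2 for c, u in contribs))

# Density y = m / V: c_m = dy/dm = 1/V, c_V = dy/dV = -m/V^2
m, V = 100.0, 12.5      # g, cm^3  (illustrative values)
u_m, u_V = 0.05, 0.02   # standard uncertainties of the inputs

c_m = 1.0 / V
c_V = -m / V ** 2
u_c = combined_standard_uncertainty([(c_m, u_m), (c_V, u_V)])  # ≈ 0.0134 g/cm^3
```

Note how the volume term dominates even though u_V is smaller than u_m: the sensitivity coefficient, not the raw input uncertainty, decides the contribution.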

Step 5 — Apply a coverage factor for the expanded uncertainty

The expanded uncertainty U is the combined standard uncertainty multiplied by a coverage factor:

U = k · u_c(y)

For approximately normal distributions with adequate effective degrees of freedom, k=2 gives roughly 95% coverage probability and is the default recommended by UKAS M3003 for accredited UK calibration certificates. ISO/IEC 17025 §7.6 requires the coverage factor to be stated explicitly — you cannot just write "U = 0.004 mm" without saying what k was. If the effective degrees of freedom are low (small samples, dominant Type A terms), the Welch–Satterthwaite formula and a t-distribution give a more accurate k; in everyday calibration practice this rarely changes the final number meaningfully.
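Where effective degrees of freedom do matter, the Welch–Satterthwaite formula combines the degrees of freedom of each contribution: ν_eff = u_c⁴ / Σ(u_i⁴ / ν_i). A sketch, assuming the convention that Type B terms with no statistical basis get infinite degrees of freedom:

```python
import math

def welch_satterthwaite(contribs):
    """Effective degrees of freedom for an uncertainty budget.

    contribs: list of (standard_uncertainty, degrees_of_freedom) pairs;
    use math.inf for Type B terms treated as exactly known.
    """
    u_c_sq = sum(u ** 2 for u, _ in contribs)
    denom = sum(u ** 4 / nu for u, nu in contribs if math.isfinite(nu))
    return math.inf if denom == 0 else u_c_sq ** 2 / denom

# One Type A term from 10 readings (nu = 9) alongside an equal Type B term:
nu_eff = welch_satterthwaite([(1.0, 9.0), (1.0, math.inf)])  # 36 effective dof
```

With ν_eff this large, the t-distribution value for 95% coverage is already close to 2, which is why everyday budgets rarely need to depart from k=2.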

A Worked Example: Verifying a Digital Micrometer

A quality manager wants to verify a digital micrometer at the 25 mm point against a calibrated gauge block. The contributions, all in mm:

Contribution                         Value    Distribution   Divisor   Standard uncertainty
Repeatability (σ of 10 readings)     0.0015   Normal         1         0.0015
Resolution (half of 0.001)           0.0005   Rectangular    √3        0.00029
Micrometer accuracy spec             0.003    Rectangular    √3        0.00173
Reference gauge block (U at k=2)     0.0006   Normal         2         0.00030
Temperature deviation from 20 °C     0.0008   Rectangular    √3        0.00046

Combined standard uncertainty:

u_c = sqrt(0.0015² + 0.00029² + 0.00173² + 0.00030² + 0.00046²) ≈ 0.0024 mm

Expanded uncertainty at k=2:

U = 2 × 0.0024 = 0.0048 mm ≈ 0.005 mm

Reported result: 25.000 mm ± 0.005 mm (k=2, ~95% confidence).

The micrometer's own accuracy spec dominates the budget — which is typical for a working instrument verified against an upstream UKAS-accredited gauge block. The result is honest because every contribution is on the page.
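The whole budget can be reproduced in a few lines; the figures below are exactly those in the table, and the script is a working aid rather than a certified calculation:

```python
import math

# The five contributions from the worked example: (name, value, divisor), in mm
budget = [
    ("repeatability",      0.0015, 1.0),
    ("resolution",         0.0005, math.sqrt(3)),
    ("accuracy spec",      0.003,  math.sqrt(3)),
    ("reference block U",  0.0006, 2.0),
    ("temperature",        0.0008, math.sqrt(3)),
]

u_c = math.sqrt(sum((value / divisor) ** 2 for _, value, divisor in budget))
U = 2 * u_c  # expanded uncertainty at k=2

print(f"u_c = {u_c:.4f} mm, U = {U:.4f} mm")  # u_c ≈ 0.0024 mm, U ≈ 0.0047 mm
```

Rounding U up to one significant figure gives the 0.005 mm reported above.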

Measurement Traceability: What It Adds to Uncertainty

Measurement traceability — the unbroken chain of comparisons linking your reading back to a national or international standard — is what makes an uncertainty statement meaningful. If your reference gauge block has a certificate from an unaccredited supplier with no documented chain, the 0.0006 mm uncertainty you typed into the budget is fiction.

In the UK, the chain terminates at the National Physical Laboratory (NPL). The shortest route is to use a UKAS-accredited calibration laboratory for your reference standards — UKAS accreditation against ISO/IEC 17025:2017 means the laboratory has been independently assessed for technical competence, and its certificates carry documented traceability automatically. For in-house calibrations, you demonstrate the chain yourself: hold a UKAS-accredited certificate on the reference standard, and document how it transfers to the working instrument. See the calibration certificate guide for what a properly traceable certificate must contain.

Common Mistakes in Uncertainty Calculation

  • Mixing expanded and standard uncertainties. Pulling U from a k=2 certificate and dropping it into the RSS as if it were a standard uncertainty — always divide by k first.
  • Forgetting the distribution divisor. A manufacturer's accuracy spec is a half-width limit, not a standard deviation; rectangular contributions divide by √3.
  • Double-counting resolution. If repeatability already captures digit-jump scatter, pick the larger of repeatability and resolution rather than adding both.
  • Omitting sensitivity coefficients in derived measurements. Density, ratios, concentrations — anything where input units differ from result units needs the partial derivatives applied.
  • Reporting U without k. "U = 0.005 mm" is incomplete; "U = 0.005 mm at k=2 (~95% confidence)" is complete, and ISO/IEC 17025 §7.6 requires it.
  • Treating uncertainty as a one-off. A budget is a live record — re-evaluate when the reference standard, environment, or instrument changes materially.

When You Need a Formal Budget

ISO/IEC 17025:2017 §7.6 is unambiguous: calibration laboratories must estimate measurement uncertainty for every calibration result they report, and UKAS-accredited certificates carry a documented calibration and measurement capability (CMC) that is peer-reviewed during assessment.

ISO 9001:2015 Clause 7.1.5.2 is softer — it requires traceability where it matters for product or service conformity, which in practice means knowing the uncertainty of measurements used for tolerance decisions. Working rule: if a measurement gates a pass/fail call against a specification, you need an uncertainty estimate to apply a guard band or decision rule. If the measurement is purely informational, an approximate estimate is usually sufficient. Document the reasoning either way.
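As an illustration of that working rule, here is one simple decision rule: shrink the tolerance by the expanded uncertainty on each side and pass only readings inside the shrunk band. This is one common guard-band scheme, not the only one; ILAC-G8 describes several alternatives.

```python
def guarded_pass(measured, lower_tol, upper_tol, U):
    """Pass only if the reading sits inside the tolerance shrunk by the
    expanded uncertainty U on each side (a simple guard band)."""
    return (lower_tol + U) <= measured <= (upper_tol - U)

# 25.000 mm reading, tolerance 25.000 ± 0.010 mm, U = 0.005 mm:
guarded_pass(25.000, 24.990, 25.010, 0.005)  # True: comfortably inside
guarded_pass(25.007, 24.990, 25.010, 0.005)  # False: in tolerance, but inside the guard band
```

The second call is the case the guide warns about: the reading is within tolerance, but the uncertainty is too large to be confident the true value is.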

Outside accredited laboratories, the common use cases are internal calibration of working standards, instrument verifications, and sense-checking an upstream supplier's certificate. The Measurement Uncertainty Calculator handles all three: pick the contributions, choose the distribution for each, set the coverage factor, and read off the combined and expanded uncertainty. It is a working aid, not an accreditation deliverable — for a UKAS-issued certificate, your accredited provider remains responsible for the certified budget.

Scope and Disclaimer

This guide applies to UK organisations operating under ISO 9001, ISO/IEC 17025, or UKAS-accredited calibration laboratory regimes. Specific requirements may vary by sector and certification body. This is not legal or compliance advice — consult your accredited calibration provider or assessor for budgets that will appear on a UKAS-issued certificate.

Frequently asked questions

What is measurement uncertainty in plain English?
Measurement uncertainty is a number that describes how confident you are in a measured value. It is not the error in a single reading — it is the range of values that could reasonably be assigned to the measurand given everything that affects the measurement: the instrument's resolution, the calibration of the reference standard, environmental conditions, operator effects, and statistical variation in repeat readings. A calibration result without an uncertainty statement is unverifiable, which is why ISO/IEC 17025:2017 §7.6 requires every calibration certificate to report it.
How do you calculate measurement uncertainty?
Identify every input that affects the measured value, evaluate the standard uncertainty of each one (Type A from statistics of repeat readings, Type B from manufacturer specs, calibration certificates, resolution, drift), apply a sensitivity coefficient if the input is not in the same units as the result, combine in root-sum-square to get the combined standard uncertainty u_c, then multiply by a coverage factor (usually k=2 for ~95% confidence) to get the expanded uncertainty U. The method is defined in JCGM 100:2008 (the GUM) and applied in the UK via UKAS M3003.
What is the difference between Type A and Type B uncertainty?
Type A uncertainty is evaluated by statistical analysis of repeated observations — typically the standard deviation of a series of measurements divided by the square root of the number of readings. Type B uncertainty is evaluated by any other means: manufacturer specifications, an upstream calibration certificate, instrument resolution, drift estimates, environmental influence, or documented judgement. The GUM is explicit that there is no qualitative hierarchy between Type A and Type B once both are expressed as standard uncertainties at one standard deviation — they are combined the same way.
Why is the coverage factor k=2 used?
The coverage factor expands the combined standard uncertainty into an interval expected to contain most of the values reasonably attributable to the measurand. For an approximately normal distribution with sufficient effective degrees of freedom, k=2 corresponds to roughly 95% coverage probability. UKAS M3003 recommends k=2 as the default for UK-accredited calibration certificates, and ISO/IEC 17025 §7.6 requires the coverage factor to be stated explicitly so the reader can interpret the reported interval.
Does ISO 9001 require an uncertainty budget on every measurement?
ISO 9001:2015 Clause 7.1.5.2 requires measurement traceability where it matters for product or service conformity, which in practice means knowing the uncertainty of measurements used for tolerance decisions. A formal GUM-style budget is not always mandatory under ISO 9001 alone — but if you are using a measurement to pass or fail product against a specification, you need an uncertainty estimate to apply a guard band or decision rule. ISO/IEC 17025 §7.6 is stricter: every reported calibration result must include uncertainty.

Stop tracking calibration in spreadsheets

CalProof automates calibration scheduling, certificate management, and audit reporting for UK quality managers. From £29/mo. 14-day trial, no card required, cancel any time.