Measurement system analysis


Measurement system analysis (MSA) is a systematic evaluation of the measurement process—including instruments, procedures, operators, and environment—to quantify the amount and sources of variation introduced by the measurement system itself (AIAG 2010, p.5)[1]. Before you can improve a process, you need to measure it. But what if the measurement itself is unreliable? What if the same part measured twice gives different results? What if different operators measuring the same part get different readings? MSA answers these questions, distinguishing between variation from the process and variation from the measurement system.

The concept is foundational to quality management and Six Sigma. The Automotive Industry Action Group (AIAG) standardized MSA methods in their MSA Reference Manual, now in its fourth edition. Before accepting any data for process control or improvement, organizations must verify that the measurement system produces trustworthy results. As the quality axiom states: "Garbage in, garbage out."

Components of measurement variation

MSA examines multiple variation sources:

Repeatability

Equipment variation. Repeatability is the variation when the same operator measures the same part multiple times using the same gauge. If results differ, the gauge or procedure introduces variation[2].

Within-operator consistency. Good repeatability means consistent results when nothing changes—same person, same part, same tool.
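
For illustration, repeatability can be estimated from the spread of repeat readings taken by one operator on one part with one gauge. A minimal sketch with hypothetical readings (millimetres):

```python
import statistics

# Hypothetical example: one operator measures the same part five times
# with the same gauge (values in mm).
readings = [10.02, 10.01, 10.03, 10.02, 10.01]

# The standard deviation of these repeat readings estimates
# equipment variation (repeatability).
repeatability_sd = statistics.stdev(readings)
print(f"Repeatability (std dev): {repeatability_sd:.4f} mm")
```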

Reproducibility

Appraiser variation. Reproducibility is the variation when different operators measure the same part using the same gauge. If operators get different results, operator technique or training is a source of variation.

Between-operator consistency. Good reproducibility means different operators get the same answers[3].
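
As a rough illustration with hypothetical readings, reproducibility shows up as differences between the operators' averages on the same part; the formal calculation, which also accounts for repeatability, is covered under Gage R&R below.

```python
import statistics

# Hypothetical readings (mm): three operators each measure the same part twice.
readings_by_operator = {
    "A": [10.02, 10.01],
    "B": [10.05, 10.06],
    "C": [10.01, 10.02],
}

# Differences between operator averages point to reproducibility
# (appraiser variation); spread within each operator is repeatability.
operator_means = {op: statistics.mean(vals) for op, vals in readings_by_operator.items()}
between_operator_sd = statistics.stdev(operator_means.values())
print(operator_means)
print(f"Spread of operator averages: {between_operator_sd:.4f} mm")
```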

Gage R&R

Combined variation. Gage Repeatability and Reproducibility (Gage R&R) combines equipment and operator variation to show total measurement system variation.

The key metric. Gage R&R studies are the most common MSA technique, quantifying how much of observed variation comes from the measurement system versus actual part differences.
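
In the AIAG approach, equipment variation (EV, repeatability) and appraiser variation (AV, reproducibility) are combined in quadrature:

  GRR = sqrt(EV^2 + AV^2)

The result is then compared with total variation or with the tolerance, as described under Interpreting results below.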

Additional properties

MSA evaluates other characteristics:

Bias

Systematic error. Bias is the difference between the average of measurements and the true value. A gauge that consistently reads 0.001 inch high has bias.

Linearity

Bias consistency across range. Linearity examines whether bias changes across the measurement range. A gauge accurate at small measurements may be inaccurate at large measurements[4].
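
One common way to check linearity is to compute bias at several reference values across the range and fit a straight line through bias versus reference value; a slope near zero suggests the bias is constant. A minimal sketch with hypothetical data:

```python
import numpy as np

# Hypothetical linearity data: reference (master) values across the
# measurement range, and the average gauge reading at each reference.
reference = np.array([2.0, 4.0, 6.0, 8.0, 10.0])
avg_reading = np.array([2.002, 4.003, 6.006, 8.009, 10.011])

bias = avg_reading - reference                      # bias at each point in the range
slope, intercept = np.polyfit(reference, bias, 1)   # fit bias vs. reference value

print("Bias at each reference value:", np.round(bias, 4))
print(f"Linearity slope: {slope:.5f} (near zero means bias is constant)")
```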

Stability

Time consistency. Stability is whether measurements of the same part remain consistent over time. A gauge that drifts gives results that stay consistent only over short periods.
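
Stability is typically checked by measuring a master or reference part at regular intervals and plotting the results on a control chart; drift or out-of-control points signal instability. A minimal sketch using an individuals chart with hypothetical weekly readings:

```python
import statistics

# Hypothetical weekly measurements of the same master part (mm).
weekly = [10.01, 10.02, 10.00, 10.02, 10.03, 10.05, 10.06, 10.08]

mean = statistics.mean(weekly)
# Average moving range between consecutive readings.
mr_bar = statistics.mean(abs(b - a) for a, b in zip(weekly, weekly[1:]))

# Individuals-chart control limits (2.66 = 3 / d2 for subgroups of 2).
ucl = mean + 2.66 * mr_bar
lcl = mean - 2.66 * mr_bar

for week, x in enumerate(weekly, start=1):
    flag = "ok" if lcl <= x <= ucl else "out of control"
    print(f"week {week}: {x:.2f} ({flag})")
print(f"control limits: {lcl:.3f} .. {ucl:.3f}")
```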

Discrimination

Resolution. The measurement system must have enough discrimination to detect meaningful differences. A scale that reads only whole pounds can't detect half-pound changes.

Conducting a Gage R&R study

The most common MSA technique:

Study design

Parts, operators, trials. A typical study uses 10 parts (spanning the process range), 3 operators, and 2-3 trials per operator per part[5].

Randomization. Parts are measured in random order to prevent memory effects.

Blinding. Operators should not know which part they're measuring to prevent bias.
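
A small sketch of generating a randomized run order for such a crossed study; the part codes are arbitrary labels so operators cannot tell which part they are measuring:

```python
import random

PARTS = [f"P{i:02d}" for i in range(1, 11)]   # 10 parts spanning the process range
OPERATORS = ["Op1", "Op2", "Op3"]             # 3 operators
TRIALS = 2                                    # trials per operator per part

# Build every part/operator/trial combination, then randomize the order
# separately for each operator so parts are not measured in a fixed sequence.
run_order = []
for operator in OPERATORS:
    runs = [(operator, part, trial) for part in PARTS for trial in range(1, TRIALS + 1)]
    random.shuffle(runs)
    run_order.extend(runs)

for n, (operator, part, trial) in enumerate(run_order, start=1):
    print(f"run {n:3d}: {operator} measures {part} (trial {trial})")
```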

Analysis

ANOVA method. Analysis of variance partitions total variation into part variation, repeatability, reproducibility, and interaction effects.
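
A sketch of the ANOVA approach using statsmodels, assuming the study data sit in a long-format table (hypothetical file gage_rr_data.csv) with columns part, operator, and measurement; the variance components follow the standard expected-mean-square formulas for a balanced crossed random-effects design:

```python
import pandas as pd
from statsmodels.formula.api import ols
from statsmodels.stats.anova import anova_lm

# One row per measurement: columns 'part', 'operator', 'measurement'
# (hypothetical long-format Gage R&R data).
df = pd.read_csv("gage_rr_data.csv")

p = df["part"].nunique()        # number of parts
o = df["operator"].nunique()    # number of operators
r = len(df) // (p * o)          # trials per part/operator cell (assumes a balanced study)

# Two-way ANOVA with interaction: part, operator, part x operator, error.
model = ols("measurement ~ C(part) * C(operator)", data=df).fit()
table = anova_lm(model, typ=2)
ms = table["sum_sq"] / table["df"]   # mean squares

ms_part = ms["C(part)"]
ms_oper = ms["C(operator)"]
ms_inter = ms["C(part):C(operator)"]
ms_error = ms["Residual"]

# Variance components from expected mean squares (negative estimates set to 0).
var_repeat = ms_error
var_inter = max((ms_inter - ms_error) / r, 0)
var_oper = max((ms_oper - ms_inter) / (p * r), 0)
var_part = max((ms_part - ms_inter) / (o * r), 0)

var_reprod = var_oper + var_inter
var_grr = var_repeat + var_reprod
print({"repeatability": var_repeat, "reproducibility": var_reprod,
       "GRR": var_grr, "part-to-part": var_part})
```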

Range method. A simpler approach uses ranges across trials and operators to estimate variation components.
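
A sketch of the average-and-range calculation for the common 10-part, 3-operator, 2-trial layout, using simulated readings; the K constants below correspond to that layout and should be checked against the AIAG tables for any other layout:

```python
import numpy as np

# Simulated data: readings[operator][part][trial], 3 operators x 10 parts x 2 trials.
rng = np.random.default_rng(1)
true_parts = np.linspace(9.95, 10.05, 10)
readings = true_parts[None, :, None] + rng.normal(0, 0.005, size=(3, 10, 2))

n_oper, n_parts, n_trials = readings.shape

# Equipment variation (repeatability): average range across trials.
ranges = readings.max(axis=2) - readings.min(axis=2)   # one range per operator/part
r_bar = ranges.mean()
K1 = 0.8862            # AIAG constant for 2 trials (assumed layout)
EV = r_bar * K1

# Appraiser variation (reproducibility): range of operator averages.
oper_means = readings.mean(axis=(1, 2))
x_diff = oper_means.max() - oper_means.min()
K2 = 0.5231            # AIAG constant for 3 operators
AV = max((x_diff * K2) ** 2 - EV ** 2 / (n_parts * n_trials), 0) ** 0.5

# Part-to-part variation: range of part averages.
part_means = readings.mean(axis=(0, 2))
R_p = part_means.max() - part_means.min()
K3 = 0.3146            # AIAG constant for 10 parts
PV = R_p * K3

GRR = (EV ** 2 + AV ** 2) ** 0.5
TV = (GRR ** 2 + PV ** 2) ** 0.5
print(f"EV={EV:.4f}  AV={AV:.4f}  GRR={GRR:.4f}  PV={PV:.4f}  %GRR={100*GRR/TV:.1f}%")
```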

Interpreting results

Gage R&R percentage. The key result is Gage R&R as a percentage of total variation or tolerance:

  • Under 10%: Acceptable measurement system
  • 10-30%: May be acceptable depending on application
  • Over 30%: Measurement system needs improvement[6]

Number of distinct categories. The number of distinct categories (ndc) indicates how many groups of parts the measurement system can reliably distinguish. It should be at least 5.
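
Given variance-component estimates (for example from the ANOVA sketch above), the percentage and the number of distinct categories can be computed as follows; the thresholds mirror the guidelines listed above:

```python
import math

def assess_gage_rr(var_grr: float, var_part: float) -> dict:
    """Classify a measurement system from Gage R&R and part-to-part variances."""
    sd_grr = math.sqrt(var_grr)
    sd_part = math.sqrt(var_part)
    sd_total = math.sqrt(var_grr + var_part)

    pct_grr = 100 * sd_grr / sd_total      # %GRR of total variation
    ndc = int(1.41 * sd_part / sd_grr)     # number of distinct categories

    if pct_grr < 10:
        verdict = "acceptable"
    elif pct_grr <= 30:
        verdict = "may be acceptable depending on application"
    else:
        verdict = "needs improvement"
    return {"%GRR": round(pct_grr, 1), "ndc": ndc, "verdict": verdict}

# Hypothetical variance components (e.g. from an ANOVA study).
print(assess_gage_rr(var_grr=0.0004, var_part=0.0095))
```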

Improvement actions

When MSA reveals problems:

Repeatability issues

Equipment focus. If repeatability dominates, focus on the gauge—calibration, maintenance, resolution, or replacement.

Procedure refinement. Inconsistent procedures can also cause poor repeatability.

Reproducibility issues

Training. If reproducibility dominates, focus on operators—training, standardization, technique[7].

Procedure clarity. Ambiguous procedures allow different interpretations.

Prerequisites

Before conducting MSA:

Calibration. The gauge must be calibrated against known standards. Without calibration, MSA results are meaningless.

Definition clarity. The characteristic being measured must be clearly defined.

Representative parts. Study parts must span the process range to assess discrimination[8].

Industry requirements

MSA is required in many contexts:

Automotive. IATF 16949 requires MSA studies for the measurement systems referenced in the control plan.

Six Sigma. MSA is a standard step in the Measure phase of DMAIC.

ISO 9001. While not specifying MSA methods, ISO 9001 requires organizations to ensure measurement validity.


Recommended articles

Quality management, Six Sigma, Statistical process control, Calibration

Footnotes

  1. AIAG (2010), MSA Reference Manual, p. 5
  2. Montgomery D.C. (2020), Statistical Quality Control, pp. 567-582
  3. ASQ (2023), MSA Training Materials
  4. AIAG (2010), MSA Reference Manual, pp. 34-48
  5. MoreSteam (2023), MSA Study Design
  6. AIAG (2010), MSA Reference Manual, pp. 78-92
  7. Montgomery D.C. (2020), Statistical Quality Control, pp. 598-612
  8. ASQ (2023), MSA Prerequisites

Author: Sławomir Wawak