Measurement Systems Analysis Glossary

Reprinted with permission from the MSA Manual (DaimlerChrysler, Ford Motor Company, General Motors Supplier Quality Requirements Task Force*).

Accuracy
The closeness of agreement between an observed value and the accepted reference value.

Analysis of Variance
A statistical method (ANOVA) often used in designed experiments (DOE) to analyze variable data from multiple groups in order to compare means and analyze sources of variation.

Apparent Resolution
The size of the least increment on the measurement instrument is the apparent resolution. This value is typically used in literature and advertising to classify the measurement instrument. The number of data categories can be determined by dividing the size into the expected process distribution spread (6σ).
NOTE: The number of digits displayed or reported does not always indicate the resolution of the instrument. For example, parts measured as 29.075, 29.080, 29.095, etc., are recorded as five (5) digit measurements. However, the instrument may not have a resolution of .001 but rather .005.
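The category count described above (the 6σ process spread divided by the size of the least increment) can be sketched as follows; the resolution and process standard deviation here are hypothetical values chosen for illustration:

```python
# Hypothetical values: a gage with a 0.005 least increment measuring a
# process whose standard deviation is estimated at 0.010.
resolution = 0.005           # apparent resolution (least increment)
process_sigma = 0.010        # estimated process standard deviation
spread = 6 * process_sigma   # expected process distribution spread (6 sigma)
categories = spread / resolution
print(round(categories))  # 12
```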

Appraiser Variation
The variation in average measurements on the same part (measurand) between different appraisers (operators) using the same measuring instrument and method in a stable environment. Appraiser variation (AV) is one of the common sources of measurement system variation (error) that results from differences in operator skill or technique using the same measurement system. Appraiser variation is commonly assumed to be the “reproducibility error” associated with a measurement system; this is not always true (see Reproducibility).

Bias
The difference between the observed average of measurements (trials under repeatability conditions) and a reference value; historically referred to as accuracy. Bias is evaluated and expressed at a single point within the operating range of the measurement system.
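A minimal sketch of this computation, using hypothetical repeated trials on one part and an assumed reference value:

```python
# Hypothetical repeated trials on the same part under repeatability
# conditions, compared against a traceable reference value.
observations = [10.02, 10.01, 10.03, 10.02, 10.02]
reference_value = 10.00
observed_average = sum(observations) / len(observations)
bias = observed_average - reference_value
print(round(bias, 3))  # 0.02
```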

Calibration
A set of operations that establish, under specified conditions, the relationship between a measuring device and a traceable standard of known reference value and uncertainty. Calibration may also include steps to detect, correlate, report, or eliminate by adjustment any discrepancy in accuracy of the measuring device being compared.

Calibration Interval
A specified amount of time or set of conditions between which the calibration parameters of a measuring device are considered valid.

Capability
An estimate of the combined variation of measurement errors (random and systematic) based on a short-term assessment of the measurement system.

Confidence Interval
The range of values expected to include (at some desired probability called a confidence level) the true value of a parameter.
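As an illustration, a confidence interval for a mean can be sketched with the normal approximation (z = 1.96 for a 95% confidence level); the data values are hypothetical:

```python
import statistics

# Hypothetical measurements; 95% confidence interval for the mean using the
# normal approximation (z = 1.96) rather than a t quantile.
data = [9.9, 10.1, 10.0, 10.2, 9.8, 10.0]
mean = statistics.mean(data)
sem = statistics.stdev(data) / len(data) ** 0.5  # standard error of the mean
lower, upper = mean - 1.96 * sem, mean + 1.96 * sem
print(round(lower, 2), round(upper, 2))
```

For small samples a t quantile would give a slightly wider interval.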

Control Chart
A graph of a process characteristic, based on sample measurements in time order, used to display the behavior of a process, identify patterns of process variation, assess stability, and indicate process direction.
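One common form is the individuals chart, where the control limits are set at three sigma from the center line and sigma is estimated from the average moving range. A sketch with hypothetical data (d2 = 1.128 is the standard constant for moving ranges of size 2):

```python
import statistics

# Hypothetical individual measurements in time order.
data = [10.0, 10.2, 9.9, 10.1, 10.0, 10.3, 9.8, 10.1]
center = statistics.mean(data)
# Estimate sigma from the average moving range (d2 = 1.128 for ranges of 2).
mr_bar = statistics.mean(abs(a - b) for a, b in zip(data, data[1:]))
sigma = mr_bar / 1.128
ucl, lcl = center + 3 * sigma, center - 3 * sigma
out_of_control = [x for x in data if x > ucl or x < lcl]
print(out_of_control)  # [] -> no points beyond the 3-sigma limits
```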

Data
A collection of observations under a set of conditions that may be either variable (a quantified value and unit of measure) or discrete (attribute or count data such as pass/fail, good/bad, go/no-go, etc.).

Designed Experiment
A planned study involving statistical analysis of a series of tests in which purposeful changes are made to process factors, and the effects observed, in order to determine the relationship of process variables and improve the process.

Discrimination
Also known as the smallest readable unit, discrimination is the measurement resolution, scale limit, or smallest detectable unit of the measurement device and standard. It is an inherent property of gage design and is reported as a unit of measurement or classification. The number of data categories is often referred to as the discrimination ratio since it describes how many classifications can be reliably distinguished given the observed process variation.

Distinct Data Categories
The number of data classifications or categories that can be reliably distinguished, determined by the effective resolution of the measurement system and the part variation from the observed process for a given application. See ndc.

Effective Resolution
The size of the data category when the total measurement system variation is considered is the effective resolution. The size is determined by the length of the confidence interval based on the measurement system variation. The number of distinct categories, ndc, can be determined by dividing the size into the expected process distribution spread. For the effective resolution, a standard estimate of this ndc (at the 97% confidence level) is 1.41[PV/GRR]. (See Wheeler, 1989, for an alternate interpretation.)

F ratio
A statistic representing the mathematical ratio of the between-group mean square error to the within-group mean square error for a set of data, used to assess the probability of random occurrence at a selected level of confidence.
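The ratio can be sketched for a simple one-way layout; the three groups of measurements below are hypothetical (e.g., three appraisers measuring the same part):

```python
import statistics

# Hypothetical measurements from three groups of equal size.
groups = [[10.1, 10.2, 10.0], [10.4, 10.5, 10.6], [10.0, 9.9, 10.1]]
n = len(groups[0])  # equal group sizes assumed
grand_mean = statistics.mean(x for g in groups for x in g)
# Between-group mean square: group-mean spread about the grand mean.
ms_between = n * sum((statistics.mean(g) - grand_mean) ** 2
                     for g in groups) / (len(groups) - 1)
# Within-group mean square: spread of observations about their group means.
ms_within = sum((x - statistics.mean(g)) ** 2
                for g in groups for x in g) / (len(groups) * (n - 1))
f_ratio = ms_between / ms_within
print(round(f_ratio, 1))  # 21.0
```

A large F ratio (relative to the F distribution's critical value) suggests the group means differ by more than random variation alone would explain.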

Gage R&R (GRR)
An estimate of the combined variation of repeatability and reproducibility for a measurement system. The GRR variance is equal to the sum of within-system and between-system variances.
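Because the variances add, the combined GRR standard deviation is the root sum of squares of the two components; a sketch with hypothetical estimates:

```python
# Hypothetical standard-deviation estimates from a GRR study.
ev = 0.030   # repeatability (within-system / equipment variation)
av = 0.040   # reproducibility (between-system / appraiser variation)
# Variances add, so the combined GRR standard deviation is the root sum:
grr = (ev ** 2 + av ** 2) ** 0.5
print(round(grr, 3))  # 0.05
```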

Histogram
A graphical representation (bar chart) of the frequency of grouped data to provide a visual evaluation of the data distribution.

In Control
State of a process when it exhibits only random, common cause variation (as opposed to chaotic, assignable or special cause variation). A process operating with only random variation is statistically stable.

Independence
The occurrence of one event or variable has no effect on the probability that another event or variable will occur.

Independent and Identically Distributed
Commonly referred to as “iid”. A homogeneous group of data which are independent and randomly distributed in one common distribution.

Interaction
A significant combined effect or outcome resulting from two or more variables; for example, non-additivity between appraiser and part, where appraiser differences depend on the part being measured.

Linearity
The difference in bias errors over the expected operating range of the measurement system. In other terms, linearity expresses the correlation of multiple and independent bias errors over the operating range.

Long-Term Capability
Statistical measure of the within-subgroup variation exhibited by a process over a long period of time. This differs from performance because it does not include the between-subgroup variation.

Measurand
The particular quantity or subject to be measured under specified conditions; a defined set of specifications for a measurement application.

Measurement System
A collection of instruments or gauges, standards, operations, methods, fixtures, software, personnel, environment and assumptions used to quantify a unit of measurement or fix assessment to the feature characteristic being measured; the complete process used to obtain a measurement.

Measurement System Error
The combined variation due to gage bias, repeatability, reproducibility, stability and linearity.

Metrology
The science of measurement.

ndc
Number of distinct categories; ndc = 1.41(PV/GRR).
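A sketch of the ndc formula with hypothetical part-variation and GRR estimates; the result is conventionally truncated to a whole number:

```python
# Hypothetical standard-deviation estimates for part variation and GRR.
pv = 0.12    # part variation
grr = 0.05   # gage R&R
ndc = int(1.41 * (pv / grr))  # truncated to a whole number of categories
print(ndc)  # 3
```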

Non-replicable
The inability to make repeated measurements on the same sample or component due to the dynamic nature of the measurand.

Number of Distinct Categories
See ndc.

Out-of-Control
State of a process when it exhibits chaotic, assignable, or special cause variation. A process that is out of control is statistically unstable.

Part Variation
Related to measurement systems analysis, part variation (PV) represents the expected part-to-part and time-to-time variation for a stable process.

Part-to-Part Variation
Piece-to-piece variation due to measuring different parts.

Performance
An estimate of the combined variation of measurement errors (random and systematic) based on a long-term assessment of the measurement system; includes all significant and determinable sources of variation over time.

Precision
The net effect of discrimination, sensitivity and repeatability over the operating range (size, range and time) of the measurement system. In some organizations precision is used interchangeably with repeatability. In fact, precision is most often used to describe the expected variation of repeated measurements over the range of measurement; that range may be size or time. The use of the more descriptive component terms is generally preferred over the term “precision”.

Probability
An estimate (expressed as a proportion or fraction), based on a particular distribution of collected data, describing the chance that a specific event will occur. Probability estimates range from 0 (an impossible event) to 1 (a certainty).

Process
Set of conditions or causes working together to produce an outcome.

Process Control
Operational state when the purpose of measurement and decision criteria apply to real-time production to assess process stability and compare the measurand or feature to the natural process variation; the measurement result indicates the process is either stable and “in-control” or “out-of-control”.

Product Control
Operational state when the purpose of measurement and decision criteria is to assess the measurand or feature for compliance to a specification; the measurement result is either “in-tolerance” or “out-of-tolerance”.

Reference Value
A measurand value that is recognized and serves as an agreed reference or master value for comparison:
·  A theoretical or established value based on scientific principles;
·  An assigned value based on some national or international organization;
·  A consensus value based on collaborative experimental work under the auspices of a scientific or engineering group; or
·  For a specific application, an agreed upon value obtained using an accepted reference method.
A value consistent with the definition of a specific quantity and accepted, sometimes by convention, as appropriate for a given purpose.

NOTE: Other terms used synonymously with reference value:
accepted reference value
accepted value
conventional value
conventional true value
assigned value
best estimate of the value
master value
master measurement

Regression Analysis
A statistical study of the relationship between two or more variables; a calculation to define the mathematical relationship between them.
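The simplest case, a least-squares fit of a line to two variables, can be sketched directly; the paired observations below are hypothetical:

```python
# Hypothetical paired observations of two variables.
x = [1.0, 2.0, 3.0, 4.0, 5.0]
y = [2.1, 3.9, 6.2, 7.8, 10.1]
mx, my = sum(x) / len(x), sum(y) / len(y)
# Least-squares estimates of slope and intercept for y = intercept + slope*x.
slope = (sum((a - mx) * (b - my) for a, b in zip(x, y))
         / sum((a - mx) ** 2 for a in x))
intercept = my - slope * mx
print(round(slope, 2), round(intercept, 2))  # 1.99 0.05
```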

Repeatability
The common cause, random variation resulting from successive trials under defined conditions of measurement. Often referred to as equipment variation (EV), although this is misleading. The best term for repeatability is within-system variation, when the conditions of measurement are fixed and defined – fixed part, instrument, standard, method, operator, environment, and assumptions. In addition to within-equipment variation, repeatability will include all within variation from the conditions in the measurement error model.

Replicable
The ability to make repeated measurements on the same sample or component where there is no significant physical change to the measurand or measurement environment.

Replication
Multiple test trials under repeatability (identical) conditions.

Reproducibility
The variation in the average of measurements caused by a normal condition(s) of change in the measurement process. Typically, it has been defined as the variation in average measurements of the same part (measurand) between different appraisers (operators) using the same measuring instrument and method in a stable environment. This is often true for manual instruments influenced by the skill of the operator. It is not true, however, for measurement processes (i.e., automated systems) where the operator is not a major source of variation. For this reason, reproducibility is referred to as the average variation between-systems or between-conditions of measurement.

Resolution
May apply to measurement resolution or effective resolution. The capability of the measurement system to detect and faithfully indicate even small changes of the measured characteristic. (See also discrimination.)
The resolution of a measurement system is δ if there is an equal probability that the indicated value of any part which differs from a reference part by less than δ will be the same as the indicated value of the reference part. The resolution of a measurement system is impacted by the measurement instrument as well as other sources of variation in the total measurement system.

Scatter Diagram
An X-Y plot of data to assess the relationship between two variables.

Sensitivity
Smallest input signal that results in a detectable (discernible) output signal for a measurement device. An instrument should be at least as sensitive as its unit of discrimination. Sensitivity is determined by inherent gage design and quality, in-service maintenance, and operating condition. Sensitivity is reported in units of measurement.

Significance level
A statistical level selected to test the probability of random outcomes; also associated with the risk, expressed as the alpha (α) risk, that represents the probability of a decision error.

Stability
Refers to both statistical stability of a measurement process and measurement stability over time. Both are vital for a measurement system to be adequate for its intended purpose. Statistical stability implies a predictable, underlying measurement process operating within common cause variation (in-control). Measurement stability (alias drift) addresses the necessary conformance to the measurement standard or reference over the operating life (time) of the measurement system.

Tolerance
Allowable deviation from a standard or nominal value that maintains fit, form, and function.

Uncertainty
A parameter associated with the result of a measurement that characterizes the dispersion of the values that could reasonably be attributed to the measurand (VIM); the range assigned to a measurement result that describes, within a defined level of confidence, the limits expected to contain the true measurement result. Uncertainty is a quantified expression of measurement reliability.

Unimodal
A contiguous group of data that has one mode.

* The Supplier Quality Task Force distributes the Measurement Systems Analysis (MSA) Reference Manual through The Automotive Industry Action Group (AIAG). AIAG provides additional support to automotive suppliers such as training in using the manual and related matters. Visit the AIAG website to obtain a copy of the complete manual, learn more about this initiative and other AIAG activities and publications.
