from Wikipedia
Control chart
One of the seven basic tools of quality
First described by: Walter A. Shewhart
Purpose: To determine whether a process should undergo a formal examination for quality-related problems

Control charts are graphical plots used in production control to determine whether quality and manufacturing processes are being controlled under stable conditions (ISO 7870-1).[1] Process data are plotted on the chart over time, and abnormalities are judged from the presence of points that depart from the established trend or fall outside the control limit lines. Control charts are classified into Shewhart control charts (ISO 7870-2)[2] and CUSUM (cumulative sum) control charts (ISO 7870-4).[3]

Control charts, also known as Shewhart charts (after Walter A. Shewhart) or process-behavior charts, are a statistical process control tool used to determine whether a manufacturing or business process is in a state of control. It is more appropriate to say that control charts are graphical devices for statistical process monitoring (SPM). Traditional control charts are mostly designed to monitor process parameters when the underlying form of the process distribution is known. However, more advanced techniques available in the 21st century allow incoming data streams to be monitored even without any knowledge of the underlying process distribution. Distribution-free control charts are becoming increasingly popular.[citation needed]

Overview

If analysis of the control chart indicates that the process is currently under control (i.e., is stable, with variation coming only from sources common to the process), then no corrections or changes to process control parameters are needed or desired. In addition, data from the process can be used to predict its future performance. If the chart indicates that the monitored process is not in control, analysis of the chart can help determine the sources of variation, since an out-of-control process results in degraded performance.[4] A process that is stable but operating outside desired (specification) limits (e.g., scrap rates may be in statistical control but above desired limits) needs to be improved through a deliberate effort to understand the causes of current performance and fundamentally improve the process.[5]

The control chart is one of the seven basic tools of quality control.[6] Typically, control charts are used for time-series data, also known as continuous data or variable data. They can also be used for data that have logical comparability (e.g., comparing samples that were all taken at the same time, or the performance of different individuals); however, the type of chart used for this requires consideration.[7]

History

The control chart was invented by Walter A. Shewhart working for Bell Labs in the 1920s.[8] The company's engineers had been seeking to improve the reliability of their telephony transmission systems. Because amplifiers and other equipment had to be buried underground, there was a stronger business need to reduce the frequency of failures and repairs. By 1920, the engineers had already realized the importance of reducing variation in a manufacturing process. Moreover, they had realized that continual process-adjustment in reaction to non-conformance actually increased variation and degraded quality. Shewhart framed the problem in terms of common- and special-causes of variation and, on May 16, 1924, wrote an internal memo introducing the control chart as a tool for distinguishing between the two. Shewhart's boss, George Edwards, recalled: "Dr. Shewhart prepared a little memorandum only about a page in length. About a third of that page was given over to a simple diagram which we would all recognize today as a schematic control chart. That diagram, and the short text which preceded and followed it set forth all of the essential principles and considerations which are involved in what we know today as process quality control."[9] Shewhart stressed that bringing a production process into a state of statistical control, where there is only common-cause variation, and keeping it in control, is necessary to predict future output and to manage a process economically.

Shewhart created the basis for the control chart and the concept of a state of statistical control by carefully designed experiments. While Shewhart drew from pure mathematical statistical theories, he understood that data from physical processes typically produce a "normal distribution curve" (a Gaussian distribution, also commonly referred to as a "bell curve"). He discovered that observed variation in manufacturing data did not always behave the same way as data in nature (Brownian motion of particles). Shewhart concluded that while every process displays variation, some processes display controlled variation that is natural to the process, while others display uncontrolled variation that is not present in the process causal system at all times.[10]

In 1924 or 1925, Shewhart's innovation came to the attention of W. Edwards Deming, then working at the Hawthorne facility. Deming later worked at the United States Department of Agriculture and became the mathematical advisor to the United States Census Bureau. Over the next half-century, Deming became the foremost champion and proponent of Shewhart's work. After the defeat of Japan at the close of World War II, Deming served as statistical consultant to the Supreme Commander for the Allied Powers. His ensuing involvement in Japanese life, and long career as an industrial consultant there, spread Shewhart's thinking, and the use of the control chart, widely in Japanese manufacturing industry throughout the 1950s and 1960s.

Bonnie Small worked at an Allentown plant in the 1950s, after the invention of the transistor. She used Shewhart's methods to improve plant performance in quality control and created up to 5,000 control charts. In 1958, the Western Electric Statistical Quality Control Handbook appeared, based on her writings, and led to wider use of control charts at AT&T.[11]

Chart details

A control chart consists of:

  • Points representing a statistic (e.g., a mean, range, proportion) of measurements of a quality characteristic in samples taken from the process at different times (i.e., the data)
  • The mean of this statistic, calculated from all the samples (e.g., the mean of the means, the mean of the ranges, the mean of the proportions), or calculated for a reference period against which change can be assessed; a median can similarly be used instead
  • A centre line is drawn at the value of the mean or median of the statistic
  • The standard deviation of the statistic (e.g., sqrt(variance) of the mean), again calculated from all the samples or from a reference period against which change can be assessed. In the case of XmR charts, this is strictly an approximation of the standard deviation: the moving-range-based estimate does not make the assumption of homogeneity of the process over time that the standard deviation makes (see the sketch following this list)
  • Upper and lower control limits (sometimes called "natural process limits") that indicate the threshold at which the process output is considered statistically 'unlikely' and are drawn typically at 3 standard deviations from the center line
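
As a concrete illustration of these components, the following minimal Python sketch (with invented measurement data) computes the centre line and 3-sigma limits for an individuals (XmR) chart, estimating sigma from the average moving range divided by the d2 constant for subgroups of size 2 (1.128), as described above.

```python
# Minimal sketch: centre line and 3-sigma limits for an individuals (XmR) chart.
# Sigma is approximated by the average moving range divided by the d2 constant
# for n = 2 (1.128), so no homogeneity of variance over time is assumed.

def xmr_limits(data):
    """Return (centre, lcl, ucl) for an individuals (X) chart."""
    centre = sum(data) / len(data)
    # Moving ranges: absolute differences between consecutive observations.
    moving_ranges = [abs(b - a) for a, b in zip(data, data[1:])]
    mr_bar = sum(moving_ranges) / len(moving_ranges)
    sigma_hat = mr_bar / 1.128  # d2 constant for subgroups of size 2
    return centre, centre - 3 * sigma_hat, centre + 3 * sigma_hat

measurements = [10.2, 9.9, 10.1, 10.4, 9.8, 10.0, 10.3, 9.7]  # illustrative only
centre, lcl, ucl = xmr_limits(measurements)
print(f"CL = {centre:.3f}, LCL = {lcl:.3f}, UCL = {ucl:.3f}")
```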

The chart may have other optional features, including:

  • More restrictive upper and lower warning or control limits, drawn as separate lines, typically two standard deviations above and below the center line. This is regularly used when a process needs tighter controls on variability.
  • Division into zones, with the addition of rules governing frequencies of observations in each zone
  • Annotation with events of interest, as determined by the Quality Engineer in charge of the process' quality
  • Action on special causes

(N.B.: there are several rule sets for the detection of signals; this is just one set, and the rule set used should be clearly stated.)

  1. Any point outside the control limits
  2. A run of 7 points all above or all below the central line - stop production
    • Quarantine and 100% check
    • Adjust the process
    • Check 5 consecutive samples
    • Continue the process
  3. A run of 7 points up or down - instructions as above
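
A rule set like this can be automated. The following is a rough Python sketch (not taken from any named standard) encoding the three rules above; note that the exact definition of a "run of 7 up or down" varies between writers, and here it is taken as 7 successive strictly increasing (or decreasing) points.

```python
# Sketch of signal detection for the rule set above: a point outside the
# control limits, a run of 7 on one side of the central line, or a run
# of 7 points steadily rising or falling.

def signals(data, centre, lcl, ucl, run=7):
    """Yield (index, rule name) for each detected signal."""
    for i, x in enumerate(data):
        if x < lcl or x > ucl:
            yield i, "point outside control limits"
    for i in range(len(data) - run + 1):
        window = data[i:i + run]
        if all(x > centre for x in window) or all(x < centre for x in window):
            yield i + run - 1, "run of 7 on one side of the central line"
        steps = [b - a for a, b in zip(window, window[1:])]
        if all(s > 0 for s in steps) or all(s < 0 for s in steps):
            yield i + run - 1, "run of 7 up or down"

for idx, rule in signals([10.0, 10.1, 10.2, 10.3, 10.4, 10.5, 10.6], 10.0, 9.0, 11.0):
    print(idx, rule)
```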

Chart usage

If the process is in control (and the process statistic is normal), 99.7300% of all the points will fall between the control limits. Any observations outside the limits, or systematic patterns within, suggest the introduction of a new (and likely unanticipated) source of variation, known as a special-cause variation. Since increased variation means increased quality costs, a control chart "signaling" the presence of a special-cause requires immediate investigation.

This makes the control limits very important decision aids. The control limits provide information about the process behavior and have no intrinsic relationship to any specification targets or engineering tolerance. In practice, the process mean (and hence the centre line) may not coincide with the specified value (or target) of the quality characteristic because the process design simply cannot deliver the process characteristic at the desired level.

Control charts omit specification limits or targets because of the tendency of those involved with the process (e.g., machine operators) to focus on performing to specification when in fact the least-cost course of action is to keep process variation as low as possible. Attempting to make a process whose natural centre is not the same as the target perform to the target specification increases process variability and increases costs significantly, and is the cause of much inefficiency in operations. Process capability studies do, however, examine the relationship between the natural process limits (the control limits) and specifications.

The purpose of control charts is to allow simple detection of events that are indicative of an increase in process variability.[12] This simple decision can be difficult where the process characteristic is continuously varying; the control chart provides statistically objective criteria of change. When a change is detected and considered good, its cause should be identified and possibly made the new way of working; where the change is bad, its cause should be identified and eliminated.

The purpose in adding warning limits or subdividing the control chart into zones is to provide early notification if something is amiss. Instead of immediately launching a process improvement effort to determine whether special causes are present, the Quality Engineer may temporarily increase the rate at which samples are taken from the process output until it is clear that the process is truly in control. Note that with three-sigma limits, common-cause variations result in signals less than once out of every twenty-two points for skewed processes and about once out of every three hundred seventy (1/370.4) points for normally distributed processes.[13] The two-sigma warning levels will be reached about once for every twenty-two (1/21.98) plotted points in normally distributed data. (For example, the means of sufficiently large samples drawn from practically any underlying distribution whose variance exists are normally distributed, according to the Central Limit Theorem.)
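
These rates follow directly from normal tail probabilities and can be checked with a few lines of Python using only the standard library (the two-sided tail beyond k standard deviations is erfc(k/√2)):

```python
import math

# Two-sided probability of a standard normal observation beyond k sigma.
def tail_beyond(k):
    return math.erfc(k / math.sqrt(2))  # equals 2 * (1 - Phi(k))

p3 = tail_beyond(3)  # ~0.0027: about one false signal per 370.4 points
p2 = tail_beyond(2)  # ~0.0455: warning limits reached about once per 22 points
print(f"3-sigma: p = {p3:.5f}, one signal per {1 / p3:.1f} points")
print(f"2-sigma: p = {p2:.5f}, one crossing per {1 / p2:.1f} points")
```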

Choice of limits

Shewhart set 3-sigma (3-standard-deviation) limits on the following basis:

  • the coarse result of Chebyshev's inequality that, for any probability distribution, the probability of an outcome greater than k standard deviations from the mean is at most 1/k²
  • the finer result of the Vysochanskij–Petunin inequality that, for any unimodal probability distribution, the probability of an outcome greater than k standard deviations from the mean is at most 4/(9k²)
  • the fact that, in the normal distribution, a very common probability distribution, 99.7% of the observations occur within three standard deviations of the mean

Shewhart summarized the conclusions by saying:

... the fact that the criterion which we happen to use has a fine ancestry in highbrow statistical theorems does not justify its use. Such justification must come from empirical evidence that it works. As the practical engineer might say, the proof of the pudding is in the eating.[14]

Although he initially experimented with limits based on probability distributions, Shewhart ultimately wrote:

Some of the earliest attempts to characterize a state of statistical control were inspired by the belief that there existed a special form of frequency function f and it was early argued that the normal law characterized such a state. When the normal law was found to be inadequate, then generalized functional forms were tried. Today, however, all hopes of finding a unique functional form f are blasted.[15]

The control chart is intended as a heuristic. Deming insisted that it is not a hypothesis test and is not motivated by the Neyman–Pearson lemma. He contended that the disjoint nature of population and sampling frame in most industrial situations compromised the use of conventional statistical techniques. Deming's intention was to seek insights into the cause system of a process ...under a wide range of unknowable circumstances, future and past....[citation needed] He claimed that, under such conditions, 3-sigma limits provided ... a rational and economic guide to minimum economic loss... from the two errors:[citation needed]

  1. Ascribe a variation or a mistake to a special cause (assignable cause) when in fact the cause belongs to the system (common cause). (Also known as a Type I error or False Positive)
  2. Ascribe a variation or a mistake to the system (common causes) when in fact the cause was a special cause (assignable cause). (Also known as a Type II error or False Negative)

Calculation of standard deviation

As for the calculation of control limits, the standard deviation (error) required is that of the common-cause variation in the process. Hence, the usual estimator, in terms of sample variance, is not used as this estimates the total squared-error loss from both common- and special-causes of variation.

An alternative method is to use the relationship between the range of a sample and its standard deviation derived by Leonard H. C. Tippett, as an estimator which tends to be less influenced by the extreme observations which typify special-causes.[citation needed]
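
As a sketch of this approach (with invented subgroup data), sigma is estimated as the average subgroup range divided by the d2 constant for the subgroup size; the d2 values below are the standard table constants relating the expected range to the standard deviation.

```python
# Range-based sigma estimate: sigma_hat = R_bar / d2, where d2 depends on
# subgroup size n (values from standard SPC tables).
D2 = {2: 1.128, 3: 1.693, 4: 2.059, 5: 2.326}

def sigma_from_ranges(subgroups):
    n = len(subgroups[0])
    r_bar = sum(max(s) - min(s) for s in subgroups) / len(subgroups)
    return r_bar / D2[n]

subs = [[9.9, 10.1, 10.0, 10.2], [10.3, 9.8, 10.1, 10.0], [10.0, 10.2, 9.9, 10.1]]
print(f"sigma_hat = {sigma_from_ranges(subs):.3f}")  # illustrative data only
```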

Rules for detecting signals

The most common sets are the Western Electric rules, Wheeler's rules (equivalent to the Western Electric zone tests), and the Nelson rules.

There has been particular controversy as to how long a run of observations, all on the same side of the centre line, should count as a signal, with 6, 7, 8 and 9 all being advocated by various writers.

The most important principle for choosing a set of rules is that the choice be made before the data is inspected. Choosing rules once the data have been seen tends to increase the Type I error rate owing to testing effects suggested by the data.

Alternative bases

In 1935, the British Standards Institution, under the influence of Egon Pearson and against Shewhart's spirit, adopted control charts, replacing 3-sigma limits with limits based on percentiles of the normal distribution. This move continues to be represented by John Oakland and others but has been widely deprecated by writers in the Shewhart–Deming tradition.

Performance of control charts

When a point falls outside the limits established for a given control chart, those responsible for the underlying process are expected to determine whether a special cause has occurred. If one has, it is appropriate to determine if the results with the special cause are better than or worse than results from common causes alone. If worse, then that cause should be eliminated if possible. If better, it may be appropriate to intentionally retain the special cause within the system producing the results.[citation needed]

Even when a process is in control (that is, no special causes are present in the system), there is approximately a 0.27% probability of a point exceeding 3-sigma control limits. So even an in-control process plotted on a properly constructed control chart will eventually signal the possible presence of a special cause, even though one may not have actually occurred. For a Shewhart control chart using 3-sigma limits, this false alarm occurs on average once every 1/0.0027 or 370.4 observations. Therefore, the in-control average run length (or in-control ARL) of a Shewhart chart is 370.4.[citation needed]

Meanwhile, if a special cause does occur, it may not be of sufficient magnitude for the chart to produce an immediate alarm condition. If a special cause occurs, one can describe that cause by measuring the change in the mean and/or variance of the process in question. When those changes are quantified, it is possible to determine the out-of-control ARL for the chart.[citation needed]

It turns out that Shewhart charts are quite good at detecting large changes in the process mean or variance, as their out-of-control ARLs are fairly short in these cases. However, for smaller changes (such as a 1- or 2-sigma change in the mean), the Shewhart chart does not detect these changes efficiently. Other types of control charts have been developed, such as the EWMA chart, the CUSUM chart and the real-time contrasts chart, which detect smaller changes more efficiently by making use of information from observations collected prior to the most recent data point.[17]
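
For illustration, here is a minimal sketch of the EWMA recursion with the standard textbook time-varying limits; λ and L are tuning parameters (λ = 0.2 and L = 3 are common defaults), and smaller λ weights older observations more heavily, improving sensitivity to small shifts.

```python
import math

# EWMA chart sketch: z_t = lam * x_t + (1 - lam) * z_{t-1}, with limits
# mu0 +/- L * sigma * sqrt(lam / (2 - lam) * (1 - (1 - lam)**(2 * t))).
def ewma_signals(data, mu0, sigma, lam=0.2, L=3.0):
    z = mu0  # the EWMA statistic starts at the target / centre line
    for t, x in enumerate(data, start=1):
        z = lam * x + (1 - lam) * z
        half_width = L * sigma * math.sqrt(lam / (2 - lam) * (1 - (1 - lam) ** (2 * t)))
        if abs(z - mu0) > half_width:
            yield t, z  # signal: the EWMA has drifted outside its limits
```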

Many control charts work best for numeric data with Gaussian assumptions. The real-time contrasts chart was proposed to monitor processes with complex characteristics, e.g., high-dimensional data, a mix of numerical and categorical variables, missing values, non-Gaussian distributions, and non-linear relationships.[17]

Criticisms

Several authors have criticised the control chart on the grounds that it violates the likelihood principle.[citation needed] However, the principle is itself controversial and supporters of control charts further argue that, in general, it is impossible to specify a likelihood function for a process not in statistical control, especially where knowledge about the cause system of the process is weak.[citation needed]

Some authors have criticised the use of average run lengths (ARLs) for comparing control chart performance, because that average usually follows a geometric distribution, which has high variability and therefore makes such comparisons difficult.[citation needed]

Some authors have criticized the fact that most control charts focus on numeric data. Nowadays, process data can be much more complex, e.g., non-Gaussian, a mix of numerical and categorical, or missing-valued.[17]

Types of charts

| Chart | Process observation | Observation relationship | Observation type | Size of shift to detect |
|---|---|---|---|---|
| x̄ and R chart | Quality characteristic measurement within one subgroup | Independent | Variables | Large (≥ 1.5σ) |
| x̄ and s chart | Quality characteristic measurement within one subgroup | Independent | Variables | Large (≥ 1.5σ) |
| Shewhart individuals control chart (ImR chart or XmR chart) | Quality characteristic measurement for one observation | Independent | Variables | Large (≥ 1.5σ) |
| Three-way chart | Quality characteristic measurement within one subgroup | Independent | Variables | Large (≥ 1.5σ) |
| p-chart | Fraction nonconforming within one subgroup | Independent | Attributes | Large (≥ 1.5σ) |
| np-chart | Number nonconforming within one subgroup | Independent | Attributes | Large (≥ 1.5σ) |
| c-chart | Number of nonconformances within one subgroup | Independent | Attributes | Large (≥ 1.5σ) |
| u-chart | Nonconformances per unit within one subgroup | Independent | Attributes | Large (≥ 1.5σ) |
| EWMA chart | Exponentially weighted moving average of quality characteristic measurement within one subgroup | Independent | Attributes or variables | Small (< 1.5σ) |
| CUSUM chart | Cumulative sum of quality characteristic measurement within one subgroup | Independent | Attributes or variables | Small (< 1.5σ) |
| Time series model | Quality characteristic measurement within one subgroup | Autocorrelated | Attributes or variables | N/A |
| Regression control chart | Quality characteristic measurement within one subgroup | Dependent on process control variables | Variables | Large (≥ 1.5σ) |

Some practitioners also recommend the use of Individuals charts for attribute data, particularly when the assumptions of either binomially distributed data (p- and np-charts) or Poisson-distributed data (u- and c-charts) are violated.[18] Two primary justifications are given for this practice. First, normality is not necessary for statistical control, so the Individuals chart may be used with non-normal data.[19] Second, attribute charts derive the measure of dispersion directly from the mean proportion (by assuming a probability distribution), while Individuals charts derive the measure of dispersion from the data, independent of the mean, making Individuals charts more robust than attribute charts to violations of the assumptions about the distribution of the underlying population.[20] It is sometimes noted that the substitution of the Individuals chart works best for large counts, when the binomial and Poisson distributions approximate a normal distribution, i.e., when the number of trials n > 1000 for p- and np-charts or λ > 500 for u- and c-charts.

Critics of this approach argue that control charts should not be used when their underlying assumptions are violated, such as when process data is neither normally distributed nor binomially (or Poisson) distributed. Such processes are not in control and should be improved before the application of control charts. Additionally, application of the charts in the presence of such deviations increases the type I and type II error rates of the control charts, and may make the chart of little practical use.[citation needed]

from Grokipedia
A control chart, also known as a Shewhart chart, is a graphical tool in statistical process control (SPC) used to monitor, control, and improve process performance by plotting data points over time against predefined upper and lower control limits, with a central line representing the process average. These charts distinguish between common cause variation (random fluctuations inherent to the process) and special cause variation (unusual events signaling instability or the need for corrective action).

Developed by physicist Walter A. Shewhart in 1924 while working at Bell Telephone Laboratories, the control chart emerged as a response to manufacturing inconsistencies observed during early telephone equipment production, marking the foundation of modern quality control practices. Shewhart's innovation was first documented in an internal memo on May 16, 1924, where he proposed using probability-based limits (typically set at three standard deviations from the mean) to detect deviations that could indicate assignable causes of variation. This approach revolutionized industrial statistics by shifting the focus from inspection to prevention, influencing subsequent quality methodologies.

Control charts are categorized primarily into two types based on the nature of the data: those for variables (continuous measurements, such as dimensions or weights) and those for attributes (discrete counts, such as defects or nonconformities). Common variable charts include the X-bar chart for subgroup means and the R chart for subgroup ranges, while attribute charts encompass the p-chart for proportions defective and the c-chart for total defects. Selection of the appropriate chart depends on the data type, subgroup size, and process characteristics, ensuring accurate detection of shifts, trends, or instability.

Widely applied across industries including manufacturing, healthcare, and services, control charts enable real-time monitoring to maintain stability, reduce waste, and enhance quality, with ongoing advancements incorporating software for automated analysis and integration with machine learning for predictive insights.

Introduction

Definition and Purpose

A control chart is a graphical tool that displays a time-sequenced plot of points from a process, accompanied by a centerline representing the average and upper and lower control limits derived from statistical measures of variability, enabling the assessment of process performance over time. This visualization allows practitioners to observe patterns in the data and determine whether the process remains stable or exhibits signals of change. The primary purpose of a control chart is to detect shifts or trends in the process mean or variability, facilitating timely interventions to prevent defects and maintain consistent quality output.

Within the framework of statistical process control (SPC), which employs statistical methods to monitor, control, and improve process performance, control charts play a central role by distinguishing between common cause variation (random, inherent fluctuations expected in a stable process) and special cause variation arising from identifiable external factors. This differentiation supports proactive decision-making to sustain stability without overreacting to normal fluctuations. For instance, in manufacturing, a control chart might track the dimensions of machined parts collected at regular intervals, alerting operators to potential issues if points exceed the control limits, thereby ensuring product conformity and reducing waste.

Basic Components

A control chart is a graphical tool that displays points representing measurements of a quality characteristic plotted sequentially over time or sample number. The x-axis typically denotes time or the order of observation, while the y-axis shows the measured values, providing a visual timeline of process behavior.

At the core of the chart is the centerline, which represents the average value of the process when it is in a state of statistical control. This centerline is calculated as the arithmetic mean of the plotted data points, given by the formula

$\bar{x} = \frac{\sum x_i}{n}$

where $x_i$ are the individual measurements and $n$ is the number of points. Parallel to this centerline are two horizontal lines: the upper control limit (UCL) and the lower control limit (LCL), which define the boundaries within which process variation is expected under stable conditions. The space between the UCL and LCL is often divided into zones to facilitate interpretation, with the region between the centerline and each limit further subdivided for assessing patterns in the data. These components together enable the chart to monitor process stability by highlighting deviations from expected behavior.

In contrast to run charts, which simply plot data over time with a central line such as the median but lack statistically derived limits, control charts incorporate the UCL and LCL to differentiate between normal process variation and unusual shifts.

Historical Development

Origins with Shewhart

The origins of the control chart trace back to the work of Walter A. Shewhart at Bell Telephone Laboratories in the early 1920s, where he sought to apply statistical methods to monitor and improve manufacturing processes. On May 16, 1924, Shewhart issued an internal memorandum to his supervisor, George D. Edwards, proposing the use of charts to plot sample averages over time as a means to distinguish between common and special causes of variation in production. This memo, often regarded as the first documented prototype of a control chart, emerged amid post-World War I challenges in the telephone industry, including increased demand for reliable equipment and persistent quality inconsistencies in manufacturing components like vacuum tubes and switches.

Shewhart's early concepts centered on the idea of statistical control, where process data would be plotted against dynamically calculated limits to detect deviations signaling assignable causes of variation. A pivotal innovation was the introduction of three-sigma control limits, derived from the assumption of a normal distribution for process measurements, which would encompass approximately 99.7% of observations from a stable process and flag outliers as potential issues requiring intervention. These limits provided a rational, economically grounded criterion for intervention, balancing the costs of over-detection against the risks of undetected defects.

Shewhart continued refining these ideas through the late 1920s, collaborating with colleagues at Bell Laboratories to test them on real manufacturing data. His comprehensive theoretical framework was first published in 1931 in the book Economic Control of Quality of Manufactured Product, which formalized control charts as tools for achieving economic efficiency in manufacturing by integrating statistical theory with practical application. This work laid the groundwork for statistical process control, emphasizing the distinction between inherent process variability and external disruptions.

Post-War Adoption and Evolution

Following World War II, W. Edwards Deming played a pivotal role in disseminating control chart methodologies internationally, particularly in Japan. Invited by the Union of Japanese Scientists and Engineers in 1950, Deming delivered lectures on statistical quality control, emphasizing Shewhart control charts to distinguish common from special causes of variation and foster continuous process improvement. His efforts during the U.S. occupation contributed to Japan's post-war industrial revival, igniting a quality revolution that transformed manufacturing sectors like automotive and electronics by integrating control charts into everyday operations. This influence culminated in the establishment of the Deming Prize in 1951, an annual award by the Japanese Union of Scientists and Engineers to recognize excellence in quality control practices, which further institutionalized the use of control charts nationwide.

In the United States, control charts gained formal traction through military procurement and standardization efforts. The U.S. Department of Defense issued MIL-STD-105A in 1950, incorporating attribute-based sampling procedures derived from statistical principles, including elements aligned with control chart monitoring for process inspection during wartime production transitions to peacetime. This standard facilitated the broader adoption of control charts in defense contracting and manufacturing, ensuring consistent quality oversight. Building on this, the American National Standards Institute and the American Society for Quality developed ANSI/ASQ Z1.4 in 1971, providing guidelines for attribute inspection sampling that complemented control chart applications in industry, promoting their use beyond military contexts for ongoing process monitoring.

The post-war period also saw refinements to attribute control charts, building on their initial development in the 1930s and 1940s at Bell Laboratories, where p-charts and np-charts were introduced for monitoring defect rates in production. During the 1950s and 1960s, these charts evolved through practical applications in diverse industries, with enhancements in limit calculations and sensitivity to small shifts, driven by wartime lessons and peacetime efficiency demands; for instance, adaptations for batch processes improved detection of non-conformities in high-volume manufacturing.

International standardization accelerated in the 1990s with the ISO 7870 series, offering comprehensive guidelines for control chart use. First published in 1993 as a general guide (ISO 7870:1993), the series provided unified procedures for establishing limits, selecting chart types, and interpreting signals, facilitating global adoption in quality management systems. Subsequent revisions, such as ISO 7870-1:2007, expanded on philosophical underpinnings and chart varieties, emphasizing their role in proactive process control, while ISO 7870-2:2013 specifically addressed Shewhart control charts. Recent milestones include the 2020 update to ISO 22514-3, which integrates control chart principles into machine performance studies for discrete parts, supporting modern applications like automated data collection in digital environments while referencing ISO 7870 for chart construction and validation.

Fundamental Principles

Statistical Foundations

Control charts are grounded in the principles of probability and statistics, particularly the normal distribution, which underpins the determination of control limits. Walter Shewhart developed the foundational approach in 1924, establishing limits at three standard deviations (3σ) from the process mean, as this encompasses approximately 99.73% of data points in a stable process assuming normality. This empirical rule balances the risk of false alarms (Type I errors) against the detection of significant shifts in process monitoring. The 3σ criterion was chosen not solely for probabilistic purity but for its practical effectiveness in distinguishing random fluctuations from assignable causes of variation.

The application of control charts parallels hypothesis testing in statistical inference, where the null hypothesis posits a stable process under common-cause variation, and out-of-control signals represent rejection of this hypothesis in favor of special-cause variation. Each plotted point or pattern triggers a test against the null, with control limits defining the critical region (e.g., beyond 3σ, corresponding to a low probability, about 0.27%, of false rejection under normality). This framework allows sequential monitoring without predefined sample sizes, adapting to ongoing data collection while controlling overall error rates through the rarity of signals in stable conditions.

Rational subgrouping forms a critical sampling strategy to isolate within-subgroup variation, which primarily reflects common causes, while between-subgroup differences highlight potential special causes. Shewhart advocated forming subgroups from consecutive units produced under uniform conditions to minimize external influences and maximize sensitivity to process shifts. For instance, in variables charts, subgroups of size n (typically 4–5) are selected to estimate short-term variability accurately, ensuring control limits reflect true process capability rather than sampling artifacts.

While control charts traditionally assume normality for precise probabilistic interpretation, this requirement is often relaxed due to the central limit theorem (CLT), which states that the distribution of sample means (or subgroup statistics) approaches normality as subgroup size increases, even if individual observations are non-normal. For small subgroups (n ≥ 2), the CLT provides approximate normality, making 3σ limits robust for averages and ranges in many practical scenarios. However, severe skewness or outliers may still inflate false alarms, underscoring the need for data transformation or non-parametric alternatives when CLT approximations falter.
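
The CLT argument can be illustrated numerically. The following sketch (illustrative only, using an exponential distribution as an extreme case of skewness) estimates how often subgroup means of size n = 5 fall outside mean ± 3σ/√n; the observed rate comes out above the nominal 0.27%, illustrating the caveat that severe skewness can still inflate false alarms even though subgroup averaging moderates it.

```python
import random
import statistics

# Monte Carlo illustration of the CLT rationale for 3-sigma limits:
# subgroup means of a heavily skewed (exponential) process.
random.seed(1)
n, num_subgroups = 5, 100_000
mu = sigma = 1.0  # exponential(1) has mean 1 and standard deviation 1
limit = 3 * sigma / n ** 0.5

outside = sum(
    abs(statistics.fmean(random.expovariate(1.0) for _ in range(n)) - mu) > limit
    for _ in range(num_subgroups)
)
print(f"fraction beyond 3-sigma limits: {outside / num_subgroups:.4f} (normal theory: 0.0027)")
```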

Types of Process Variation

In the framework of statistical process control, process variation is categorized into two primary types: common cause variation and special cause variation. This dichotomy, originally introduced by Walter A. Shewhart as "chance causes" and "assignable causes" of variation, forms the foundational principle for interpreting control charts. Later refined by W. Edwards Deming into the terms "common" and "special," it distinguishes between predictable, inherent fluctuations and unpredictable, external disruptions in a manufacturing or production process.

Common cause variation refers to the random, inherent fluctuations that are an intrinsic part of any stable process, arising from numerous small, unavoidable factors within the system itself. These variations are predictable in aggregate, as they follow a consistent pattern over time and affect all outputs similarly, contributing to the natural "noise" in the process. In a stable system, common cause variation alone indicates control, where the output remains within expected limits without external intervention, though it may still lead to defects if the variation is too wide relative to specifications. For example, gradual machine wear that causes minor, consistent shifts in product dimensions exemplifies common cause variation, as it stems from the normal operation of the equipment.

Special cause variation, in contrast, involves non-random, assignable shifts due to specific, identifiable external factors that disrupt process stability. These variations are unpredictable and irregular, often resulting in outliers or trends that signal an unstable system requiring immediate corrective action to restore control. Unlike common causes, special causes are not inherent to the process and can be traced to particular events, making them amenable to targeted removal or correction. An example is tool breakage during operation, which suddenly alters output quality and introduces abrupt deviations beyond normal limits.

Shewhart's dichotomy underpins control chart signals, where points within limits reflect common cause variation (indicating stability and predictability), while excursions beyond limits or non-random patterns alert to special causes (demanding investigation and correction to prevent ongoing instability). This classification enables practitioners to focus improvement efforts appropriately: reducing common cause variation requires systemic changes to narrow the process spread, whereas addressing special causes involves eliminating transient anomalies to achieve and maintain stability.

Construction of Control Charts

Establishing Control Limits

Control limits in a control chart define the boundaries within which process variation is expected to occur under stable conditions, typically set symmetrically around the centerline to encompass common cause variation while flagging potential special causes. These limits are statistically derived to minimize false alarms while ensuring timely detection of process shifts. The standard approach, pioneered by Walter Shewhart, uses three standard deviations (3-sigma) from the process mean, providing a balance between sensitivity and reliability.

For an individuals control chart, which monitors single measurements without subgroups, the upper control limit (UCL) and lower control limit (LCL) are calculated as follows:

$UCL = \bar{x} + 3\sigma$
$LCL = \bar{x} - 3\sigma$

where $\bar{x}$ is the mean of the individual observations, and $\sigma$ is the estimated standard deviation. This assumes $\sigma$ is known or reliably estimated from baseline data, ensuring the limits reflect the inherent process variability.

In subgroup-based charts, such as the X-bar and R chart for monitoring averages and ranges, control limits incorporate subgroup size to account for reduced variability in averages. The UCL and LCL for the X-bar chart are given by:

$UCL = \bar{\bar{x}} + A_2 \bar{R}$
$LCL = \bar{\bar{x}} - A_2 \bar{R}$

where $\bar{\bar{x}}$ is the grand average of subgroup means, $\bar{R}$ is the average subgroup range, and $A_2$ is a constant from standard tables that adjusts for subgroup size $n$ (e.g., $A_2 = 0.729$ for $n = 4$), derived as $A_2 = 3 / (d_2 \sqrt{n})$.
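
To make the subgroup formulas concrete, here is a brief Python sketch (with invented subgroups of size 4) using the tabulated A2 constants mentioned above.

```python
# Sketch of X-bar chart limits from subgroup data, using tabulated
# A2 constants (A2 = 3 / (d2 * sqrt(n)), values from standard SPC tables).
A2 = {2: 1.880, 3: 1.023, 4: 0.729, 5: 0.577}

def xbar_limits(subgroups):
    n = len(subgroups[0])
    means = [sum(s) / n for s in subgroups]
    grand_mean = sum(means) / len(means)          # centre line (x double-bar)
    r_bar = sum(max(s) - min(s) for s in subgroups) / len(subgroups)
    return grand_mean - A2[n] * r_bar, grand_mean, grand_mean + A2[n] * r_bar

subs = [[9.9, 10.1, 10.0, 10.2], [10.3, 9.8, 10.1, 10.0], [10.0, 10.2, 9.9, 10.1]]
lcl, cl, ucl = xbar_limits(subs)
print(f"LCL = {lcl:.3f}, CL = {cl:.3f}, UCL = {ucl:.3f}")
```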