Acceptance sampling

from Wikipedia

Acceptance sampling uses statistical sampling to determine whether to accept or reject a production lot of material. It has been a common quality control technique used in industry.

It is usually done as products leave the factory, or in some cases even within the factory. Most often a producer supplies a consumer with several items, and a decision to accept or reject the lot is made by determining the number of defective items in a sample from the lot. The lot is accepted if the number of defectives falls at or below the acceptance number; otherwise the lot is rejected.[1]

In general, acceptance sampling is employed when one or several of the following hold:[2]

  • testing is destructive;
  • the cost of 100% inspection is very high; or
  • 100% inspection takes too long.

A wide variety of acceptance sampling plans is available. For example, multiple sampling plans use more than two samples to reach a conclusion. A shorter examination period and smaller sample sizes are features of this type of plan. Although the samples are taken at random, the sampling procedure is still reliable.[3]

History


Acceptance sampling procedures became common during World War II. Sampling plans, such as MIL-STD-105, were developed by Harold F. Dodge and others and became frequently used as standards.

More recently, quality assurance broadened the scope beyond final inspection to include all aspects of manufacturing. Broader quality management systems include methodologies such as statistical process control, HACCP, six sigma, and ISO 9000. Some use of acceptance sampling still remains.

Rationale


Sampling provides one rational means of verification that a production lot conforms to the requirements of technical specifications. 100% inspection does not guarantee 100% compliance and is too time-consuming and costly. Rather than evaluating all items, a specified sample is taken, inspected or tested, and a decision is made about accepting or rejecting the entire production lot.

Sampling plans have known risks: an acceptable quality limit (AQL) and a rejectable quality level, such as lot tolerance percent defective (LTPD), are part of the operating characteristic curve of the sampling plan. These are primarily statistical risks and do not necessarily imply that a defective product is intentionally being made or accepted. Plans can have a known average outgoing quality limit (AOQL).

Acceptance sampling for attributes


A single sampling plan for attributes is a statistical method by which the lot is accepted or rejected on the basis of one sample.[4] Suppose that we have a lot of size N; a random sample of size n is selected from the lot; and an acceptance number c is determined. If the number of nonconforming items is less than or equal to c, the lot is accepted; if the number of nonconforming items is greater than c, the lot is not accepted. The design of a single sampling plan requires the selection of the sample size n and the acceptance number c.
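As a minimal sketch of the single-sampling rule just described, the following Python snippet draws a random sample and applies the accept/reject decision; the lot contents, sample size n, and acceptance number c are illustrative, not taken from any standard.

```python
import random

def inspect_lot(lot, n, c, seed=1):
    """Single sampling plan: draw a random sample of n items without
    replacement and accept the lot iff at most c are nonconforming.
    Items are booleans: True marks a nonconforming unit."""
    rng = random.Random(seed)      # fixed seed for reproducibility
    sample = rng.sample(lot, n)
    defects = sum(sample)
    return ("accept" if defects <= c else "reject"), defects

# A hypothetical lot of N=500 items, 10 of them nonconforming.
lot = [True] * 10 + [False] * 490
decision, d = inspect_lot(lot, n=50, c=2)
```

In practice n and c come from a published table indexed by lot size and AQL; the function above only applies the decision rule once they are fixed.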

MIL-STD-105 was a United States defense standard that provided procedures and tables for sampling by attributes (pass or fail characteristic). MIL-STD-105E was cancelled in 1995 but is available in related documents such as ANSI/ASQ Z1.4, "Sampling Procedures and Tables for Inspection by Attributes." Several levels of inspection are provided and can be indexed to several AQLs. The sample size is specified and the basis for acceptance or rejection (number of defects) is provided. MIL-STD-1916 is currently the preferred method of sampling for all Department of Defense (DoD) contracts.

Variables sampling plan


When a measured characteristic produces a number, other sampling plans, such as those based on MIL-STD-414, are often used. Compared with attribute sampling plans, these often use a smaller sample size for the same indexed AQL.

from Grokipedia
Acceptance sampling is a statistical quality control method used to inspect a representative sample from a lot or batch of products to decide whether the entire lot meets specified quality standards and should be accepted or rejected.[1] Developed in the early 20th century at Bell Laboratories (part of Western Electric, an AT&T subsidiary), it emerged as a practical alternative to 100% inspection, balancing inspection costs with the risk of accepting defective lots.[2] Key pioneers include Walter A. Shewhart, whose foundational work on statistical process control in 1924 influenced sampling techniques, and Harold F. Dodge and Harry G. Romig, who published early sampling inspection tables in 1928 and advanced the field through their 1941 tables for average outgoing quality limit (AOQL) plans.[3] The method gained widespread adoption during World War II, when the U.S. military implemented statistical training programs; this wartime effort also led to the formation of the American Society for Quality (ASQ) in 1946.[2]

There are two primary types of acceptance sampling plans: attributes sampling, which classifies items as defective or non-defective based on qualitative criteria (e.g., presence of a visual flaw), and variables sampling, which uses quantitative measurements of product characteristics (e.g., dimensions or weight) to infer lot quality more efficiently.[1] Attributes plans, standardized in documents like ANSI/ASQ Z1.4 (formerly MIL-STD-105E), specify sample sizes and acceptance numbers for single, double, or multiple sampling schemes, often operating under an acceptable quality limit (AQL) to control the proportion of defectives.[4] Variables plans, outlined in ANSI/ASQ Z1.9 (formerly MIL-STD-414), leverage statistical inference, such as hypothesis testing on means or variances, to reduce sample sizes compared to attributes methods while providing equivalent protection against poor quality.[3]

These plans are designed to minimize producer's risk (rejecting a good lot) and consumer's risk (accepting a bad lot), typically evaluated through operating characteristic (OC) curves that plot acceptance probability against lot quality levels.[5] Widely applied in manufacturing, incoming raw materials inspection, and outgoing product verification, acceptance sampling ensures cost-effective quality assurance without exhaustive testing, though it does not improve the process itself, unlike statistical process control, which focuses on ongoing monitoring.[2] Modern extensions include Bayesian approaches for incorporating prior knowledge and adaptive plans that adjust based on historical performance, reflecting ongoing research in fields like electronics and pharmaceuticals.[5] Despite its utility, critics like W. Edwards Deming argued it encourages complacency in suppliers, advocating process improvement over mere lot screening.[6]

Fundamentals

Definition and Purpose

Acceptance sampling is a statistical quality control procedure in which a random sample is selected from a lot or batch of products to evaluate whether the entire lot meets predefined quality criteria, resulting in either acceptance or rejection of the lot.[7][8] This method applies to both attribute sampling, which assesses discrete characteristics like the presence of defects, and variables sampling, which measures continuous traits such as dimensions or weights.[4] Key terminology includes the lot, defined as the aggregate batch submitted for inspection; the sample, a randomly drawn subset from the lot for examination; the acceptance number (often denoted c or Ac), the maximum allowable number of defects or nonconformities in the sample for the lot to be accepted; and the rejection number (denoted r or Re), the threshold of defects that triggers lot rejection.[7][8] These elements form the basis of sampling plans, which specify sample size and decision rules to ensure representative evaluation.[9] The primary purpose of acceptance sampling is to balance the costs of inspection against the risks associated with quality decisions, thereby reducing the need for resource-intensive 100% inspection while upholding acceptable quality standards.[10] It mitigates the producer's risk (α), the probability of rejecting a lot that meets the acceptable quality level (AQL), and the consumer's risk (β), the probability of accepting a lot exceeding the lot tolerance percent defective (LTPD).[11][8] By serving as an intermediate approach between no inspection and full inspection, it efficiently determines lot acceptability without estimating overall quality.[9] This technique emerged as an alternative to complete inspection amid wartime production constraints during World War II, when rapid output was prioritized.[12] Operating characteristic (OC) curves illustrate the performance of these plans by plotting the probability of lot acceptance against varying defect levels, aiding in risk assessment.[13]

Key Concepts

Acceptance sampling relies on several core parameters to define quality thresholds and associated risks, ensuring that sampling plans balance efficiency with reliability in quality assurance. The Acceptable Quality Limit (AQL) represents the maximum percentage of defects that is considered tolerable for a process, serving as the baseline for designing sampling plans where lots at or below this level have a high probability of acceptance.[7] For instance, an AQL of 1% indicates that lots with 1% or fewer defects are likely to be accepted, reflecting a satisfactory process average over a series of lots.[8] In contrast, the Lot Tolerance Percent Defective (LTPD) specifies the poorest quality level in an individual lot that should be rejected with high probability, typically 90% (corresponding to a consumer's risk β of 0.10), and is expressed as a percentage defective associated with a low consumer risk.[7][14] An example is an LTPD of 4%, where lots exceeding this defect rate are expected to be rejected to protect the consumer from poor quality.[8] These quality levels are evaluated through the lens of producer's and consumer's risks, which quantify the errors inherent in sampling decisions. The producer's risk (α), or Type I error, is the probability of incorrectly rejecting a good lot at the AQL, typically set at 0.05 to ensure at least 95% acceptance for satisfactory quality.[15] The consumer's risk (β), or Type II error, is the probability of accepting a bad lot at the LTPD, commonly valued at 0.10, meaning a 10% chance of passing unacceptable quality.[14] The discrimination ratio, defined as the ratio of LTPD to AQL, measures a plan's ability to distinguish between acceptable and unacceptable quality levels; a higher ratio, such as 4:1, indicates stronger differentiation between the two thresholds. 
Another important metric is the Average Outgoing Quality (AOQ), which estimates the expected proportion of defects in the product shipped after sampling and any rectification of rejected lots.[7] This value helps assess the overall quality protection provided by the sampling plan, particularly when rejected lots are fully inspected and defects are replaced, resulting in an AOQ that peaks at an intermediate defect level before declining.[8] These concepts underpin the operating characteristic (OC) curves used to evaluate plan performance.[8]

Historical Development

Origins in Quality Control

The roots of acceptance sampling trace back to pre-20th century manufacturing practices, where product inspection emerged as a fundamental aspect of the emerging factory system in Great Britain during the mid-1750s, emphasizing manual checks to ensure basic conformity amid the onset of the Industrial Revolution.[16] These early efforts relied on 100% inspection by skilled craftsmen or overseers, but lacked statistical rigor, often leading to inefficiencies in large-scale production.[17] Formalization of quality control practices began in the early 1900s, with significant advancements at Bell Laboratories, where physicist Walter Shewhart developed control charts in the 1920s to monitor process variation statistically, laying the groundwork for sampling-based inspection over exhaustive checking.[2] Shewhart's work at Western Electric, a Bell Labs affiliate, shifted focus from reactive inspection to proactive statistical methods, influencing the transition toward acceptance sampling as a tool for efficient quality assurance. The catalyst for widespread adoption of acceptance sampling occurred during World War II, when statisticians Harold F. Dodge and Harry G. Romig at Bell Laboratories developed sampling plans in the 1930s and 1940s to alleviate inspection bottlenecks in high-volume munitions production, enabling faster throughput without compromising reliability.[18] These plans, initially focused on attribute inspection—classifying items as conforming or nonconforming—were designed to balance producer and consumer risks under wartime pressures.[19] In response, the U.S. 
military adopted Army Ordnance sampling tables in the early 1940s for ordnance inspection, standardizing procedures that supported massive wartime output.[20] Early acceptance sampling methods were primarily limited to attribute-based approaches, which provided binary outcomes but offered less precision for measuring process variability compared to later variables sampling plans that incorporated quantitative measurements.[4] This attribute focus suited the immediate needs of wartime inspection but highlighted the need for more sophisticated techniques in post-war industrial applications.[21]

Evolution and Key Contributors

Following World War II, the U.S. military formalized acceptance sampling procedures to ensure consistent quality in procurement, issuing MIL-STD-105A in 1950 as the first standardized tables for attribute sampling plans based on acceptable quality limits (AQL).[22] This standard, later revised through MIL-STD-105E in 1989 and superseded by the civilian ANSI/ASQ Z1.4 in 1991, provided single, double, and multiple sampling schemes for lot inspection.[22] Complementing this, MIL-STD-414 was released in 1957, offering variables sampling plans that estimate quality characteristics like mean and variance from sample data, with its civilian counterpart ANSI/ASQ Z1.9 following in 2003.[2] Key figures advanced these foundations significantly. Harold F. Dodge and Harry G. Romig, statisticians at Bell Laboratories, developed comprehensive single and double sampling inspection tables in their 1959 book Sampling Inspection Tables: Single and Double Sampling, which influenced military standards and emphasized practical tables for rectifying inspection. Walter A. Shewhart, also from Bell Labs, integrated acceptance sampling with his pioneering control charts from the 1920s, promoting a shift from inspection-only approaches to process control that complemented sampling for ongoing quality monitoring.[2] Eugene L. Grant and Richard S. Leavenworth expanded on these in their influential 1979 textbook Statistical Quality Control (4th edition), detailing economic considerations and broader applications of sampling plans within quality systems.[23] During the 1960s and 1980s, acceptance sampling evolved toward economic optimization and computational efficiency. Researchers introduced models to design plans minimizing total inspection costs, balancing sampling risks and quality protection, as seen in works on sequential and adaptive schemes. 
Computer-aided tools emerged in the late 1970s and 1980s, enabling simulation of operating characteristics and custom plan generation, reducing reliance on pre-tabulated standards. Internationally, ISO 2859 was established in 1974 (with key revisions through the 1980s, including Part 1 in 1989), harmonizing attribute sampling globally and aligning with AQL-based systems like MIL-STD-105. By the 2020s, acceptance sampling has integrated with methodologies like Six Sigma, where it supports the Measure and Analyze phases of DMAIC for lot acceptance decisions.[24] Emerging adaptive sampling approaches dynamically adjust plans based on process yield and quality loss to optimize inspection, though core statistical plans like those in ANSI/ASQ Z1.4 remain foundational.[25]

Theoretical Foundations

Statistical Rationale

Acceptance sampling provides a probabilistic foundation for inferring the quality of an entire lot based on a representative sample, allowing decisions on acceptance or rejection without inspecting every unit. This inference relies on statistical distributions that model the occurrence of defects in the sample: the hypergeometric distribution for exact calculations in finite lots sampled without replacement, the binomial distribution as an approximation when the lot size is much larger than the sample, and the Poisson distribution for cases of rare defects. In variables sampling, the normal distribution is typically used to analyze continuous measurements and estimate lot parameters like mean and variance. These distributions enable the calculation of acceptance probabilities, ensuring that sample results reliably reflect lot quality under the assumption of randomness.[8] From an economic perspective, acceptance sampling justifies partial inspection over full or no inspection by minimizing total costs in high-volume production scenarios. The total cost is formulated as the sum of inspection costs (fixed setup plus variable per-unit costs) and failure costs (penalties from accepting defective lots or rejecting good ones), with optimal plans derived to balance these through techniques like direct search optimization. This approach is particularly valuable when 100% inspection is destructive, time-consuming, or prohibitively expensive, as sampling reduces inspection efforts while maintaining quality safeguards.[26] Sampling plans incorporate risk balancing to protect both producers and consumers: the acceptable quality limit (AQL) defines the defect level at which the probability of acceptance is high (1-α, where α is the producer's risk of rejecting good lots), while the lot tolerance percent defective (LTPD) sets the threshold for low acceptance probability (β, the consumer's risk of accepting poor lots). 
These parameters ensure equitable protection, with plans tailored to specified α (often 0.05) and β (often 0.10) values.[9] Despite these strengths, acceptance sampling has limitations, including its reliance on random sampling and lot homogeneity for valid inferences; violations can lead to biased results. It is not suited for ongoing process improvement, as it only screens lots rather than addressing root causes of variation—for that, statistical process control tools like control charts are essential.[9]
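To make the relationship among the three distribution models concrete, the sketch below compares the exact hypergeometric acceptance probability with its binomial and Poisson approximations; the lot size, defect count, and plan parameters are invented for illustration.

```python
from math import comb, exp, factorial

def pa_hypergeometric(N, D, n, c):
    """Exact P(accept) when sampling n units without replacement from a
    lot of N containing D defectives; accept iff at most c defectives."""
    return sum(comb(D, k) * comb(N - D, n - k) for k in range(c + 1)) / comb(N, n)

def pa_binomial(p, n, c):
    """Binomial approximation, valid when n is small relative to N."""
    return sum(comb(n, k) * p ** k * (1 - p) ** (n - k) for k in range(c + 1))

def pa_poisson(p, n, c):
    """Poisson approximation for rare defects, with lambda = n * p."""
    lam = n * p
    return exp(-lam) * sum(lam ** k / factorial(k) for k in range(c + 1))

# Hypothetical lot: N=1000 with 20 defectives (p=2%); plan n=50, c=2.
N, D, n, c = 1000, 20, 50, 2
exact = pa_hypergeometric(N, D, n, c)
approx_bin = pa_binomial(D / N, n, c)
approx_poi = pa_poisson(D / N, n, c)
```

Because the sampling fraction n/N is only 5% here, the three values nearly coincide; the gap between the exact and approximate models widens as the sampling fraction grows.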

Operating Characteristic Curves

The operating characteristic (OC) curve is a graphical representation in acceptance sampling that plots the probability of acceptance (Pa) of a lot against the proportion of defects (p) in the lot, illustrating the sampling plan's ability to discriminate between acceptable and unacceptable quality levels.[13] This curve serves as the primary tool for evaluating the performance of a sampling plan, such as a single sampling plan defined by sample size n and acceptance number c, by showing how effectively it protects both producer and consumer interests.[8] For attribute sampling plans, the OC curve is constructed using the binomial distribution, assuming the lot size is large relative to the sample size. The probability of acceptance Pa(p) is the cumulative probability that the number of defects in the sample is at most c, given by the equation:
$$P_a(p) = \sum_{k=0}^{c} \binom{n}{k} p^k (1-p)^{n-k}$$
where $\binom{n}{k}$ is the binomial coefficient.[13] For variables sampling plans, the curve is derived from the normal distribution, where measurements of a quality characteristic are assumed to follow a normal distribution with known or estimated standard deviation; Pa is calculated as the probability that the sample mean falls within acceptance limits, often using z-scores standardized by the process standard deviation and sample size.[3] Key features of the OC curve include its steepness, which measures the plan's discriminatory power: the steeper the curve, the better it distinguishes good lots (low p) from bad ones (high p).[13] The producer's risk α is defined as 1 - Pa at the acceptable quality level (AQL), typically a small value like 0.05 indicating low chance of rejecting good lots, while the consumer's risk β is Pa at the lot tolerance percent defective (LTPD), often around 0.10 to limit acceptance of poor lots.[13] In interpretation, an ideal OC curve approaches 1 for Pa at low p (accepting good lots) and 0 at high p (rejecting bad lots); for sequential sampling plans, the OC curve is complemented by an average sample number (ASN) curve, which plots the expected sample size required as a function of p to further assess efficiency.[8]
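The steepness property can be checked numerically. The sketch below evaluates the binomial OC formula over a grid of defect rates for two hypothetical plans with the same c/n ratio; the larger plan's curve is steeper, accepting good lots more often and bad lots less often.

```python
from math import comb

def pa(p, n, c):
    """OC-curve ordinate: P(defects in sample <= c) under the binomial model."""
    return sum(comb(n, k) * p ** k * (1 - p) ** (n - k) for k in range(c + 1))

# Two hypothetical plans with the same c/n ratio; sampling more units
# steepens the OC curve and sharpens discrimination.
grid = [p / 100 for p in range(11)]        # defect rates 0% .. 10%
small_plan = [pa(p, 50, 1) for p in grid]
large_plan = [pa(p, 200, 4) for p in grid]
```

Both curves start at 1 for a defect-free lot and fall monotonically as the lot defect rate rises; an OC curve of an ideal (fully discriminating) plan would drop vertically at the quality threshold.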

Attribute Sampling Plans

Models and Assumptions

Attribute sampling plans rely on probabilistic models to determine the likelihood of accepting or rejecting a lot based on the number of defects observed in a sample. The primary models used are the binomial distribution for scenarios where the sample size is small relative to the lot size (typically n/N < 0.10), approximating independent trials with a constant defect probability p.[9] For rare defects where the defect rate p is low and the sample size n is large, the Poisson distribution serves as an approximation to the binomial, with the parameter λ = n p representing the expected number of defects.[27] When sampling without replacement from a finite lot, the hypergeometric distribution provides the exact model, accounting for the dependency introduced by the finite population size N.[9] These models operate under several key assumptions to ensure their validity. Sampling must be random to represent the lot adequately, avoiding biases that could skew defect detection.[9] The lot is assumed to be homogeneous, meaning items share similar quality characteristics without significant variation across subgroups.[9] Defects are classified dichotomously as go/no-go attributes, such as pass/fail, without intermediate gradations.[9] Additionally, the inspection outcomes for individual items are independent, implying no interaction between sampled units that could influence results.[9] Sampling plans can be single or multiple to balance inspection effort and discrimination power. In single sampling, a fixed sample size n is drawn, and the lot is accepted if the number of defects d ≤ Ac (acceptance number) or rejected otherwise. Double sampling involves an initial sample of size n1; if d1 ≤ Ac1, the lot is accepted, if d1 > Re1 (rejection number), it is rejected, and if Ac1 < d1 ≤ Re1, a second sample of size n2 is taken, with the combined defects determining acceptance or rejection. 
Defects in attribute sampling are often categorized by severity to allow tailored plans: Class A (critical defects that pose safety risks or render the item unusable), Class B (major defects affecting functionality but not safety), and Class C (minor defects impacting aesthetics or minor performance). Separate acceptance criteria are applied for each class, with stricter thresholds for critical defects to minimize risk. For instance, a single sampling plan with AQL = 1%, n = 80, and Ac = 2 (using Poisson approximation) achieves a probability of acceptance of approximately 10% (rejection ≈90%) for lots with defect rate p ≈ 6.5% (LTPD with consumer's risk β=0.10), providing strong consumer protection against poor quality. Operating characteristic curves evaluate such plans by plotting acceptance probability against p, highlighting their discriminatory performance.[20]
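The figures in the example above can be verified directly; the sketch below evaluates the Poisson approximation for the n = 80, c = 2 plan at a lot defect rate of 6.5%.

```python
from math import exp, factorial

def pa_poisson(p, n, c):
    """Poisson approximation to the acceptance probability P(d <= c)."""
    lam = n * p
    return exp(-lam) * sum(lam ** k / factorial(k) for k in range(c + 1))

# At the quoted LTPD of ~6.5% the plan accepts roughly 10% of lots,
# i.e. rejects about 90%, matching the consumer's risk beta = 0.10.
pa_at_ltpd = pa_poisson(0.065, 80, 2)
```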

Design and Implementation

The design of attribute sampling plans involves specifying key parameters to balance inspection costs and quality protection. Practitioners first select the Acceptable Quality Limit (AQL), which represents the maximum defect rate considered acceptable for the process average, and the Lot Tolerance Percent Defective (LTPD), the defect rate at which the lot is tolerated with low probability of acceptance. Risk levels are chosen accordingly, typically with a producer's risk (α) of 0.05 for the AQL and a consumer's risk (β) of 0.10 for the LTPD. Based on the lot size N, an inspection level (I, II, or III, with II being standard for general use) is selected to determine the sample size code letter from Table I of the standard. This code then indexes the sample size n and acceptance number Ac from the appropriate sampling table, assuming the binomial model for defect occurrences.[28][13] Implementation follows a structured procedure to ensure unbiased results. A random sample of size n is drawn from the lot, often using random number tables or software to avoid selection bias. Each unit is inspected for defects according to predefined criteria, and the total number of defects is counted. The lot is accepted if the number of defects is at most Ac; otherwise, it is rejected. For ongoing production streams, switching rules adjust the inspection stringency: normal inspection shifts to tightened if two out of five consecutive lots are rejected, and reverts to normal after five consecutive acceptances under tightened conditions. Reduced inspection may apply after ten consecutive acceptances under normal inspection to reduce effort when quality is stable.[28] The primary standards guiding this process are ANSI/ASQ Z1.4-2003 (with 2008 amendment and R2018 reaffirmation), which provides detailed tables for single, double, and multiple sampling plans indexed by AQL and lot size.
Its international equivalent, ISO 2859-1:1999, offers harmonized procedures for attribute inspection, ensuring global consistency in application. These standards support various schemes: single sampling requires one sample for decision-making; double sampling uses a second sample only if the first is inconclusive; and multiple sampling involves up to seven cumulative samples for finer discrimination. Curtailing inspection (stopping early if defects exceed the rejection number during sampling) can reduce time and costs, particularly in larger samples, though operating characteristic curves assume full inspection for accuracy.[28] Economic considerations focus on the Average Outgoing Quality Limit (AOQL), which bounds the worst-case outgoing quality under rectifying inspection (where rejected lots are fully inspected and defects corrected). The AOQL is computed as the maximum of the average outgoing quality (AOQ): $\text{AOQL} = \max_p \left[ p \cdot P_a(p) \cdot \frac{N - n}{N} \right]$, where p is the incoming defect rate and $P_a(p)$ is the probability of acceptance; this helps assess long-term quality performance and justify plan selection.[13]
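A quick way to see how the AOQL formula behaves is to evaluate the AOQ over a grid of incoming defect rates and take the maximum; the plan and lot size below are hypothetical.

```python
from math import comb

def accept_prob(p, n, c):
    """Binomial probability that the sample contains at most c defects."""
    return sum(comb(n, k) * p ** k * (1 - p) ** (n - k) for k in range(c + 1))

def aoq(p, n, c, N):
    """Average outgoing quality under rectifying inspection: accepted lots
    ship with their remaining defects; rejected lots are fully screened
    and so contribute none."""
    return p * accept_prob(p, n, c) * (N - n) / N

N, n, c = 5000, 50, 1                   # hypothetical plan
grid = [p / 1000 for p in range(201)]   # incoming quality 0% .. 20%
aoql = max(aoq(p, n, c, N) for p in grid)
```

The maximum occurs at an intermediate defect level (here near p ≈ 3%): very clean lots contribute few defects, and very bad lots are mostly rejected and screened, so outgoing quality is worst in between.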

Variables Sampling Plans

Approaches and Models

Variables sampling plans are designed for quality characteristics that are measured on a continuous scale, such as dimensions, weights, or concentrations, allowing for more precise statistical inference compared to discrete count-based methods.[4] These plans leverage the full information from measurements to assess lot quality, typically assuming the underlying process follows a normal distribution for the quality variable, with measurements being independent and identically distributed.[29] This normality assumption enables the use of standardized statistics to estimate conformance to specification limits, ensuring the operating characteristic (OC) curves reflect the probability of acceptance under varying quality levels.[30] The primary approaches in variables sampling distinguish between cases where the process standard deviation σ is known or unknown. When σ is known (often from historical data or process control charts) the plan employs Z-scores to standardize the sample mean relative to the target or specification limits, facilitating direct comparison against acceptance criteria without estimating variability from the current sample.[29] In contrast, when σ is unknown, it is estimated from the sample using either the sample standard deviation s or the average range R, providing unbiased estimates under normality to account for within-lot variability.[30] These approaches are formalized in standards like ANSI/ASQ Z1.9, which supersedes MIL-STD-414 and outlines procedures for both scenarios to control the risk of accepting poor-quality lots.[4] Within these approaches, two main forms guide the decision-making process: Form 1, which focuses primarily on the sample mean assuming variability is adequately captured, and Form 2, which explicitly incorporates estimates of both the mean and variability to assess overall conformance.
Form 1 uses a single acceptability constant k to evaluate the standardized distance from the sample mean to the specification limit, simplifying implementation when variability is stable and known.[29] Form 2, however, derives an estimate of the percent nonconforming by combining the standardized mean and variability measures, offering a more comprehensive evaluation suitable for processes where both location and dispersion affect quality.[30] Key parameters in these models include the quality index, often expressed as $Q = \frac{\text{USL} - \text{LSL}}{3\sigma}$ for two-sided specifications, which quantifies the process capability in terms of allowable spread relative to the tolerance width, assuming a target mean at the midpoint.[4] Acceptance decisions hinge on the sample mean $\bar{X}$ and standard deviation s (or σ), where the lot is accepted if the estimated quality index meets or exceeds a threshold tied to the acceptable quality limit (AQL). For instance, in the known-σ case, the lot is rejected if $|\bar{X} - \mu_0| > k\sigma/\sqrt{n}$, with $\mu_0$ as the target mean, k as the acceptance constant derived from AQL and lot size, and n as the sample size; this rule ensures the sample evidence aligns with the desired producer's risk.[29] The models rely on several critical assumptions to maintain validity: the process must be normally distributed, the standard deviation (known or estimated) must be unbiased and representative of lot variability, and the lot-to-lot variability should remain stable without trends or shifts during sampling.[30] Violations, such as non-normality, can distort the OC curve and increase error risks, though robustness checks are recommended in practice.[4]
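As a hedged sketch of the known-σ rejection rule just stated (the measurements, target mean, σ, and constant k are all invented for illustration):

```python
from statistics import mean

def accept_known_sigma(measurements, mu0, sigma, k):
    """Known-sigma variables plan: accept the lot when the sample mean
    lies within k * sigma / sqrt(n) of the target mean mu0."""
    n = len(measurements)
    xbar = mean(measurements)
    return abs(xbar - mu0) <= k * sigma / n ** 0.5

centered = [9.8, 10.1, 10.0, 9.9, 10.2]    # sample mean on target
shifted = [10.4, 10.6, 10.5, 10.3, 10.7]   # mean has drifted upward

ok = accept_known_sigma(centered, mu0=10.0, sigma=0.2, k=2.0)
bad = accept_known_sigma(shifted, mu0=10.0, sigma=0.2, k=2.0)
```

In an actual plan k would be read from a standard's table for the chosen AQL and sample size rather than picked by hand.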

Standard Procedures

Standard procedures for implementing variables sampling plans involve selecting a random sample of size n from the lot, computing the sample mean $\bar{X}$ and standard deviation s, and comparing these statistics to the product's specification limits using predefined tables to determine acceptability.[28] The value of n is determined from standard tables based on factors such as lot size, inspection level (general or special), and the acceptable quality limit (AQL). For known process variability, the known standard deviation σ is used (plans in Section D); for unknown variability, s (standard deviation method, Section B) or the average range R (range method, Section C) serves as the estimate. These procedures assume the underlying data follow a normal distribution, as established in the theoretical foundations of variables sampling.[31] The primary standards governing these procedures are MIL-STD-414 (cancelled in 1999; equivalent civilian standard ANSI/ASQ Z1.9-2003) and ISO 3951, which provide comprehensive tables for single and double sampling plans under normal, tightened, and reduced inspection.[28] In MIL-STD-414 and its civilian equivalent ANSI/ASQ Z1.9, tables specify sample sizes and acceptance constants such as k (for Form 1) or the maximum allowable percent nonconforming M (for Form 2), which are applied to quality indices like $Q_L = (\bar{X} - L)/s$ for the lower specification limit L and $Q_U = (U - \bar{X})/s$ for the upper limit U. ISO 3951 aligns closely with these, offering equivalent plans indexed by AQL and lot size for both known and estimated variability.[32][31] Decision rules focus on estimating the percent nonconforming from the sample statistics and accepting the lot if this estimate does not exceed the AQL, with additional checks for variability to ensure the process standard deviation remains within acceptable bounds.
For instance, in Form 2 plans, the estimated percent nonconforming is derived from the sample statistics via the symmetric beta distribution, which accounts for sampling error in s, and the lot is accepted if the estimate is less than or equal to the tabulated M.[31] Variability checks, such as verifying that s or σ aligns with historical process capability, prevent acceptance of lots with excessive spread even when the mean is centered. These rules apply to both single sampling (one sample per lot) and double sampling (a potential second sample if the first is inconclusive), with the tables providing code letters for selecting the appropriate n and criteria.[28]

Variables sampling plans offer advantages over attribute plans by leveraging measurable data to extract more information per unit inspected, typically requiring smaller sample sizes for equivalent protection (often 20-50% fewer observations) while providing insight into the process mean and variability.[28] This efficiency is particularly beneficial in continuous production where measurements are feasible, reducing inspection costs without compromising discrimination between good and poor lots. A representative example from MIL-STD-414 illustrates these procedures: for a lot size of 26-50 units at inspection level II and an AQL of 1.0% under normal inspection (Form 2, single sampling), the table specifies n = 5 and M = 3.33%. If the sample yields X̄ = 195 and s = 8.8 against an upper specification limit U = 209, the quality index Q_U = (209 − 195)/8.8 ≈ 1.59 gives an estimated percent nonconforming of 2.172%, which is below M, so the lot is accepted.[31]
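The worked example above can be reproduced in a few lines. The sketch below uses the minimum-variance unbiased estimate p̂ = I_x(a, a), the regularized incomplete beta function with a = (n − 2)/2 and x = max(0, ½ − ½·Q·√n/(n − 1)); the numerical integration is purely illustrative and is not how the standard's tables were produced:

```python
import math

def estimated_fraction_nonconforming(Q, n, steps=100_000):
    """MVU estimate underlying MIL-STD-414 Form 2 (s-method):
    p-hat = I_x(a, a), the regularized incomplete beta function with
    a = (n - 2) / 2 and x = max(0, 0.5 - 0.5 * Q * sqrt(n) / (n - 1))."""
    x = max(0.0, 0.5 - 0.5 * Q * math.sqrt(n) / (n - 1))
    if x == 0.0:
        return 0.0
    a = (n - 2) / 2
    beta_aa = math.gamma(a) ** 2 / math.gamma(2 * a)  # B(a, a)
    h = x / steps                                     # midpoint-rule integration
    area = sum(((i + 0.5) * h) ** (a - 1) * (1.0 - (i + 0.5) * h) ** (a - 1)
               for i in range(steps)) * h
    return area / beta_aa

# Worked example from the text: n = 5, xbar = 195, s = 8.8, U = 209.
Q_U = (209 - 195) / 8.8                    # quality index, about 1.59
p_hat = estimated_fraction_nonconforming(Q_U, n=5)
print(f"Q_U = {Q_U:.2f}, estimated percent nonconforming = {100 * p_hat:.2f}%")
# About 2.17%, below M = 3.33%, so the lot is accepted.
```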

Advanced Topics

Multi-Stage and Continuous Sampling

Multi-stage sampling plans build upon basic single and double approaches by incorporating successive inspection stages to reach decisions with reduced overall inspection effort. In double sampling, a second sample is drawn only if the first yields an inconclusive number of defectives, allowing early acceptance or rejection in favorable cases. Multiple sampling plans, as detailed in Dodge and Romig's tables for rectifying inspection, extend this to as many as seven stages, where cumulative defectives from incremental samples are compared against stage-specific acceptance and rejection numbers. This structure minimizes the average sample number (ASN), the expected total number of units inspected over repeated applications at a given quality level, often achieving significant reductions over single sampling.[33][34]

Sequential sampling is the most flexible multi-stage variant, inspecting units one at a time and deciding on the accumulating evidence without a predefined total sample size. Dodge and Romig incorporated sequential elements into their plans for lot-by-lot inspection, continuing sampling until a clear outcome emerges. The process relies on plotting the cumulative number of defectives d against the number of inspected units n, with parallel straight-line boundaries derived from the desired producer's and consumer's risks. The boundaries share a common slope lying between p₀ and p₁, the quality levels under the null and alternative hypotheses; sampling continues while d remains between the lower acceptance line (intercept −h₀) and the upper rejection line (intercept h₁), with acceptance if the plot crosses the lower line and rejection if it crosses the upper. This method derives from Wald's sequential probability ratio test (SPRT), which computes the likelihood ratio after each observation and stops as soon as it crosses one of two thresholds, A = (1 − β)/α or B = β/(1 − α), attaining the minimal ASN among tests with the specified error probabilities α and β.
For example, in attribute inspection for defectives, the SPRT updates the ratio Λₙ = ∏_{i=1}^{n} [p₁^{xᵢ}(1 − p₁)^{1−xᵢ}] / [p₀^{xᵢ}(1 − p₀)^{1−xᵢ}], accepting the lot if Λₙ ≤ B and rejecting it if Λₙ ≥ A.[35]

Continuous sampling plans address ongoing production streams, shifting inspection intensity dynamically to balance quality assurance and cost. The CSP-1 plan, pioneered by Dodge, begins with 100% inspection until i consecutive defect-free units are observed (typically i = 50), then transitions to inspecting a fixed fraction f (e.g., f = 1/10) of subsequent units; upon detecting a defective during sampling, it reverts to full inspection. Skip-lot sampling, an extension by Dodge, applies this logic to discrete lots, permitting inspection to be skipped for selected lots (e.g., every k-th lot) after a run of consecutive acceptances under a reference single sampling plan, thus reducing scrutiny of proven high-quality suppliers. These plans target an average outgoing quality limit (AOQL) by adjusting parameters to control long-run defectives.[36][37]

The primary advantages of multi-stage and continuous plans lie in their efficiency: ASN is typically lower than for fixed-sample equivalents at acceptable quality levels, and the SPRT's optimality in expected sample size enables quicker decisions. For instance, sequential plans can terminate after as few as 5-10 units in clear-cut cases, versus 50 or more for single sampling. However, implementation demands careful record-keeping and trained personnel, since cumulative results must be tracked across stages, increasing administrative complexity. These plans also presuppose stable incoming quality and perform suboptimally if the process drifts without recalibration.[33][35]
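The SPRT rule for attribute inspection lends itself to a direct implementation. The sketch below works in log-likelihood form; the quality levels p₀ = 0.01 and p₁ = 0.10 and the risks α = 0.05, β = 0.10 are illustrative assumptions, not values from a published plan:

```python
import math

def sprt_decision(observations, p0, p1, alpha=0.05, beta=0.10):
    """Wald SPRT for fraction defective. p0 and p1 are the quality levels
    under the null and alternative hypotheses; alpha and beta are the
    producer's and consumer's risks. Each observation is 1 for a
    defective unit and 0 for a conforming one."""
    log_a = math.log((1 - beta) / alpha)   # upper threshold: reject the lot
    log_b = math.log(beta / (1 - alpha))   # lower threshold: accept the lot
    llr = 0.0                              # cumulative log-likelihood ratio
    for i, x in enumerate(observations, start=1):
        llr += x * math.log(p1 / p0) + (1 - x) * math.log((1 - p1) / (1 - p0))
        if llr >= log_a:
            return ("reject", i)
        if llr <= log_b:
            return ("accept", i)
    return ("continue", len(observations))

# A run of conforming units from a good process terminates quickly:
print(sprt_decision([0] * 40, p0=0.01, p1=0.10))  # ("accept", 24)
```

Note how few units are needed in clear-cut cases: a stream of conforming units crosses the acceptance boundary after 24 observations here, while two early defectives would trigger rejection almost immediately.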

Modern Applications and Software

Acceptance sampling continues to play a vital role in contemporary manufacturing, particularly in electronics and pharmaceuticals, where it ensures compliance with stringent quality standards by evaluating representative samples from production lots. In the electronics sector, it is used to assess the reliability of components such as circuit boards and semiconductors, helping manufacturers detect defects early and maintain high yield rates. In pharmaceuticals, acceptance sampling supports good manufacturing practice by verifying batch uniformity and potency, reducing the risk of releasing substandard drugs to the market.[24][38][39]

Within supply chains, acceptance sampling is integral to incoming inspection, allowing organizations to verify the quality of received materials without full examination and thereby optimizing logistics and inventory management. In the food industry, it is applied to microbial sampling to determine lot acceptability from pathogen levels, balancing safety assurance with cost efficiency in the handling of perishable goods. These applications often integrate with Lean Six Sigma frameworks, where acceptance sampling complements statistical process control in driving defect reduction and process improvement, as in reinforcement learning approaches to sequential sampling that align with Lean principles.[40][41][42]

Modern adaptations incorporate advanced technologies for greater flexibility and precision. Dynamic strategies adjust acceptable quality limits (AQL) in real time based on historical quality data, minimizing inspection costs while maintaining reliability; machine learning enhances this by enabling adaptive sampling that predicts optimal sample sizes from performance trends. Blockchain integration supports lot traceability in sampling plans, allowing secure verification of product origins and quality histories across supply chains, particularly for adaptive plans handling varying distributions such as the Weibull.[43][44]

Several software tools facilitate the design, analysis, and implementation of acceptance sampling plans. Minitab offers modules for both attribute and variables sampling, including generation of operating characteristic (OC) curves to evaluate a plan's discriminating power. QI Macros, an Excel add-in, provides calculators for sample size determination and plan optimization. The R package AcceptanceSampling enables statistical visualization and assessment of single, double, or multiple sampling schemes through S4 classes. Online platforms such as acceptancesampling.com deliver web-based calculators for custom OC curves and average sample number plots, aiding quick plan prototyping.[45][46][47][48][49]

A case study in the automotive industry demonstrates the use of variables sampling plans for part dimensions to meet standards such as ISO/TS 16949. At a facility producing bearing caps, critical components requiring precise dimensional tolerances, single and double acceptance sampling were applied to evaluate lot quality, ensuring compliance with automotive quality management systems by measuring characteristics such as width and depth against specified limits. This approach reduced inspection time while achieving defect rates below 1%, highlighting the efficiency of variables plans for continuous characteristics in high-volume production.[50][51]

As of 2025, trends indicate growing integration of AI-powered 100% automated inspection in manufacturing, offering near-perfect defect detection and real-time analysis in fields such as electronics and pharmaceuticals and complementing traditional acceptance sampling wherever full inspection is feasible. AI visual systems already achieve up to 99.97% accuracy in identifying anomalies such as solder defects or contamination, enabling predictive quality control that integrates with existing supply chains.[52][53]

References
