Correlogram

In the analysis of data, a correlogram is a chart of correlation statistics. For example, in time series analysis, a plot of the sample autocorrelations $r_h$ versus $h$ (the time lags) is an autocorrelogram. If cross-correlation is plotted, the result is called a cross-correlogram.
The correlogram is a commonly used tool for checking randomness in a data set. If random, autocorrelations should be near zero for any and all time-lag separations. If non-random, then one or more of the autocorrelations will be significantly non-zero.
In addition, correlograms are used in the model identification stage for Box–Jenkins autoregressive moving average time series models. Autocorrelations should be near-zero for randomness; if the analyst does not check for randomness, then the validity of many of the statistical conclusions becomes suspect. The correlogram is an excellent way of checking for such randomness.
In multivariate analysis, correlation matrices shown as color-mapped images may also be called "correlograms" or "corrgrams".[1][2][3]
Applications
The correlogram can help provide answers to the following questions:[4]
- Are the data random?
- Is an observation related to an adjacent observation?
- Is an observation related to an observation twice-removed? (etc.)
- Is the observed time series white noise?
- Is the observed time series sinusoidal?
- Is the observed time series autoregressive?
- What is an appropriate model for the observed time series?
- Is the model $Y_i = \text{constant} + E_i$ valid and sufficient?
- Is the formula $s_{\bar{Y}} = s/\sqrt{N}$ valid?
Importance
Randomness (along with fixed model, fixed variation, and fixed distribution) is one of the four assumptions that typically underlie all measurement processes. The randomness assumption is critically important for the following three reasons:
- Most standard statistical tests depend on randomness. The validity of the test conclusions is directly linked to the validity of the randomness assumption.
- Many commonly used statistical formulae depend on the randomness assumption, the most common being the formula for determining the standard error of the sample mean:

  $$s_{\bar{Y}} = \frac{s}{\sqrt{N}}$$

  where $s$ is the standard deviation of the data. Although heavily used, the results from using this formula are of no value unless the randomness assumption holds.
- For univariate data, the default model is

  $$Y_i = \text{constant} + E_i,$$

  where $E_i$ is the error term. If the data are not random, this model is incorrect and invalid, and the estimates for the parameters (such as the constant) become nonsensical and invalid.
Estimation of autocorrelations
The autocorrelation coefficient at lag $h$ is given by

$$r_h = \frac{c_h}{c_0},$$

where $c_h$ is the autocovariance function

$$c_h = \frac{1}{N}\sum_{t=1}^{N-h}\left(Y_t - \bar{Y}\right)\left(Y_{t+h} - \bar{Y}\right)$$

and $c_0$ is the variance function

$$c_0 = \frac{1}{N}\sum_{t=1}^{N}\left(Y_t - \bar{Y}\right)^2.$$

The resulting value of $r_h$ will range between −1 and +1.
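As a concrete illustration of these formulas, here is a minimal NumPy sketch of the $1/N$ estimator; the function name sample_acf and the simulated AR(1) series are illustrative, not part of any particular library:

```python
import numpy as np

def sample_acf(y, max_lag):
    """Sample autocorrelations r_h = c_h / c_0 with the (1/N) normalization."""
    y = np.asarray(y, dtype=float)
    n = len(y)
    z = y - y.mean()                          # centered observations
    c0 = np.sum(z * z) / n                    # lag-0 autocovariance (variance)
    r = np.empty(max_lag + 1)
    for h in range(max_lag + 1):
        ch = np.sum(z[: n - h] * z[h:]) / n   # autocovariance at lag h
        r[h] = ch / c0
    return r                                  # r[0] == 1 by construction

# Example: an AR(1)-like series shows geometrically decaying r_h.
rng = np.random.default_rng(0)
y = np.zeros(200)
for t in range(1, 200):
    y[t] = 0.6 * y[t - 1] + rng.standard_normal()
print(sample_acf(y, 5).round(3))
```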
Alternate estimate
Some sources may use the following formula for the autocovariance function:

$$c_h = \frac{1}{N-h}\sum_{t=1}^{N-h}\left(Y_t - \bar{Y}\right)\left(Y_{t+h} - \bar{Y}\right)$$

Although this definition has less bias, the $(1/N)$ formulation has some desirable statistical properties and is the form most commonly used in the statistics literature. See pages 20 and 49–50 in Chatfield for details.
In contrast to the definition above, this definition allows us to compute $c_h$ in a slightly more intuitive way. Consider the sample $Y_1, \ldots, Y_N$ and the centered values $Z_t = Y_t - \bar{Y}$ for $t = 1, \ldots, N$. Then, let

$$Z = \left(Z_1, Z_2, \ldots, Z_N\right)^{\mathsf{T}}.$$

We then compute the Gram matrix $Q = Z Z^{\mathsf{T}}$. Finally, $c_h$ is computed as the sample mean of the $h$-th diagonal of $Q$. For example, the $0$-th diagonal (the main diagonal) of $Q$ has $N$ elements, and its sample mean corresponds to $c_0$. The $1$-st diagonal (to the right of the main diagonal) of $Q$ has $N-1$ elements, and its sample mean corresponds to $c_1$, and so on.
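The diagonal-averaging construction above can be written directly in code; a minimal sketch, assuming NumPy, with the helper name acov_gram chosen for illustration:

```python
import numpy as np

def acov_gram(y):
    """Autocovariances as diagonal means of the Gram matrix Q = Z Z^T.

    The mean of the h-th superdiagonal (which has N - h elements)
    reproduces the 1/(N - h) "alternate" estimator c_h from above.
    """
    z = np.asarray(y, dtype=float) - np.mean(y)   # centered sample Z
    Q = np.outer(z, z)                            # Gram matrix
    n = len(z)
    return np.array([np.diagonal(Q, offset=h).mean() for h in range(n)])
```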
Statistical inference with correlograms

In the same graph one can draw upper and lower bounds for autocorrelation with significance level $\alpha$:

$$B = \pm z_{1-\alpha/2}\,\mathrm{SE}(r_h),$$

with $r_h$ as the estimated autocorrelation at lag $h$.

If the autocorrelation is higher (lower) than this upper (lower) bound, the null hypothesis that there is no autocorrelation at and beyond a given lag is rejected at a significance level of $\alpha$. This test is an approximate one and assumes that the time series is Gaussian.

In the above, $z_{1-\alpha/2}$ is the quantile of the normal distribution; $\mathrm{SE}$ is the standard error, which can be computed by Bartlett's formula for MA($\ell$) processes:

$$\mathrm{SE}(r_1) = \frac{1}{\sqrt{N}}$$

$$\mathrm{SE}(r_h) = \sqrt{\frac{1 + 2\sum_{i=1}^{h-1} r_i^2}{N}} \quad \text{for } h > 1.$$
In the example plotted, we can reject the null hypothesis that there is no autocorrelation between time-points which are separated by lags up to 4. For most longer periods one cannot reject the null hypothesis of no autocorrelation.
Note that there are two distinct formulas for generating the confidence bands:
1. If the correlogram is being used to test for randomness (i.e., there is no time dependence in the data), the following formula is recommended:

$$\pm \frac{z_{1-\alpha/2}}{\sqrt{N}},$$

where $N$ is the sample size, $z$ is the quantile function of the standard normal distribution and $\alpha$ is the significance level. In this case, the confidence bands have fixed width that depends on the sample size.
2. Correlograms are also used in the model identification stage for fitting ARIMA models. In this case, a moving average model is assumed for the data and the following confidence bands should be generated:

$$\pm z_{1-\alpha/2}\sqrt{\frac{1}{N}\left(1 + 2\sum_{i=1}^{k} r_i^2\right)},$$

where $k$ is the lag. In this case, the confidence bands increase as the lag increases.
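The two band constructions can be sketched as follows, assuming SciPy for the normal quantile $z_{1-\alpha/2}$; the helper names randomness_band and ma_bands are illustrative:

```python
import numpy as np
from scipy.stats import norm

def randomness_band(n, alpha=0.05):
    """Fixed-width half-band z_{1-alpha/2} / sqrt(N) for the randomness test."""
    return norm.ppf(1 - alpha / 2) / np.sqrt(n)

def ma_bands(r, n, alpha=0.05):
    """Lag-dependent half-bands for an assumed moving average model:
    z_{1-alpha/2} * sqrt((1/N) * (1 + 2 * (r_1^2 + ... + r_k^2))).

    r holds sample autocorrelations with r[0] = 1 (r[0] is ignored);
    entry k-1 of the result is the half-width at lag k = 1, 2, ...
    """
    z = norm.ppf(1 - alpha / 2)
    r = np.asarray(r, dtype=float)
    cum = 1 + 2 * np.cumsum(r[1:] ** 2)
    return z * np.sqrt(cum / n)
```

Note that references differ on whether the sum in the second formula runs to $k$ or $k-1$; the sketch follows the formula as stated above.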
Software
Correlograms are available in most general purpose statistical libraries.
References
1. Friendly, Michael (19 August 2002). "Corrgrams: Exploratory displays for correlation matrices" (PDF). The American Statistician. 56 (4): 316–324. doi:10.1198/000313002533. Retrieved 19 January 2014.
2. "CRAN – Package corrgram". cran.r-project.org. 29 August 2013. Retrieved 19 January 2014.
3. "Quick-R: Correlograms". statmethods.net. Retrieved 19 January 2014.
4. "1.3.3.1. Autocorrelation Plot". www.itl.nist.gov. Retrieved 20 August 2018.
5. "Visualization § Autocorrelation plot".
Further reading
- Hanke, John E.; Reitsch, Arthur G.; Wichern, Dean W. Business Forecasting (7th ed.). Upper Saddle River, NJ: Prentice Hall.
- Box, G. E. P.; Jenkins, G. (1976). Time Series Analysis: Forecasting and Control. Holden-Day.
- Chatfield, C. (1989). The Analysis of Time Series: An Introduction (Fourth ed.). New York, NY: Chapman & Hall.
This article incorporates public domain material from the National Institute of Standards and Technology.
Fundamentals
Definition
A correlogram is a graphical representation of the autocorrelation coefficients of a time series or dataset at various lags, typically plotted as bars or lines against the lag values on the horizontal axis. This visualization displays the correlation between observations separated by varying intervals, such as time lags or spatial distances, helping to reveal patterns such as periodicity or dependence in the data.[6] The mathematical foundation of a correlogram lies in the autocorrelation function: assuming stationarity, the autocorrelation coefficient at lag $k$ is defined as

$$\rho_k = \frac{\gamma_k}{\gamma_0} = \frac{\operatorname{Cov}(x_t, x_{t+k})}{\operatorname{Var}(x_t)},$$

and the correlogram plots the sample estimates of these values for $k = 1, 2, \ldots$, with $\rho_0 = 1$ by definition. The vertical axis ranges from −1 to 1, indicating the strength and direction of linear relationships, while confidence bands—often at approximately $\pm 1.96/\sqrt{N}$ for large samples under a white noise assumption—are commonly included to provide a reference for statistical significance, though their exact form depends on the underlying model.[6]

The term "correlogram" was introduced by statistician Herman Wold in his 1938 dissertation A Study in the Analysis of Stationary Time Series.[7] Its use as a key diagnostic tool for identifying autoregressive integrated moving average (ARIMA) models was popularized by George E. P. Box and Gwilym M. Jenkins in their 1970 book Time Series Analysis: Forecasting and Control.[4]
Purpose and Importance

Correlograms, which visualize the autocorrelation function (ACF) of a time series, serve primarily to detect the presence and structure of autocorrelation, thereby revealing dependencies between observations at different lags.[8] They help identify whether a series exhibits stationarity by showing whether autocorrelations decay rapidly or persist, as slowly decaying values often indicate non-stationarity due to trends or other structural features.[9] Additionally, correlograms assess randomness in data; near-zero autocorrelations across lags, typically within significance bounds, suggest the series behaves like white noise, free of predictable patterns.[8] This diagnostic role extends to aiding model selection in time series analysis, where patterns in the plot guide the choice of appropriate autoregressive orders or seasonal components.[9]

The importance of correlograms lies in their utility for validating key assumptions in statistical modeling, particularly by diagnosing serial correlation in model residuals, which can bias parameter estimates and invalidate inference if unaddressed.[10] In econometrics, they are indispensable for ensuring residuals approximate white noise after fitting models such as regressions on economic indicators.[10] Similarly, in signal processing, correlograms provide insights into signal structure via the autocorrelation estimate, supporting spectral analysis under the Wiener-Khinchin theorem and helping mitigate issues like power leakage through windowing.[11] Their widespread adoption in fields such as environmental science underscores their value for analyzing temporal dependencies in climate or ecological data, where detecting non-random patterns informs forecasting and policy.[12]

Despite their strengths, correlograms offer primarily visual intuition and should be complemented by formal statistical tests, such as the Ljung-Box test, to confirm the significance of observed patterns and avoid misinterpretation from sampling variability.[8] For instance, in finance, correlograms frequently reveal positive short-lag autocorrelations in stock returns, signaling persistence and non-random behavior that challenges efficient-market hypotheses and influences trading strategies.
Estimation Methods

Standard Autocorrelation Estimation
The standard method for estimating autocorrelations in a correlogram involves computing the sample autocorrelation coefficients for a range of lags using the observed time series data. This approach provides the empirical basis for plotting the correlogram, capturing linear dependencies between observations separated by different time intervals.[13] The sample autocorrelation coefficient at lag $k$, denoted $r_k$, is calculated as

$$r_k = \frac{\sum_{t=1}^{N-k}\left(x_t - \bar{x}\right)\left(x_{t+k} - \bar{x}\right)}{\sum_{t=1}^{N}\left(x_t - \bar{x}\right)^2},$$

where $N$ is the sample size, $x_t$ are the time series observations, and $\bar{x}$ is the sample mean. This formula normalizes the sample autocovariance at lag $k$ by the sample variance at lag 0, yielding coefficients between −1 and 1. Lags are typically computed from $k = 1$ up to a maximum of about $N/4$ to balance reliability and avoid excessive bias from limited overlapping observations at higher lags.[13]

This estimation assumes the underlying time series is stationary, meaning its statistical properties such as mean, variance, and autocovariance structure remain constant over time. Non-stationarity can lead to spurious patterns in the estimates, distorting the correlogram.[14]

The computation proceeds in three main steps: first, center the data by subtracting the overall sample mean from each observation to remove any constant level; second, for each lag $k$, compute the autocovariance as the sum of the products of the centered values separated by $k$ periods (divided by $N$), using all available pairs; third, normalize each autocovariance by the lag-0 variance (the sum of squared deviations from the mean, divided by $N$) to obtain $r_k$. This process ensures the estimates form a valid correlation sequence suitable for correlogram construction.[13]

In finite samples, these estimates exhibit bias, tending to underestimate the magnitude of true autocorrelations (pulling them toward zero), particularly for lags that are large relative to $N$, due to the reduced number of terms in the numerator and the shared denominator. The bias diminishes as $N$ increases and remains small for lags small relative to $N$, but selecting lags beyond about $N/4$ exacerbates it, leading to less reliable higher-lag coefficients.[15]
Alternative Estimates

The unbiased sample autocovariance estimator corrects for the bias in the standard method by adjusting for the lag-dependent reduction in the number of paired observations, yielding the formula

$$\tilde{c}_k = \frac{1}{N-k}\sum_{t=1}^{N-k}\left(x_t - \bar{x}\right)\left(x_{t+k} - \bar{x}\right).$$

The resulting estimator is unbiased under the assumption of stationarity and helps mitigate the tendency of sample autocorrelations to decay too rapidly with increasing lag $k$.[15]

Positive definite corrections are applied to the estimated autocorrelation matrix to ensure it remains positive semi-definite, a necessary property for any valid autocorrelation function that prevents inconsistencies such as negative eigenvalues. These corrections typically involve tapering the estimated values at higher lags or applying smoothing techniques, such as kernel-based adjustments, to enforce the Toeplitz structure while preserving the overall shape. Such methods are particularly useful in high-dimensional settings or when raw estimates violate mathematical constraints due to sampling variability.

The prewhitening approach enhances correlogram estimation by first fitting an autoregressive (AR) model to the time series, which removes dominant low-order correlations, and then computing the autocorrelation function on the resulting residuals. This technique reduces the influence of strong serial dependence on higher-lag estimates, leading to more reliable identification of underlying patterns in the residual structure. Prewhitening is especially effective when the series exhibits significant AR components that mask subtler correlations.

The standard biased estimator is generally preferred over alternatives like the unbiased one, particularly in small samples, because it ensures the estimated autocorrelation function corresponds to a valid positive semi-definite matrix. The unbiased estimator, while correcting bias, can introduce higher variability and inconsistencies (e.g., non-monotonic decay or negative values at certain lags) in small samples and is more suitable for large $N$. Alternatives such as prewhitening or positive definite corrections are used when artifacts appear or to address specific violations of assumptions.[16]
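As a sketch of these alternatives: statsmodels exposes the $1/(N-k)$ variant via its adjusted argument, and the prewhitening step can be written in a few lines (prewhitened_acf is a hypothetical helper):

```python
import numpy as np
from statsmodels.tsa.ar_model import AutoReg
from statsmodels.tsa.stattools import acf

def prewhitened_acf(y, ar_order=1, nlags=20):
    """ACF of the residuals of a fitted AR(p) model (prewhitening sketch).

    Removing dominant low-order AR structure first makes subtler
    higher-lag patterns easier to see in the residual correlogram.
    """
    resid = AutoReg(np.asarray(y, dtype=float), lags=ar_order).fit().resid
    return acf(resid, nlags=nlags)

# The unbiased 1/(N - k) autocovariance variant discussed above is
# available directly as acf(y, nlags=20, adjusted=True).
```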
Spatial Autocorrelation Estimation

For spatial correlograms, estimation differs from time series lags and involves grouping observations by geographic distance. Common methods compute spatial autocorrelation measures, such as Moran's I or Geary's C, within distance bins or classes. For each distance class $d$, the coefficient is calculated as a normalized sum of products of deviations for pairs of points separated by approximately $d$, often weighted by a spatial kernel or inverse distance. This reveals how similarity decreases with separation, assuming isotropy. Binning reduces noise from sparse pairs at large distances, with the number of classes chosen based on data density (e.g., 10–20 bins). Non-stationarity in spatial processes can be addressed via local indicators. These methods assume independence beyond certain scales but require detrending when trends are present.[3]
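A plain-NumPy sketch of a Moran's I correlogram over distance classes, using simple binary weights per bin; the function name and binning scheme are illustrative, and dedicated packages such as PySAL provide production implementations:

```python
import numpy as np

def morans_i_correlogram(coords, z, bins):
    """Moran's I per distance class (plain-NumPy sketch).

    coords : (n, 2) array of point locations
    z      : values observed at those points
    bins   : distance-class edges, e.g. np.linspace(0, d_max, 11)
    Pairs whose separation falls inside a class get weight 1,
    all others 0 (a simple binary spatial weights scheme).
    """
    coords = np.asarray(coords, dtype=float)
    z = np.asarray(z, dtype=float) - np.mean(z)
    n = len(z)
    d = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
    s2 = np.sum(z ** 2)
    out = []
    for lo, hi in zip(bins[:-1], bins[1:]):
        w = (d > lo) & (d <= hi)              # binary weights for this class
        np.fill_diagonal(w, False)            # exclude self-pairs
        s0 = w.sum()                          # total weight in the class
        out.append(np.nan if s0 == 0 else (n / s0) * (z @ w @ z) / s2)
    return np.array(out)
```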
Interpretation and Inference

Visual Interpretation
A correlogram, or autocorrelation function (ACF) plot, visually displays the sample autocorrelations of a time series at various lags, aiding in the identification of underlying patterns and structures. The plot typically shows lag 0 at autocorrelation 1, with bars or lines representing correlations decreasing as lags increase, often bounded by confidence intervals to assess significance.

Key patterns in the correlogram reveal process characteristics: a slow decay in autocorrelations suggests long-memory processes where past values influence the series over extended periods, as seen in non-stationary data with trends.[17] In contrast, a rapid drop-off after lag 1 indicates an AR(1) model, where the autocorrelation decays exponentially but abruptly tails toward zero beyond the first lag.[17] Sinusoidal patterns in the correlogram, characterized by oscillating positive and negative correlations that dampen over lags, point to seasonality, with peaks at multiples of the seasonal period such as quarterly or monthly cycles.[17] These visual cues, derived from standard estimation methods like the sample ACF, help diagnose the time series structure without formal modeling.

For diagnostic purposes, confidence intervals are overlaid on the plot, commonly ±1.96/√n bands for a 95% interval under the white noise null hypothesis, where n is the sample size; correlations exceeding these bands suggest non-randomness and potential dependence.[18] To interpret the correlogram effectively, examine decay rates for model order selection: gradual tailing implies autoregressive components, while abrupt cut-offs suggest moving average terms. If autocorrelations fall outside the confidence bands at multiple lags, the series likely exhibits serial correlation, rejecting the white noise assumption. Common pitfalls include ignoring the lag 0 autocorrelation, which is always 1 by definition and not indicative of structure, and overinterpreting small samples where wider bands may falsely flag insignificance or vice versa.[18]
Statistical Significance Testing

Statistical significance testing in correlograms involves formal hypothesis tests to determine whether observed autocorrelations differ significantly from zero, providing quantitative validation beyond visual inspection. These tests assume the null hypothesis that the time series is white noise, meaning no autocorrelation at any lag. Under this assumption, individual sample autocorrelations $r_k$ at lag $k$ are asymptotically normally distributed with mean 0 and variance approximately $1/N$, where $N$ is the sample size; thus, the test statistic $\sqrt{N}\,r_k$ follows a standard normal distribution $N(0, 1)$. This result, a consequence of Bartlett's formula for the standard error under independence, allows for individual lag significance tests, typically using two-sided critical values at the 95% level (approximately $\pm 1.96/\sqrt{N}$) to construct confidence bands around zero.

For joint significance across multiple lags, portmanteau tests assess whether autocorrelations up to lag $m$ are collectively zero. The Box-Pierce test, introduced in 1970, computes the statistic

$$Q_{\mathrm{BP}} = N \sum_{k=1}^{m} r_k^2,$$

which under the null follows a chi-squared distribution with $m$ degrees of freedom for a raw series, or $m - p - q$ degrees of freedom when applied to residuals from an ARMA($p$, $q$) model, where $p$ and $q$ account for parameters already estimated. An improved variant, the Ljung-Box test from 1978, modifies this to

$$Q_{\mathrm{LB}} = N(N+2) \sum_{k=1}^{m} \frac{r_k^2}{N-k},$$

which provides better finite-sample performance by weighting later lags more heavily and follows $\chi^2_{m-p-q}$ under the null for ARMA residuals; this adjustment reduces bias in small samples compared to the original Box-Pierce version. Both tests are widely used to diagnose model adequacy, rejecting the null if $Q$ exceeds the critical value from the chi-squared distribution at a chosen significance level (e.g., 5%).

These portmanteau tests excel at detecting linear serial dependence in residuals but have limited power against nonlinear alternatives, such as ARCH effects or chaotic dynamics, where autocorrelations may appear insignificant despite underlying structure; for such cases, complementary tests on squared residuals are recommended.[19] Additionally, reliable inference requires a sufficiently large sample size to approximate the asymptotic distributions and control Type I error rates, as smaller samples can inflate test sizes (e.g., actual rejection rates exceeding the nominal 5% level by up to 1.5 percentage points).[19]
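Both statistics are exposed by statsmodels; a short example under the white-noise null (the simulated series is illustrative):

```python
import numpy as np
from statsmodels.stats.diagnostic import acorr_ljungbox

rng = np.random.default_rng(1)
white = rng.standard_normal(500)            # H0 should not be rejected here

# Joint test that autocorrelations at lags 1..10 are zero.
# boxpierce=True adds the original 1970 Box-Pierce statistic;
# for ARMA(p, q) residuals, pass model_df=p + q to reduce the
# chi-squared degrees of freedom accordingly.
table = acorr_ljungbox(white, lags=10, boxpierce=True)
print(table[["lb_stat", "lb_pvalue", "bp_stat", "bp_pvalue"]])
```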
Applications

Time Series Analysis
In time series analysis, correlograms play a central role in the identification stage of ARIMA modeling, where the autocorrelation function (ACF) plot helps determine the moving average (MA) order by examining the decay pattern of autocorrelations, while the partial autocorrelation function (PACF) plot identifies the autoregressive (AR) order through a sharp cutoff after a certain lag. Specifically, for an AR process of order p, the ACF decays gradually, but the PACF cuts off after lag p; conversely, for a pure MA process of order q, the ACF cuts off after lag q, while the PACF decays gradually. This dual examination of correlograms, as outlined in the Box-Jenkins methodology, guides the selection of appropriate p, d, and q parameters for ARIMA(p, d, q) models by revealing the underlying temporal dependencies in univariate sequential data.

Correlograms also serve as diagnostic tools for assessing stationarity in time series data prior to modeling. A slowly decaying ACF in the correlogram indicates non-stationarity, often due to trends or unit roots, suggesting the need for differencing (increasing the integration order d) to achieve stationarity.[20] Once a model is fitted, correlograms of the residuals are inspected to confirm that the residual ACF decays rapidly to zero, verifying that no remaining autocorrelation violates model assumptions.[20] In forecasting applications, correlograms validate ARMA models by ensuring residuals resemble white noise, with insignificant autocorrelations beyond lag zero, thereby confirming the adequacy of the fitted model for reliable predictions.
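As an illustration of this identification step, the statsmodels plotting helpers produce the paired ACF/PACF correlograms; the simulated AR(2) series below is a made-up example:

```python
import numpy as np
import matplotlib.pyplot as plt
from statsmodels.graphics.tsaplots import plot_acf, plot_pacf

# Simulate an AR(2) series: its PACF should cut off after lag 2
# while its ACF tails off gradually.
rng = np.random.default_rng(2)
y = np.zeros(400)
for t in range(2, 400):
    y[t] = 0.5 * y[t - 1] + 0.3 * y[t - 2] + rng.standard_normal()

fig, axes = plt.subplots(1, 2, figsize=(10, 3))
plot_acf(y, lags=24, ax=axes[0])    # gradual decay -> AR component
plot_pacf(y, lags=24, ax=axes[1])   # cutoff after lag 2 -> AR(2)
plt.show()
```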
Spatial and Multivariate Analysis

Spatial correlograms extend the concept of autocorrelation analysis to geographic data, plotting measures such as Moran's I or Geary's C against distance lags to quantify spatial dependence in geographic information systems (GIS). Moran's I assesses global spatial autocorrelation by comparing observed values with expected values under spatial randomness, yielding values between −1 and 1, where positive values indicate clustering and negative values suggest dispersion. Geary's C, conversely, focuses on local differences, with values less than 1 indicating positive autocorrelation and values greater than 1 indicating negative autocorrelation. These metrics are computed for successive distance classes, forming a correlogram that reveals how similarity decays with separation and aids detection of spatial patterns in areal or point data.[21][22]

In multivariate analysis, correlograms visualize pairwise correlations among multiple variables through heatmaps or scatterplot matrices, often employing Pearson's r to measure linear associations. These displays arrange correlations in a symmetric matrix, with color intensity or size encoding strength and sign, facilitating identification of variable interdependencies. Hierarchical clustering may reorder rows and columns to group highly correlated variables, enhancing pattern recognition in high-dimensional datasets. Such visualizations are particularly useful for exploratory analysis, revealing clusters of related features without assuming temporal ordering.[23][24]

Applications of spatial correlograms abound in ecology, where they model species distributions by detecting autocorrelation in abundance or presence-absence data across landscapes, informing habitat suitability and biodiversity assessments. In finance, multivariate correlograms analyze asset correlations to optimize portfolios, capturing how returns covary under market conditions and aiding risk diversification strategies. Variograms serve as spatial analogs to correlograms, estimating dissimilarity (half the mean squared difference) as a function of distance rather than correlation, providing complementary insights into spatial structure for geostatistical interpolation.[25][26][27]

Unlike temporal correlograms, which rely on time lags and assume stationarity, spatial versions use geographic distance for lags and accommodate non-stationarity through local indicators of spatial association, enabling analysis of irregular or anisotropic patterns in non-sequential data.[28][29]
Related Techniques

Partial Autocorrelation Function
The partial autocorrelation function (PACF) at lag $k$ for a stationary time series quantifies the correlation between $x_t$ and $x_{t-k}$ after adjusting for the linear effects of the intermediate lags from 1 to $k-1$. This adjustment isolates the direct dependence between the two observations, excluding indirect influences propagated through shorter lags. Unlike the autocorrelation function, which captures both direct and indirect correlations, the PACF provides a cleaner measure of the unique contribution at each lag, making it essential for model identification in autoregressive processes.

Estimation of the PACF typically involves solving for the autoregressive coefficients in an AR($k$) model fitted to the time series, using the Yule-Walker equations that link the sample autocorrelations to these parameters. For the first lag, $\phi_{11} = r_1$, where $r_1$ is the sample autocorrelation at lag 1. Higher-order partial autocorrelations are derived recursively from the Yule-Walker system, which takes the form

$$\mathbf{P}_k \boldsymbol{\phi}_k = \boldsymbol{\rho}_k,$$

where $\boldsymbol{\rho}_k$ is the vector of autocorrelations up to lag $k$, $\boldsymbol{\phi}_k$ contains the AR coefficients, and $\mathbf{P}_k$ is the autocorrelation matrix; the PACF value at lag $k$ is the last coefficient $\phi_{kk}$.[31] The Durbin-Levinson algorithm offers an efficient recursive method to compute these coefficients, avoiding the full inversion of the Toeplitz matrix in the Yule-Walker equations and achieving quadratic complexity for sequential lags.[32]

In interpretation, the PACF plot for an AR($p$) process exhibits a sharp cutoff after lag $p$, with subsequent values near zero, indicating no further direct autocorrelation. This behavior contrasts with moving average processes, where the PACF decays gradually. The PACF is jointly examined with the autocorrelation function in the Box-Jenkins methodology to identify the orders of ARMA models, where significant spikes in the PACF suggest the autoregressive order. Sample PACF values are assessed for significance using approximate confidence bands of $\pm 1.96/\sqrt{N}$, where $N$ is the sample size, to distinguish meaningful direct dependencies from noise.[33]
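A compact NumPy sketch of the Durbin-Levinson recursion, assuming a vector of sample autocorrelations with r[0] = 1 (for instance from a sample ACF routine); the function name is illustrative:

```python
import numpy as np

def pacf_durbin_levinson(r, max_lag):
    """Partial autocorrelations via the Durbin-Levinson recursion.

    r : sample autocorrelations with r[0] = 1.
    Returns phi_kk for k = 1..max_lag without inverting the Toeplitz
    autocorrelation matrix in the Yule-Walker equations.
    """
    r = np.asarray(r, dtype=float)
    phi = np.zeros((max_lag + 1, max_lag + 1))
    pacf = np.zeros(max_lag + 1)
    phi[1, 1] = pacf[1] = r[1]                        # phi_11 = r_1
    for k in range(2, max_lag + 1):
        num = r[k] - np.dot(phi[k - 1, 1:k], r[k - 1:0:-1])
        den = 1.0 - np.dot(phi[k - 1, 1:k], r[1:k])
        phi[k, k] = pacf[k] = num / den
        # Update the intermediate AR(k) coefficients.
        phi[k, 1:k] = phi[k - 1, 1:k] - phi[k, k] * phi[k - 1, k - 1:0:-1]
    return pacf[1:]
```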
Cross-Correlogram

The cross-correlogram visualizes the cross-correlation function between two distinct time series, providing insights into their temporal interdependencies. For two jointly stationary time series $x_t$ and $y_t$, the cross-correlation coefficient at lag $k$ is defined as

$$\rho_{xy}(k) = \frac{\gamma_{xy}(k)}{\sqrt{\gamma_{xx}(0)\,\gamma_{yy}(0)}},$$

where $\gamma_{xy}(k)$ is the cross-covariance function and $\gamma_{xx}(0)$ and $\gamma_{yy}(0)$ are the variances of $x_t$ and $y_t$, respectively.[34] This normalized measure quantifies the linear association between observations of $x$ and $y$ shifted by $k$ time units.

The cross-correlogram is constructed by plotting the estimated cross-correlation coefficients against a range of lags $k$, typically from negative values (where $y$ leads $x$) to positive values (where $x$ leads $y$). For stationary series, the function satisfies the symmetry $\rho_{xy}(k) = \rho_{yx}(-k)$, allowing the plot to reveal directional relationships around lag zero.[34] Sample estimates replace the population moments in the definition above with their sample counterparts, and confidence bands at approximately $\pm 1.96/\sqrt{N}$ identify significant lags under the null of no correlation.[34]

Cross-correlograms are instrumental in detecting lead-lag dynamics, such as in Granger causality testing, where peaks at specific lags indicate that one series may contain information predictive of the other beyond its own past.[35] They also support multivariate ARIMA modeling by helping specify the order of cross-lagged terms in vector autoregressive moving average (VARMA) structures, capturing how shocks in one series propagate to another. To mitigate artifacts from serial correlation within each series, prewhitening is recommended prior to cross-correlation computation; this involves fitting univariate ARIMA models to $x_t$ and $y_t$ separately, then applying their inverse filters to produce white-noise innovations, and finally calculating the cross-correlations on these filtered residuals.[36] This technique, central to the Box-Jenkins approach for transfer function identification, isolates genuine inter-series effects and prevents overestimation of dependencies.
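A rough sketch of this prewhitening workflow using statsmodels; the AR order, the helper names, and the filtering details are simplifications of the full Box-Jenkins transfer-function procedure:

```python
import numpy as np
from statsmodels.tsa.ar_model import AutoReg
from statsmodels.tsa.stattools import ccf

def prewhitened_ccf(x, y, ar_order=1, nlags=20):
    """Cross-correlations of AR-filtered ("prewhitened") series.

    An AR(p) model is fitted to x and the same filter is applied to
    both series, so within-series autocorrelation does not masquerade
    as a lead-lag relationship between them.
    """
    fit = AutoReg(np.asarray(x, dtype=float), lags=ar_order).fit()
    a = np.r_[1.0, -fit.params[1:]]            # filter (1 - phi_1 B - ... - phi_p B^p)

    def filt(s):
        s = np.asarray(s, dtype=float)
        out = np.convolve(s, a, mode="full")[: len(s)]
        return out[ar_order:]                  # drop start-up values

    return ccf(filt(x), filt(y))[: nlags + 1]  # lags 0, 1, ..., nlags
```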
Software Tools

Programming Languages and Packages
In the R programming language, the base stats package provides the acf() function for estimating and plotting the autocorrelation function (ACF), which generates standard correlograms for univariate time series data.[37] The plot.acf() method, also in stats, visualizes these estimates as bar plots with confidence intervals, facilitating quick inspection of serial dependencies.[38] For multivariate analysis, the corrplot package offers functions like corrplot() to create heatmap-based correlograms from correlation matrices, supporting reordering and significance testing to reveal patterns in variable relationships.[39]
Python supports correlogram generation through the statsmodels library, where statsmodels.tsa.stattools.acf() computes autocorrelation estimates for time series, often paired with matplotlib.pyplot for line or bar plots of the results. In multivariate settings, seaborn.heatmap() visualizes correlation matrices as color-encoded heatmaps, allowing customization of annotations and clustering to highlight inter-variable correlations.[40]
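A brief, self-contained sketch of the Python workflow just described; the simulated series and DataFrame are illustrative:

```python
import numpy as np
import pandas as pd
import seaborn as sns
import matplotlib.pyplot as plt
from statsmodels.tsa.stattools import acf

rng = np.random.default_rng(3)

# Univariate correlogram: numeric ACF values plus a stem plot.
y = rng.standard_normal(300).cumsum()       # random walk: expect slow ACF decay
r = acf(y, nlags=24)
plt.figure()
plt.stem(range(len(r)), r)
plt.xlabel("lag")
plt.ylabel("autocorrelation")

# Multivariate "corrgram": correlation matrix rendered as a heatmap.
df = pd.DataFrame(rng.standard_normal((100, 4)), columns=list("abcd"))
plt.figure()
sns.heatmap(df.corr(), annot=True, vmin=-1, vmax=1, cmap="coolwarm")
plt.show()
```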
The Julia language includes autocorrelation capabilities in the StatsBase.jl package, with the autocor() function estimating lag-specific autocorrelations for sequential data.[41] Visualization is handled by Plots.jl, which supports plotting these values via generic plot() calls, enabling flexible backends like GR or Plotly for interactive correlograms.
As of 2025, Python libraries have seen enhancements in GPU acceleration for large-scale correlogram computations; for instance, CuPy enables GPU-based autocorrelation via NumPy-compatible operations, achieving significant speedups for high-dimensional time series.[42] Additionally, integration with machine learning frameworks like TensorFlow Probability allows autocorrelation estimation through tfp.stats.auto_correlation(), supporting dynamic workflows in probabilistic models for time-varying dependencies.[43]
Specialized Software
Minitab Statistical Software includes a dedicated time series module that provides correlogram diagnostics as part of ARIMA modeling, displaying autocorrelation function (ACF) and partial autocorrelation function (PACF) plots of residuals with confidence bands representing two standard errors around zero to assess model adequacy.[44] The software offers enhanced visualization tools for correlograms, enabling interactive correlation plots to identify patterns in multivariate data efficiently, particularly useful for large datasets with up to thousands of observations across multiple variables. As of 2025, these features are available in the latest releases, such as Minitab 22.[45]

MATLAB supports correlogram analysis through the xcorr function in the Signal Processing Toolbox, which computes and plots the cross-correlation sequence between two input signals, facilitating the examination of lagged relationships in time series data.[46] Additionally, the Econometrics Toolbox offers the crosscorr function for sample cross-correlation of univariate time series, including options for lag specification and statistical inference such as confidence intervals to test for significant correlations at specific lags.[47]
Golden Software's Surfer provides specialized tools for spatial correlogram analysis in geostatistics, with the Grid Correlogram operation assessing spatial patterns and correlation structures across gridded data by calculating how grid values covary at varying distances.[48] This functionality integrates with variogram modeling to incorporate anisotropy measures, allowing users to account for directional variations in spatial continuity during kriging interpolation for mapping applications like environmental or geological data visualization.[49]
SAS offers correlogram capabilities via PROC ARIMA in the SAS/ETS module, where the PLOT option generates ACF plots of residuals and cross-correlation functions for input series, aiding in model identification and diagnostic checking for time series forecasting. For multivariate analysis, PROC CORR computes Pearson correlation coefficients across multiple variables and supports plotting options through integrated ODS graphics, enabling the visualization of correlation matrices as correlograms to explore inter-variable relationships in datasets.
References

- https://www.itl.nist.gov/div898/handbook/pmc/section4/pmc446.htm