Log-normal distribution

from Wikipedia
[Figures: plot of the log-normal PDF (identical parameter $\mu$ but differing parameters $\sigma$) and plot of the log-normal CDF.]

  • Notation: $\operatorname{Lognormal}(\mu, \sigma^2)$
  • Parameters: $\mu \in (-\infty, +\infty)$ (logarithm of location), $\sigma > 0$ (logarithm of scale)
  • Support: $x \in (0, +\infty)$
  • PDF: $\dfrac{1}{x\sigma\sqrt{2\pi}} \exp\left(-\dfrac{(\ln x - \mu)^2}{2\sigma^2}\right)$
  • CDF: $\Phi\!\left(\dfrac{\ln x - \mu}{\sigma}\right)$
  • Quantile: $\exp\left(\mu + \sigma\,\Phi^{-1}(p)\right)$
  • Mean: $\exp\left(\mu + \dfrac{\sigma^2}{2}\right)$
  • Median: $\exp(\mu)$
  • Mode: $\exp(\mu - \sigma^2)$
  • Variance: $\left(e^{\sigma^2} - 1\right) e^{2\mu + \sigma^2}$
  • Skewness: $\left(e^{\sigma^2} + 2\right)\sqrt{e^{\sigma^2} - 1}$
  • Excess kurtosis: $e^{4\sigma^2} + 2e^{3\sigma^2} + 3e^{2\sigma^2} - 6$
  • Entropy: $\ln\!\left(\sigma e^{\mu + 1/2}\sqrt{2\pi}\right)$
  • MGF: defined only for numbers with a non-positive real part, see text
  • CF: representation is asymptotically divergent, but adequate for most numerical purposes
  • Fisher information: $\begin{pmatrix} 1/\sigma^2 & 0 \\ 0 & 2/\sigma^2 \end{pmatrix}$
  • Method of moments: $\mu = \ln\left(\dfrac{\operatorname{E}[X]^2}{\sqrt{\operatorname{Var}[X] + \operatorname{E}[X]^2}}\right)$, $\sigma^2 = \ln\left(1 + \dfrac{\operatorname{Var}[X]}{\operatorname{E}[X]^2}\right)$
  • Expected shortfall: see [1]

In probability theory, a log-normal (or lognormal) distribution is a continuous probability distribution of a random variable whose logarithm is normally distributed. Thus, if the random variable X is log-normally distributed, then Y = ln X has a normal distribution.[2][3] Equivalently, if Y has a normal distribution, then the exponential function of Y, X = exp(Y), has a log-normal distribution. A random variable which is log-normally distributed takes only positive real values. It is a convenient and useful model for measurements in exact and engineering sciences, as well as medicine, economics and other topics (e.g., energies, concentrations, lengths, prices of financial instruments, and other metrics).

The distribution is occasionally referred to as the Galton distribution or Galton's distribution, after Francis Galton.[4] The log-normal distribution has also been associated with other names, such as McAlister, Gibrat and Cobb–Douglas.[4]

A log-normal process is the statistical realization of the multiplicative product of many independent random variables, each of which is positive. This is justified by considering the central limit theorem in the log domain (sometimes called Gibrat's law). The log-normal distribution is the maximum entropy probability distribution for a random variate X—for which the mean and variance of ln X are specified.[5]

Definitions

Generation and parameters

Let $Z$ be a standard normal variable, and let $\mu$ and $\sigma$ be two real numbers, with $\sigma > 0$. Then, the distribution of the random variable

$$X = e^{\mu + \sigma Z}$$

is called the log-normal distribution with parameters $\mu$ and $\sigma$. These are the expected value (or mean) and standard deviation of the variable's natural logarithm $\ln X$, not the expectation and standard deviation of $X$ itself.
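A minimal sketch of this definition in Python (NumPy assumed; the parameter values are illustrative), confirming that $\mu$ and $\sigma$ describe $\ln X$ rather than $X$ itself:

```python
import numpy as np

rng = np.random.default_rng(0)
mu, sigma = 1.0, 0.5  # illustrative parameters

# X = exp(mu + sigma * Z) with Z standard normal is log-normal by definition
z = rng.standard_normal(100_000)
x = np.exp(mu + sigma * z)

# The mean and std of ln(X) recover mu and sigma, not those of X itself
print(np.log(x).mean(), np.log(x).std())  # ~ 1.0, ~ 0.5
print(x.mean())                           # ~ exp(mu + sigma^2/2), not exp(mu)
```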

Relation between normal and log-normal distribution. If $Y = \mu + \sigma Z$ is normally distributed, then $X = e^{Y}$ is log-normally distributed.

This relationship is true regardless of the base of the logarithmic or exponential function: if $\log_a X$ is normally distributed, then so is $\log_b X$ for any two positive numbers $a, b \neq 1$. Likewise, if $e^Y$ is log-normally distributed, then so is $a^Y$, where $0 < a \neq 1$.

In order to produce a distribution with desired mean $\mu_X$ and variance $\sigma_X^2$, one uses $\mu = \ln\left(\dfrac{\mu_X^2}{\sqrt{\mu_X^2 + \sigma_X^2}}\right)$ and $\sigma^2 = \ln\left(1 + \dfrac{\sigma_X^2}{\mu_X^2}\right)$.
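A sketch of this conversion in Python; the helper name `lognormal_params_from_mean_var` and the target values are illustrative, not from the source:

```python
import numpy as np

def lognormal_params_from_mean_var(mean_x, var_x):
    """Return (mu, sigma) of ln X giving the desired arithmetic
    mean and variance of X, per the formulas above."""
    sigma2 = np.log(1.0 + var_x / mean_x**2)
    mu = np.log(mean_x**2 / np.sqrt(mean_x**2 + var_x))
    return mu, np.sqrt(sigma2)

mu, sigma = lognormal_params_from_mean_var(2.0, 1.5)
# Check: reconstruct the arithmetic mean and variance from (mu, sigma)
m = np.exp(mu + sigma**2 / 2)
v = (np.exp(sigma**2) - 1) * np.exp(2 * mu + sigma**2)
print(m, v)  # ~ 2.0, ~ 1.5
```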

Alternatively, the "multiplicative" or "geometric" parameters and can be used. They have a more direct interpretation: is the median of the distribution, and is useful for determining "scatter" intervals, see below.

Probability density function

A positive random variable $X$ is log-normally distributed (i.e., $X \sim \operatorname{Lognormal}(\mu, \sigma^2)$) if the natural logarithm of $X$ is normally distributed with mean $\mu$ and variance $\sigma^2$:

$$\ln X \sim \mathcal{N}(\mu, \sigma^2).$$

Let $\Phi$ and $\varphi$ be respectively the cumulative distribution function and the probability density function of the standard normal distribution; then we have that[2][4] the probability density function of the log-normal distribution is given by:

$$f_X(x) = \frac{1}{x\sigma}\,\varphi\!\left(\frac{\ln x - \mu}{\sigma}\right) = \frac{1}{x\sigma\sqrt{2\pi}} \exp\left(-\frac{(\ln x - \mu)^2}{2\sigma^2}\right), \qquad x > 0.$$
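As a quick numerical check (a sketch assuming NumPy and SciPy; parameter values are illustrative), the density above matches SciPy's `lognorm`, whose convention is shape `s = sigma` and `scale = exp(mu)`:

```python
import numpy as np
from scipy.stats import lognorm

mu, sigma = 0.0, 1.0  # illustrative values

def lognormal_pdf(x, mu, sigma):
    # Density as given above, for x > 0
    return np.exp(-(np.log(x) - mu)**2 / (2 * sigma**2)) / (x * sigma * np.sqrt(2 * np.pi))

x = np.linspace(0.1, 5, 5)
print(lognormal_pdf(x, mu, sigma))
print(lognorm.pdf(x, s=sigma, scale=np.exp(mu)))  # should match
```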

Cumulative distribution function

The cumulative distribution function is

$$F_X(x) = \Phi\!\left(\frac{\ln x - \mu}{\sigma}\right),$$

where $\Phi$ is the cumulative distribution function of the standard normal distribution (i.e., $\mathcal{N}(0, 1)$).

This may also be expressed as follows:[2]

$$F_X(x) = \frac{1}{2}\operatorname{erfc}\!\left(-\frac{\ln x - \mu}{\sigma\sqrt{2}}\right) = \frac{1}{2}\left[1 + \operatorname{erf}\!\left(\frac{\ln x - \mu}{\sigma\sqrt{2}}\right)\right],$$

where erfc is the complementary error function.
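A sketch verifying the equivalence of the $\Phi$ and erfc forms (NumPy/SciPy assumed; the evaluation points are illustrative):

```python
import numpy as np
from scipy.special import erfc
from scipy.stats import norm

mu, sigma = 0.5, 0.8  # illustrative values
x = np.array([0.5, 1.0, 2.0, 5.0])

cdf_phi = norm.cdf((np.log(x) - mu) / sigma)                     # Phi form
cdf_erfc = 0.5 * erfc(-(np.log(x) - mu) / (sigma * np.sqrt(2)))  # erfc form
print(np.allclose(cdf_phi, cdf_erfc))  # True
```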

Multivariate log-normal

If $\boldsymbol{X} \sim \mathcal{N}(\boldsymbol{\mu}, \boldsymbol{\Sigma})$ is a multivariate normal distribution, then $Y_i = \exp(X_i)$ has a multivariate log-normal distribution.[6][7] The exponential is applied element-wise to the random vector $\boldsymbol{X}$. The mean of $\boldsymbol{Y}$ is

$$\operatorname{E}[Y_i] = e^{\mu_i + \frac{1}{2}\Sigma_{ii}},$$

and its covariance matrix is

$$\operatorname{Var}[\boldsymbol{Y}]_{ij} = e^{\mu_i + \mu_j + \frac{1}{2}(\Sigma_{ii} + \Sigma_{jj})}\left(e^{\Sigma_{ij}} - 1\right).$$
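A Monte Carlo sketch of these moment formulas (NumPy assumed; the mean vector and covariance matrix are illustrative):

```python
import numpy as np

rng = np.random.default_rng(1)
mu = np.array([0.0, 1.0])                   # illustrative mean vector
Sigma = np.array([[0.5, 0.2], [0.2, 0.3]])  # illustrative covariance

X = rng.multivariate_normal(mu, Sigma, size=500_000)
Y = np.exp(X)  # element-wise exponential

mean_theory = np.exp(mu + 0.5 * np.diag(Sigma))
i, j = 0, 1
cov_theory = np.exp(mu[i] + mu[j] + 0.5 * (Sigma[i, i] + Sigma[j, j])) * (np.exp(Sigma[i, j]) - 1)

print(Y.mean(axis=0), mean_theory)    # close
print(np.cov(Y.T)[i, j], cov_theory)  # close
```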

Since the multivariate log-normal distribution is not widely used, the rest of this entry only deals with the univariate distribution.

Characteristic function and moment generating function

All moments of the log-normal distribution exist and

$$\operatorname{E}[X^n] = e^{n\mu + n^2\sigma^2/2}.$$

This can be derived by letting $z = \frac{\ln x - (\mu + n\sigma^2)}{\sigma}$ within the integral. However, the log-normal distribution is not determined by its moments.[8] This implies that it cannot have a defined moment generating function in a neighborhood of zero.[9] Indeed, the expected value $\operatorname{E}[e^{tX}]$ is not defined for any positive value of the argument $t$, since the defining integral diverges.

The characteristic function $\operatorname{E}[e^{itX}]$ is defined for real values of $t$, but is not defined for any complex value of $t$ that has a negative imaginary part, and hence the characteristic function is not analytic at the origin. Consequently, the characteristic function of the log-normal distribution cannot be represented as an infinite convergent series.[10] In particular, its formal Taylor series diverges:

$$\sum_{n=0}^{\infty} \frac{(it)^n}{n!}\, e^{n\mu + n^2\sigma^2/2}.$$

However, a number of alternative divergent series representations have been obtained.[10][11][12][13]

A closed-form formula for the characteristic function $\varphi(t)$ with $t$ in the domain of convergence is not known. A relatively simple approximating formula is available in closed form, and is given by[14]

$$\varphi(t) \approx \frac{\exp\!\left(-\dfrac{W^2(-it\sigma^2 e^{\mu}) + 2W(-it\sigma^2 e^{\mu})}{2\sigma^2}\right)}{\sqrt{1 + W(-it\sigma^2 e^{\mu})}},$$

where $W$ is the Lambert W function. This approximation is derived via an asymptotic method, but it stays sharp all over the domain of convergence of $\varphi$.
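A sketch comparing this Lambert-W approximation against direct quadrature, written for the moment generating function $\operatorname{E}[e^{\theta X}]$ at real $\theta \leq 0$ (where it is defined, consistent with the non-positive real part restriction above); NumPy/SciPy assumed, parameter values illustrative:

```python
import numpy as np
from scipy.special import lambertw
from scipy.integrate import quad

mu, sigma = 0.0, 1.0  # illustrative values

def mgf_exact(theta):
    # E[exp(theta * X)] by numerical integration over Z ~ N(0, 1); theta <= 0
    f = lambda z: np.exp(theta * np.exp(mu + sigma * z)) * np.exp(-z**2 / 2) / np.sqrt(2 * np.pi)
    return quad(f, -np.inf, np.inf)[0]

def mgf_lambertw(theta):
    # Closed-form approximation per the formula above, with theta in place of it
    w = lambertw(-theta * sigma**2 * np.exp(mu)).real  # principal real branch for theta <= 0
    return np.exp(-(w**2 + 2 * w) / (2 * sigma**2)) / np.sqrt(1 + w)

for theta in [-0.1, -1.0, -10.0]:
    print(theta, mgf_exact(theta), mgf_lambertw(theta))  # pairs agree closely
```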

Properties

a. $y$ is a log-normal variable; the probability of $y$ lying in an arbitrary domain is computed by transforming to the normal variable, then integrating its density over the corresponding domain (blue regions), using the numerical method of ray-tracing.[15] b & c. The pdf and cdf of a function of the log-normal variable can also be computed in this way.

Probability in different domains

The probability content of a log-normal distribution in any arbitrary domain can be computed to desired precision by first transforming the variable to normal, then numerically integrating using the ray-trace method.[15] (Matlab code)

Probabilities of functions of a log-normal variable

Since the probability content of a log-normal distribution can be computed in any domain, the cdf (and consequently the pdf and inverse cdf) of any function of a log-normal variable can also be computed.[15] (Matlab code)

Geometric or multiplicative moments

The geometric or multiplicative mean of the log-normal distribution is $\operatorname{GM}[X] = e^{\mu} = \mu^*$. It equals the median. The geometric or multiplicative standard deviation is $\operatorname{GSD}[X] = e^{\sigma} = \sigma^*$.[16][17]

By analogy with the arithmetic statistics, one can define a geometric variance, $\operatorname{GVar}[X] = e^{\sigma^2}$, and a geometric coefficient of variation,[16] $\operatorname{GCV}[X] = e^{\sigma} - 1$, has been proposed. This term was intended to be analogous to the coefficient of variation, for describing multiplicative variation in log-normal data, but this definition of GCV has no theoretical basis as an estimate of $\operatorname{CV}[X]$ itself (see also Coefficient of variation).

Note that the geometric mean is smaller than the arithmetic mean. This is due to the AM–GM inequality and is a consequence of the logarithm being a concave function. In fact,[18]

$$\operatorname{E}[X] = e^{\mu + \frac{\sigma^2}{2}} = e^{\mu}\cdot\sqrt{e^{\sigma^2}} = \operatorname{GM}[X]\cdot\sqrt{\operatorname{GVar}[X]}.$$

In finance, the term $e^{-\frac{\sigma^2}{2}}$ is sometimes interpreted as a convexity correction. From the point of view of stochastic calculus, this is the same correction term as in Itō's lemma for geometric Brownian motion.
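A short Monte Carlo sketch of the identity $\operatorname{E}[X] = \operatorname{GM}[X]\cdot\sqrt{\operatorname{GVar}[X]}$ (NumPy assumed; parameter values illustrative):

```python
import numpy as np

rng = np.random.default_rng(2)
mu, sigma = 0.3, 0.9  # illustrative values
x = rng.lognormal(mu, sigma, 1_000_000)

gm = np.exp(np.log(x).mean())        # geometric mean ~ exp(mu)
gvar = np.exp(np.log(x).var())       # geometric variance ~ exp(sigma^2)
print(x.mean(), gm * np.sqrt(gvar))  # both ~ exp(mu + sigma^2/2)
```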

Arithmetic moments

For any real or complex number $n$, the $n$-th moment of a log-normally distributed variable $X$ is given by[4]

$$\operatorname{E}[X^n] = e^{n\mu + n^2\sigma^2/2}.$$

Specifically, the arithmetic mean, expected square, arithmetic variance, and arithmetic standard deviation of a log-normally distributed variable $X$ are respectively given by:[2]

$$\operatorname{E}[X] = e^{\mu + \sigma^2/2},$$
$$\operatorname{E}[X^2] = e^{2\mu + 2\sigma^2},$$
$$\operatorname{Var}[X] = \left(e^{\sigma^2} - 1\right)e^{2\mu + \sigma^2},$$
$$\operatorname{SD}[X] = \sqrt{\operatorname{Var}[X]} = e^{\mu + \sigma^2/2}\sqrt{e^{\sigma^2} - 1}.$$

The arithmetic coefficient of variation $\operatorname{CV}[X]$ is the ratio $\operatorname{SD}[X]/\operatorname{E}[X]$. For a log-normal distribution it is equal to[3]

$$\operatorname{CV}[X] = \sqrt{e^{\sigma^2} - 1}.$$

This estimate is sometimes referred to as the "geometric CV" (GCV),[19][20] due to its use of the geometric variance. Contrary to the arithmetic standard deviation, the arithmetic coefficient of variation is independent of the arithmetic mean.

The parameters $\mu$ and $\sigma$ can be obtained, if the arithmetic mean and the arithmetic variance are known:

$$\mu = \ln\left(\frac{\operatorname{E}[X]^2}{\sqrt{\operatorname{Var}[X] + \operatorname{E}[X]^2}}\right), \qquad \sigma^2 = \ln\left(1 + \frac{\operatorname{Var}[X]}{\operatorname{E}[X]^2}\right).$$

A probability distribution is not uniquely determined by the moments $\operatorname{E}[X^n] = e^{n\mu + \frac{1}{2}n^2\sigma^2}$ for $n \geq 1$. That is, there exist other distributions with the same set of moments.[4] In fact, there is a whole family of distributions with the same moments as the log-normal distribution.[citation needed]

Mode, median, quantiles
Comparison of mean, median and mode of two log-normal distributions with different skewness.

The mode is the point of global maximum of the probability density function. In particular, by solving the equation $(\ln f)'(x) = 0$, we get that:

$$\operatorname{Mode}[X] = e^{\mu - \sigma^2}.$$

Since the log-transformed variable $Y = \ln X$ has a normal distribution, and quantiles are preserved under monotonic transformations, the quantiles of $X$ are

$$x_q = e^{\mu + \sigma z_q} = \mu^*\,(\sigma^*)^{z_q},$$

where $z_q$ is the quantile of the standard normal distribution.

Specifically, the median of a log-normal distribution is equal to its multiplicative mean,[21]

$$\operatorname{Med}[X] = e^{\mu} = \mu^*.$$
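A sketch checking the mode, median, and quantile formulas against SciPy (NumPy/SciPy assumed; parameter values illustrative):

```python
import numpy as np
from scipy.stats import lognorm, norm

mu, sigma = 1.0, 0.6  # illustrative values
dist = lognorm(s=sigma, scale=np.exp(mu))

print(dist.median(), np.exp(mu))                            # median = e^mu
print(dist.ppf(0.95), np.exp(mu + sigma * norm.ppf(0.95)))  # quantile formula
# Mode e^(mu - sigma^2): the density is maximal there
xs = np.linspace(0.01, 10, 100_000)
print(xs[np.argmax(dist.pdf(xs))], np.exp(mu - sigma**2))
```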

Partial expectation

The partial expectation of a random variable $X$ with respect to a threshold $k$ is defined as

$$g(k) = \int_k^{\infty} x f_X(x)\, dx.$$

Alternatively, by using the definition of conditional expectation, it can be written as $g(k) = \operatorname{E}[X \mid X > k]\, P(X > k)$. For a log-normal random variable, the partial expectation is given by:

$$g(k) = e^{\mu + \sigma^2/2}\, \Phi\!\left(\frac{\mu + \sigma^2 - \ln k}{\sigma}\right),$$

where $\Phi$ is the normal cumulative distribution function. The partial expectation formula has applications in insurance and economics; it is used in solving the partial differential equation leading to the Black–Scholes formula.
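A Monte Carlo sketch of the partial expectation formula (NumPy/SciPy assumed; parameter values illustrative):

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(3)
mu, sigma, k = 0.0, 1.0, 2.0  # illustrative values

g_formula = np.exp(mu + sigma**2 / 2) * norm.cdf((mu + sigma**2 - np.log(k)) / sigma)

x = rng.lognormal(mu, sigma, 2_000_000)
g_mc = np.where(x > k, x, 0.0).mean()  # E[X * 1{X > k}]
print(g_formula, g_mc)                 # close
```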

Conditional expectation

The conditional expectation of a log-normal random variable $X$, with respect to a threshold $k$, is its partial expectation divided by the cumulative probability of being in that range:

$$\operatorname{E}[X \mid X > k] = \frac{g(k)}{P(X > k)} = e^{\mu + \sigma^2/2}\,\frac{\Phi\!\left(\frac{\mu + \sigma^2 - \ln k}{\sigma}\right)}{1 - \Phi\!\left(\frac{\ln k - \mu}{\sigma}\right)}.$$

Alternative parameterizations

In addition to the characterization by $(\mu, \sigma)$ or $(\mu^*, \sigma^*)$, there are multiple ways in which the log-normal distribution can be parameterized. ProbOnto, the knowledge base and ontology of probability distributions,[22][23] lists seven such forms:

Overview of parameterizations of the log-normal distributions.
  • LogNormal1(μ,σ) with mean, μ, and standard deviation, σ, both on the log-scale [24]
  • LogNormal2(μ,υ) with mean, μ, and variance, υ, both on the log-scale
  • LogNormal3(m,σ) with median, m, on the natural scale and standard deviation, σ, on the log-scale[24]
  • LogNormal4(m,cv) with median, m, and coefficient of variation, cv, both on the natural scale
  • LogNormal5(μ,τ) with mean, μ, and precision, τ, both on the log-scale[25]
  • LogNormal6(m,σg) with median, m, and geometric standard deviation, σg, both on the natural scale[26]
  • LogNormal7(μN,σN) with mean, μN, and standard deviation, σN, both on the natural scale[27]

Examples for re-parameterization

Consider the situation when one would like to run a model using two different optimal design tools, for example PFIM[28] and PopED.[29] The former supports the LN2 parameterization, the latter LN7. Therefore, a re-parameterization is required, as otherwise the two tools would produce different results.

For the transition $\operatorname{LN2}(\mu, \upsilon) \to \operatorname{LN7}(\mu_N, \sigma_N)$, the following formulas hold: $\mu_N = \exp(\mu + \upsilon/2)$ and $\sigma_N = \exp(\mu + \upsilon/2)\sqrt{\exp(\upsilon) - 1}$.

For the transition $\operatorname{LN7}(\mu_N, \sigma_N) \to \operatorname{LN2}(\mu, \upsilon)$, the following formulas hold: $\mu = \ln\!\left(\mu_N \big/ \sqrt{1 + \sigma_N^2/\mu_N^2}\right)$ and $\upsilon = \ln\!\left(1 + \sigma_N^2/\mu_N^2\right)$.
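A sketch of the two conversions in Python; the helper names `ln2_to_ln7` and `ln7_to_ln2` are illustrative, and the round trip should recover the original parameters:

```python
import numpy as np

def ln2_to_ln7(mu, v):
    """LN2 (mean mu and variance v on the log scale) -> LN7 (natural scale)."""
    mu_n = np.exp(mu + v / 2)
    sigma_n = mu_n * np.sqrt(np.exp(v) - 1)
    return mu_n, sigma_n

def ln7_to_ln2(mu_n, sigma_n):
    """LN7 (natural-scale mean and SD) -> LN2 (log scale)."""
    v = np.log(1 + (sigma_n / mu_n)**2)
    mu = np.log(mu_n) - v / 2
    return mu, v

mu_n, sigma_n = ln2_to_ln7(0.5, 0.25)
print(ln7_to_ln2(mu_n, sigma_n))  # recovers (0.5, 0.25)
```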

All remaining re-parameterisation formulas can be found in the specification document on the project website.[30]

Multiple, reciprocal, power
  • Multiplication by a constant: If $X \sim \operatorname{Lognormal}(\mu, \sigma^2)$ then $aX \sim \operatorname{Lognormal}(\mu + \ln a,\ \sigma^2)$ for $a > 0$.
  • Reciprocal: If $X \sim \operatorname{Lognormal}(\mu, \sigma^2)$ then $\tfrac{1}{X} \sim \operatorname{Lognormal}(-\mu,\ \sigma^2)$.
  • Power: If $X \sim \operatorname{Lognormal}(\mu, \sigma^2)$ then $X^a \sim \operatorname{Lognormal}(a\mu,\ a^2\sigma^2)$ for $a \neq 0$.

Multiplication and division of independent, log-normal random variables

If two independent, log-normal variables $X_1$ and $X_2$ are multiplied [divided], the product [ratio] is again log-normal, with parameters $\mu = \mu_1 + \mu_2$ [$\mu = \mu_1 - \mu_2$] and $\sigma$, where $\sigma^2 = \sigma_1^2 + \sigma_2^2$.

More generally, if $X_j \sim \operatorname{Lognormal}(\mu_j, \sigma_j^2)$ are $n$ independent, log-normally distributed variables, then

$$Y = \prod_{j=1}^{n} X_j \sim \operatorname{Lognormal}\!\left(\sum_{j=1}^{n} \mu_j,\ \sum_{j=1}^{n} \sigma_j^2\right).$$
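A Monte Carlo sketch of the product rule (NumPy assumed; the parameter triples are illustrative):

```python
import numpy as np

rng = np.random.default_rng(4)
params = [(0.2, 0.5), (1.0, 0.3), (-0.4, 0.8)]  # illustrative (mu_j, sigma_j)

y = np.ones(500_000)
for m, s in params:
    y *= rng.lognormal(m, s, 500_000)

# ln Y should be normal with summed parameters
print(np.log(y).mean(), sum(m for m, _ in params))    # ~ sum of mu_j
print(np.log(y).var(), sum(s**2 for _, s in params))  # ~ sum of sigma_j^2
```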

Multiplicative central limit theorem

The geometric or multiplicative mean of $n$ independent, identically distributed, positive random variables $X_i$,

$$\bar{X}_{\text{geo}} = \left(\prod_{i=1}^{n} X_i\right)^{1/n},$$

shows, for $n \to \infty$, approximately a log-normal distribution with parameters $\mu = \operatorname{E}[\ln X_i]$ and $\sigma^2 = \operatorname{Var}[\ln X_i]/n$, assuming $\sigma^2$ is finite.

In fact, the random variables do not have to be identically distributed. It is enough for the distributions of $\ln X_i$ to all have finite variance and satisfy the other conditions of any of the many variants of the central limit theorem.

This is commonly known as Gibrat's law.
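A sketch of this multiplicative CLT at work (NumPy/SciPy assumed; the factor distribution and sizes are illustrative): products of positive i.i.d. factors that are not themselves log-normal become approximately log-normal, i.e., their log-products become approximately normal.

```python
import numpy as np
from scipy.stats import kstest

rng = np.random.default_rng(5)
n, trials = 200, 20_000

# Positive i.i.d. factors that are NOT log-normal themselves (uniform here)
factors = rng.uniform(0.5, 1.5, size=(trials, n))
log_product = np.log(factors).sum(axis=1)

# By the CLT in the log domain, log_product is approximately normal
z = (log_product - log_product.mean()) / log_product.std()
print(kstest(z, "norm"))  # p-value typically not small: consistent with normality
```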

Heavy-tailedness of the log-normal

Whether a log-normal distribution can be considered a true heavy-tailed distribution is still debated. The main reason is that its variance is always finite, unlike that of certain Pareto distributions, for instance. However, a recent study has shown how a log-normal distribution with infinite variance can be constructed using Robinson's non-standard analysis.[31]

Other

A set of data that arises from the log-normal distribution has a symmetric Lorenz curve (see also Lorenz asymmetry coefficient).[32]

The harmonic ($H$), geometric ($G$) and arithmetic ($A$) means of this distribution are related;[33] such relation is given by

$$H = \frac{G^2}{A}.$$

Log-normal distributions are infinitely divisible,[34] but they are not stable distributions, which can be easily drawn from.[35]

Related distributions
  • If $X \sim \mathcal{N}(\mu, \sigma^2)$ is a normal distribution, then $\exp(X) \sim \operatorname{Lognormal}(\mu, \sigma^2)$.
  • If $X \sim \operatorname{Lognormal}(\mu, \sigma^2)$ is distributed log-normally, then $\ln X \sim \mathcal{N}(\mu, \sigma^2)$ is a normal random variable.
  • Let $X_j \sim \operatorname{Lognormal}(\mu_j, \sigma_j^2)$ be independent log-normally distributed variables with possibly varying $\sigma$ and $\mu$ parameters, and $Y = \sum_{j=1}^{n} X_j$. The distribution of $Y$ has no closed-form expression, but can be reasonably approximated by another log-normal distribution at the right tail.[36] Its probability density function at the neighborhood of 0 has been characterized[35] and it does not resemble any log-normal distribution. A commonly used approximation due to L. F. Fenton (but previously stated by R. I. Wilkinson and mathematically justified by Marlow[37]) is obtained by matching the mean and variance of another log-normal distribution:
$$\sigma_Z^2 = \ln\!\left[\frac{\sum_j e^{2\mu_j + \sigma_j^2}\left(e^{\sigma_j^2} - 1\right)}{\left(\sum_j e^{\mu_j + \sigma_j^2/2}\right)^2} + 1\right], \qquad \mu_Z = \ln\!\left[\sum_j e^{\mu_j + \sigma_j^2/2}\right] - \frac{\sigma_Z^2}{2}.$$
In the case that all $X_j$ have the same variance parameter $\sigma_j = \sigma$, these formulas simplify to
$$\sigma_Z^2 = \ln\!\left[\left(e^{\sigma^2} - 1\right)\frac{\sum_j e^{2\mu_j}}{\left(\sum_j e^{\mu_j}\right)^2} + 1\right], \qquad \mu_Z = \ln\!\left[\sum_j e^{\mu_j}\right] + \frac{\sigma^2}{2} - \frac{\sigma_Z^2}{2}.$$

For a more accurate approximation, one can use the Monte Carlo method to estimate the cumulative distribution function, the pdf and the right tail.[38][39] The cdf and pdf of the sum of correlated log-normally distributed random variables can also be approximated by Monte Carlo simulation.[40]
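A sketch of the Fenton–Wilkinson moment-matching approximation against a Monte Carlo estimate of a right-tail probability (NumPy/SciPy assumed; the parameters and threshold are illustrative):

```python
import numpy as np
from scipy.stats import lognorm

rng = np.random.default_rng(6)
mus = np.array([0.0, 0.5, 1.0])
sigmas = np.array([0.4, 0.6, 0.5])  # illustrative parameters

# Fenton-Wilkinson: match mean and variance of the sum with a log-normal
m = np.exp(mus + sigmas**2 / 2).sum()
v = (np.exp(2 * mus + sigmas**2) * (np.exp(sigmas**2) - 1)).sum()
sigma_z2 = np.log(1 + v / m**2)
mu_z = np.log(m) - sigma_z2 / 2

# Compare a right-tail probability with Monte Carlo
x = sum(rng.lognormal(m_, s_, 1_000_000) for m_, s_ in zip(mus, sigmas))
t = 15.0
print((x > t).mean(), lognorm.sf(t, s=np.sqrt(sigma_z2), scale=np.exp(mu_z)))
```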

  • If $X \sim \operatorname{Lognormal}(\mu, \sigma^2)$ then $X + c$ is said to have a three-parameter log-normal distribution with support $x \in (c, +\infty)$.[41] $\operatorname{E}[X + c] = \operatorname{E}[X] + c$, $\operatorname{Var}[X + c] = \operatorname{Var}[X]$.
  • The log-normal distribution is a special case of the semi-bounded Johnson's SU-distribution.[42]
  • If $X \mid Y \sim \operatorname{Rayleigh}(Y)$ with $Y \sim \operatorname{Lognormal}(\mu, \sigma^2)$, then $X \sim \operatorname{Suzuki}(\mu, \sigma)$ (Suzuki distribution).
  • A substitute for the log-normal whose integral can be expressed in terms of more elementary functions[43] can be obtained based on the logistic distribution to get an approximation for the CDF
$$F(x; \mu, \sigma) \approx \left[\left(\frac{e^{\mu}}{x}\right)^{\pi/(\sigma\sqrt{3})} + 1\right]^{-1}.$$
This is a log-logistic distribution.

Statistical inference

Estimation of parameters

Maximum likelihood estimator

For determining the maximum likelihood estimators of the log-normal distribution parameters $\mu$ and $\sigma$, we can use the same procedure as for the normal distribution. Note that

$$f_X(x; \mu, \sigma) = \frac{1}{x}\, f_N(\ln x; \mu, \sigma),$$

where $f_N$ is the density function of the normal distribution $\mathcal{N}(\mu, \sigma^2)$. Therefore, the log-likelihood function is

$$\ell_X(\mu, \sigma \mid x_1, \ldots, x_n) = -\sum_k \ln x_k + \ell_N(\mu, \sigma \mid \ln x_1, \ldots, \ln x_n).$$

Since the first term is constant with regard to $\mu$ and $\sigma$, both logarithmic likelihood functions, $\ell_X$ and $\ell_N$, reach their maximum with the same $\hat\mu$ and $\hat\sigma$. Hence, the maximum likelihood estimators are identical to those for a normal distribution for the observations $\ln x_1, \ln x_2, \ldots, \ln x_n$:

$$\hat\mu = \frac{1}{n}\sum_k \ln x_k, \qquad \hat\sigma^2 = \frac{1}{n}\sum_k \left(\ln x_k - \hat\mu\right)^2.$$

For finite $n$, the estimator for $\mu$ is unbiased, but the one for $\sigma^2$ is biased. As for the normal distribution, an unbiased estimator for $\sigma^2$ can be obtained by replacing the denominator $n$ by $n - 1$ in the equation for $\hat\sigma^2$.

From this, the MLE for the expectancy of $X$ is:[44]

$$\widehat{\operatorname{E}[X]} = e^{\hat\mu + \hat\sigma^2/2}.$$
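A sketch of these estimators on simulated data (NumPy assumed; the true parameters are illustrative):

```python
import numpy as np

rng = np.random.default_rng(7)
mu_true, sigma_true = 1.2, 0.4
x = rng.lognormal(mu_true, sigma_true, 10_000)

log_x = np.log(x)
mu_hat = log_x.mean()                      # MLE of mu
sigma2_hat = ((log_x - mu_hat)**2).mean()  # MLE of sigma^2 (biased; use n-1 for unbiased)

mean_mle = np.exp(mu_hat + sigma2_hat / 2)  # MLE of E[X]
print(mu_hat, np.sqrt(sigma2_hat), mean_mle)
```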

Method of moments

When the individual values $x_1, x_2, \ldots, x_n$ are not available, but the sample's mean $\bar{x}$ and standard deviation $s$ is, then the method of moments can be used. The corresponding parameters are determined by the following formulas, obtained from solving the equations for the expectation $\operatorname{E}[X]$ and variance $\operatorname{Var}[X]$ for $\mu$ and $\sigma$:[45]

$$\mu = \ln\left(\frac{\bar{x}}{\sqrt{1 + s^2/\bar{x}^2}}\right), \qquad \sigma^2 = \ln\left(1 + \frac{s^2}{\bar{x}^2}\right).$$

Other estimators

Other estimators also exist, such as Finney's UMVUE estimator,[46] the "Approximately Minimum Mean Squared Error Estimator", the "Approximately Unbiased Estimator" and "Minimax Estimator",[47] also "A Conditional Mean Squared Error Estimator",[48] and other variations as well.[49][50]

Interval estimates

The most efficient way to obtain interval estimates when analyzing log-normally distributed data is to apply the well-known methods based on the normal distribution to the logarithmically transformed data and then to back-transform the results if appropriate.

Prediction intervals

A basic example is given by prediction intervals: for the normal distribution, the interval $[\mu - \sigma, \mu + \sigma]$ contains approximately two thirds (68%) of the probability (or of a large sample), and $[\mu - 2\sigma, \mu + 2\sigma]$ contains 95%. Therefore, for a log-normal distribution,

  • $[\mu^*/\sigma^*,\ \mu^*\sigma^*] = [e^{\mu - \sigma}, e^{\mu + \sigma}]$ contains 2/3, and
  • $[\mu^*/(\sigma^*)^2,\ \mu^*(\sigma^*)^2] = [e^{\mu - 2\sigma}, e^{\mu + 2\sigma}]$ contains 95%

of the probability. Using estimated parameters, approximately the same percentages of the data should be contained in these intervals; a numerical check is sketched below.
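The numerical check referenced above (NumPy assumed; parameter values illustrative):

```python
import numpy as np

rng = np.random.default_rng(8)
mu, sigma = 0.7, 0.5  # illustrative values
x = rng.lognormal(mu, sigma, 1_000_000)

inside_1 = ((x > np.exp(mu - sigma)) & (x < np.exp(mu + sigma))).mean()
inside_2 = ((x > np.exp(mu - 2 * sigma)) & (x < np.exp(mu + 2 * sigma))).mean()
print(inside_1, inside_2)  # ~ 0.683 and ~ 0.954
```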

Confidence interval for $e^{\mu}$

Using this principle, note that a confidence interval for $\mu$ is $[\hat\mu \pm q\cdot\widehat{\operatorname{se}}]$, where $\widehat{\operatorname{se}} = \hat\sigma/\sqrt{n}$ is the standard error and $q$ is the 97.5% quantile of a $t$ distribution with $n - 1$ degrees of freedom. Back-transformation leads to a confidence interval for $\mu^* = e^{\mu}$ (the median):

$$[\hat\mu^*/\omega,\ \hat\mu^*\cdot\omega] \quad \text{with} \quad \omega = \exp\!\left(q\cdot\widehat{\operatorname{se}}\right).$$

Confidence interval for E(X)

The literature discusses several options for calculating the confidence interval for (the mean of the log-normal distribution). These include bootstrap as well as various other methods.[51][52]

The Cox Method[a] proposes to plug in the estimators

$$\hat\mu = \frac{\sum_i \ln x_i}{n}, \qquad S^2 = \frac{\sum_i \left(\ln x_i - \hat\mu\right)^2}{n - 1},$$

and use them to construct approximate confidence intervals in the following way:

$$CI(\operatorname{E}[X]) = \exp\!\left(\hat\mu + \frac{S^2}{2} \pm z_{1-\alpha/2}\sqrt{\frac{S^2}{n} + \frac{S^4}{2(n-1)}}\right).$$

[Proof]

We know that $\operatorname{E}[X] = e^{\mu + \sigma^2/2}$. Also, $\hat\mu$ is normally distributed with parameters:

$$\hat\mu \sim \mathcal{N}\!\left(\mu, \frac{\sigma^2}{n}\right).$$

$S^2$ has a (scaled) chi-squared distribution, which is approximately normally distributed (via the CLT), with parameters:

$$S^2 \mathrel{\dot\sim} \mathcal{N}\!\left(\sigma^2, \frac{2\sigma^4}{n-1}\right).$$

Hence, $\frac{S^2}{2} \mathrel{\dot\sim} \mathcal{N}\!\left(\frac{\sigma^2}{2}, \frac{\sigma^4}{2(n-1)}\right)$.

Since the sample mean and variance are independent, and the sum of normally distributed variables is also normal, we get that:

$$\hat\mu + \frac{S^2}{2} \mathrel{\dot\sim} \mathcal{N}\!\left(\mu + \frac{\sigma^2}{2},\ \frac{\sigma^2}{n} + \frac{\sigma^4}{2(n-1)}\right).$$

Based on the above, standard confidence intervals for $\mu + \frac{\sigma^2}{2}$ can be constructed (using a pivotal quantity) as:

$$\hat\mu + \frac{S^2}{2} \pm z_{1-\alpha/2}\sqrt{\frac{S^2}{n} + \frac{S^4}{2(n-1)}}.$$

And since confidence intervals are preserved for monotonic transformations, we get that:

$$CI\!\left(\operatorname{E}[X] = e^{\mu + \sigma^2/2}\right) = \exp\!\left(\hat\mu + \frac{S^2}{2} \pm z_{1-\alpha/2}\sqrt{\frac{S^2}{n} + \frac{S^4}{2(n-1)}}\right),$$

as desired.

Olsson (2005) proposed a "modified Cox method" by replacing $z_{1-\alpha/2}$ with $t_{n-1,1-\alpha/2}$, which seemed to provide better coverage results for small sample sizes.[51]: Section 3.4
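A sketch of the Cox interval on simulated data (NumPy/SciPy assumed; the sample parameters are illustrative):

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(9)
x = rng.lognormal(1.0, 0.5, 500)  # illustrative sample

n = len(x)
log_x = np.log(x)
mu_hat = log_x.mean()
s2 = log_x.var(ddof=1)

z = norm.ppf(0.975)
half = z * np.sqrt(s2 / n + s2**2 / (2 * (n - 1)))
lo, hi = np.exp(mu_hat + s2 / 2 - half), np.exp(mu_hat + s2 / 2 + half)
print(lo, hi)  # 95% Cox CI for E[X] = exp(1.0 + 0.5^2/2) ~ 3.08
```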

Confidence interval for comparing two log-normals

Comparing two log-normal distributions can often be of interest, for example, from a treatment and control group (e.g., in an A/B test). We have samples from two independent log-normal distributions with parameters $(\mu_1, \sigma_1^2)$ and $(\mu_2, \sigma_2^2)$, with sample sizes $n_1$ and $n_2$ respectively.

Comparing the medians of the two distributions can easily be done by taking the log of each sample, constructing a straightforward confidence interval on the log scale, and transforming it back to the exponential scale.

Such confidence intervals are often used in epidemiology for calculating the confidence interval for relative risk and odds ratio.[55] The way it is done there is that we have two approximately normal distributions (e.g., $\hat p_1$ and $\hat p_2$, for RR), and we wish to calculate their ratio.[b]

However, the ratio of the expectations (means) of the two samples might also be of interest, while requiring more work to develop. The ratio of their means is:

$$\frac{\operatorname{E}[X_1]}{\operatorname{E}[X_2]} = \frac{e^{\mu_1 + \sigma_1^2/2}}{e^{\mu_2 + \sigma_2^2/2}} = e^{(\mu_1 - \mu_2) + \frac{1}{2}(\sigma_1^2 - \sigma_2^2)}.$$

Plugging the estimators into each of these parameters also yields a log-normal distribution, which means that the Cox Method, discussed above, could similarly be used for this use case:

$$CI\!\left(\frac{\operatorname{E}[X_1]}{\operatorname{E}[X_2]}\right) = \exp\!\left((\hat\mu_1 - \hat\mu_2) + \frac{S_1^2 - S_2^2}{2} \pm z_{1-\alpha/2}\sqrt{\frac{S_1^2}{n_1} + \frac{S_2^2}{n_2} + \frac{S_1^4}{2(n_1 - 1)} + \frac{S_2^4}{2(n_2 - 1)}}\right).$$

[Proof]

To construct a confidence interval for this ratio, we first note that $\hat\mu_1 - \hat\mu_2$ follows a normal distribution, and that both $S_1^2$ and $S_2^2$ have a (scaled) chi-squared distribution, which is approximately normally distributed (via the CLT, with the relevant parameters).

This means that

$$(\hat\mu_1 - \hat\mu_2) + \frac{S_1^2 - S_2^2}{2} \mathrel{\dot\sim} \mathcal{N}\!\left((\mu_1 - \mu_2) + \frac{\sigma_1^2 - \sigma_2^2}{2},\ \frac{\sigma_1^2}{n_1} + \frac{\sigma_2^2}{n_2} + \frac{\sigma_1^4}{2(n_1 - 1)} + \frac{\sigma_2^4}{2(n_2 - 1)}\right).$$

Based on the above, standard confidence intervals can be constructed (using a pivotal quantity) as above, and since confidence intervals are preserved for monotonic transformations, the stated interval follows, as desired.

It is worth noting that naively using the MLE in the ratio of the two expectations to create a ratio estimator will lead to a consistent, yet biased, point estimate, since the estimator of the ratio is itself a log-normal random variable.[c][citation needed]

Extremal principle of entropy to fix the free parameter σ

In applications, $\sigma$ is a parameter to be determined. For growing processes balanced by production and dissipation, the use of an extremal principle of Shannon entropy shows that[56]

$$\sigma = \frac{1}{\sqrt{6}}.$$

This value can then be used to give some scaling relation between the inflexion point and maximum point of the log-normal distribution.[56] This relationship is determined by the base of the natural logarithm, $e = 2.718\ldots$, and exhibits some geometrical similarity to the minimal surface energy principle. These scaling relations are useful for predicting a number of growth processes (epidemic spreading, droplet splashing, population growth, swirling rate of the bathtub vortex, distribution of language characters, velocity profile of turbulences, etc.). For example, the log-normal function with such $\sigma$ fits well with the size of secondarily produced droplets during droplet impact[57] and the spreading of an epidemic disease.[58]

The value $\sigma = 1/\sqrt{6}$ is used to provide a probabilistic solution for the Drake equation.[59]

Occurrence and applications

The log-normal distribution is important in the description of natural phenomena. Many natural growth processes are driven by the accumulation of many small percentage changes which become additive on a log scale. Under appropriate regularity conditions, the distribution of the resulting accumulated changes will be increasingly well approximated by a log-normal, as noted in the section above on "Multiplicative Central Limit Theorem". This is also known as Gibrat's law, after Robert Gibrat (1904–1980) who formulated it for companies.[60] If the rate of accumulation of these small changes does not vary over time, growth becomes independent of size. Even if this assumption is not true, the size distributions at any age of things that grow over time tends to be log-normal.[citation needed] Consequently, reference ranges for measurements in healthy individuals are more accurately estimated by assuming a log-normal distribution than by assuming a symmetric distribution about the mean.[citation needed]

A second justification is based on the observation that fundamental natural laws imply multiplications and divisions of positive variables. Examples are the simple gravitation law connecting masses and distance with the resulting force, or the formula for equilibrium concentrations of chemicals in a solution that connects concentrations of educts and products. Assuming log-normal distributions of the variables involved leads to consistent models in these cases.

Specific examples are given in the following subsections. Reference [61] contains a review and table of log-normal distributions from geology, biology, medicine, food, ecology, and other areas; reference [62] is a review article on log-normal distributions in neuroscience, with an annotated bibliography.

Human behavior
  • The length of comments posted in Internet discussion forums follows a log-normal distribution.[63]
  • Users' dwell time on online articles (jokes, news etc.) follows a log-normal distribution.[64]
  • The length of chess games tends to follow a log-normal distribution.[65]
  • Onset durations of acoustic comparison stimuli that are matched to a standard stimulus follow a log-normal distribution.[18]

Biology and medicine
  • Measures of size of living tissue (length, skin area, weight).[66]
  • Incubation period of diseases.[67]
  • Diameters of banana leaf spots, powdery mildew on barley.[61]
  • For highly communicable epidemics, such as SARS in 2003, if public intervention control policies are involved, the number of hospitalized cases is shown to satisfy the log-normal distribution with no free parameters if an entropy is assumed and the standard deviation is determined by the principle of maximum rate of entropy production.[68]
  • The length of inert appendages (hair, claws, nails, teeth) of biological specimens, in the direction of growth.[citation needed]
  • The normalised RNA-Seq read count for any genomic region can be well approximated by a log-normal distribution.
  • The PacBio sequencing read length follows a log-normal distribution.[69]
  • Certain physiological measurements, such as blood pressure of adult humans (after separation on male/female subpopulations).[70]
  • Several pharmacokinetic variables, such as Cmax, elimination half-life and the elimination rate constant.[71]
  • In neuroscience, the distribution of firing rates across a population of neurons is often approximately log-normal. This has been first observed in the cortex and striatum [72] and later in hippocampus and entorhinal cortex,[73] and elsewhere in the brain.[62][74] Also, intrinsic gain distributions and synaptic weight distributions appear to be log-normal[75] as well.
  • Neuron densities in the cerebral cortex, due to the noisy cell division process during neurodevelopment.[76]
  • In operating-room management, the distribution of surgery durations.
  • In the size of avalanches of fractures in the cytoskeleton of living cells, showing log-normal distributions, with significantly higher size in cancer cells than healthy ones.[77]

Chemistry

Fitted cumulative log-normal distribution to annually maximum 1-day rainfalls, see distribution fitting.

Physical sciences
  • In hydrology, the log-normal distribution is used to analyze extreme values of such variables as monthly and annual maximum values of daily rainfall and river discharge volumes.[79]
  • In physical oceanography, the sizes of icebergs in the midwinter Southern Atlantic Ocean were found to follow a log-normal size distribution. The iceberg sizes, measured visually and by radar from the F.S. Polarstern in 1986, were thought to be controlled by wave action in heavy seas causing them to flex and break.[81]
  • In atmospheric science, log-normal distributions (or distributions made by combining multiple log-normal functions) have been used to characterize both measurements and models of the sizes and concentrations of many different types of particles, from volcanic ash, to clouds and rain, to airborne microbes.[82][83][84][85] The log-normal distribution is strictly empirical, so more physically-based distributions have been adopted to better understand processes controlling size distributions of particles such as volcanic ash.[86]

Social sciences and demographics
  • In economics, there is evidence that the income of 97–99% of the population is distributed log-normally.[87] (The distribution of higher-income individuals follows a Pareto distribution).[88]
  • If an income distribution follows a log-normal distribution with standard deviation $\sigma$, then the Gini coefficient, commonly used to evaluate income inequality, can be computed as $G = \operatorname{erf}\!\left(\frac{\sigma}{2}\right)$, where $\operatorname{erf}$ is the error function, since $G = 2\Phi\!\left(\frac{\sigma}{\sqrt{2}}\right) - 1$, where $\Phi$ is the cumulative distribution function of a standard normal distribution. A numerical check of this identity is sketched after this list.
  • In finance, in particular the Black–Scholes model, changes in the logarithm of exchange rates, price indices, and stock market indices are assumed normal[89] (these variables behave like compound interest, not like simple interest, and so are multiplicative). However, some mathematicians such as Benoit Mandelbrot have argued [90] that log-Lévy distributions, which possess heavy tails, would be a more appropriate model, in particular for the analysis for stock market crashes. Indeed, stock price distributions typically exhibit a fat tail.[91] The fat tailed distribution of changes during stock market crashes invalidate the assumptions of the central limit theorem.
  • In scientometrics, the number of citations to journal articles and patents follows a discrete log-normal distribution.[92][93]
  • City sizes (population) satisfy Gibrat's Law.[94] The growth process of city sizes is proportionate and invariant with respect to size. From the central limit theorem therefore, the log of city size is normally distributed.
  • The number of sexual partners appears to be best described by a log-normal distribution.[95]
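The check referenced above (NumPy/SciPy assumed; $\sigma$ is illustrative), comparing $\operatorname{erf}(\sigma/2)$ with a Monte Carlo Gini estimate $G = \operatorname{E}|X_1 - X_2|/(2\operatorname{E}[X])$ over independent pairs:

```python
import numpy as np
from scipy.special import erf

rng = np.random.default_rng(10)
sigma = 0.8  # illustrative inequality parameter
x = rng.lognormal(0.0, sigma, 200_000)

x1, x2 = x[:100_000], x[100_000:]
gini_mc = np.abs(x1 - x2).mean() / (2 * x.mean())
print(gini_mc, erf(sigma / 2))  # close
```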

Technology
  • In reliability analysis, the log-normal distribution is often used to model times to repair a maintainable system.[96]
  • In wireless communication, "the local-mean power expressed in logarithmic values, such as dB or neper, has a normal (i.e., Gaussian) distribution."[97] Also, the random obstruction of radio signals due to large buildings and hills, called shadowing, is often modeled as a log-normal distribution.
  • Particle size distributions produced by comminution with random impacts, such as in ball milling.[98]
  • The file size distribution of publicly available audio and video data files (MIME types) follows a log-normal distribution over five orders of magnitude.[99]
  • File sizes of 140 million files on personal computers running the Windows OS, collected in 1999.[100][63]
  • Sizes of text-based emails (1990s) and multimedia-based emails (2000s).[63]
  • In computer networks and Internet traffic analysis, log-normal is shown as a good statistical model to represent the amount of traffic per unit time. This has been shown by applying a robust statistical approach on a large groups of real Internet traces. In this context, the log-normal distribution has shown a good performance in two main use cases: (1) predicting the proportion of time traffic will exceed a given level (for service level agreement or link capacity estimation) i.e. link dimensioning based on bandwidth provisioning and (2) predicting 95th percentile pricing.[101]
  • In physical testing, when the test produces a time-to-failure of an item under specified conditions, the data is often best analyzed using a log-normal distribution.[102][103]

See also

Notes

References

Further reading
from Grokipedia
In probability theory, the log-normal distribution is a continuous probability distribution defined for positive real numbers, where the natural logarithm of the random variable follows a normal distribution.[1] It is parameterized by two values: μ (the mean of the underlying normal distribution) and σ (its standard deviation, with σ > 0), such that if Y ~ N(μ, σ²), then X = e^Y has a log-normal distribution.[2] The probability density function is given by
$$f(x; \mu, \sigma) = \frac{1}{x \sigma \sqrt{2\pi}} \exp\left( -\frac{(\ln x - \mu)^2}{2\sigma^2} \right)$$
for $x > 0$, and zero otherwise.[1] Key statistical properties distinguish the log-normal distribution from the normal distribution, as it is inherently right-skewed and cannot take negative values, making it suitable for modeling multiplicative processes or phenomena bounded below by zero.[3] The expected value (mean) is $E[X] = e^{\mu + \sigma^2/2}$, while the variance is $\operatorname{Var}(X) = (e^{\sigma^2} - 1)\,e^{2\mu + \sigma^2}$, both of which depend exponentially on the parameters and highlight the distribution's sensitivity to $\sigma$ for larger spreads.[1] Unlike the normal distribution, it lacks a closed-form moment-generating function but has a cumulative distribution function expressed via the standard normal CDF: $F(x) = \Phi((\ln x - \mu)/\sigma)$, where $\Phi$ is the cumulative distribution function of the standard normal.[2] These properties arise from the exponential transformation, which stretches the positive tail and compresses values near zero.

The log-normal distribution was first formally described in 1879 by Francis Galton and Donald McAlister, building on earlier observations of skewed data patterns dating back to the 19th century.[4] It has since become a foundational model in various fields due to its ability to capture real-world data exhibiting multiplicative effects, such as growth rates or error accumulation.[5] In finance, it underpins the modeling of stock prices and asset returns under assumptions like geometric Brownian motion, where returns are normally distributed but prices are log-normally distributed.[3] In reliability engineering, it describes failure times for systems subject to fatigue, corrosion, or degradation, such as cycles-to-failure in materials or repair durations in maintenance.[6] Biological applications include modeling organism sizes, population growth, or species abundance, while in environmental science, it fits distributions like particle sizes or pollutant concentrations (e.g., radon levels in homes).[3] These uses leverage its flexibility for positive, skewed data, often validated through logarithmic transformation to normality.

Definitions

Probability density function

The log-normal distribution is obtained by applying an exponential transformation to a normally distributed random variable. Specifically, if $ Y \sim \mathcal{N}(\mu, \sigma^2) $, then the random variable $ X = \exp(Y) $ follows a log-normal distribution, denoted $ X \sim \mathrm{LN}(\mu, \sigma^2) $.[1] The probability density function (PDF) of $ X $ is derived using the change-of-variable technique from the PDF of $ Y $. Let $ f_Y(y) = \frac{1}{\sigma \sqrt{2\pi}} \exp\left( -\frac{(y - \mu)^2}{2\sigma^2} \right) $ be the PDF of $ Y $. Substituting $ y = \ln x $ and accounting for the Jacobian $ \left| \frac{dy}{dx} \right| = \frac{1}{x} $, the PDF of $ X $ becomes
$$f_X(x) = \frac{1}{x \sigma \sqrt{2\pi}} \exp\left( -\frac{(\ln x - \mu)^2}{2\sigma^2} \right)$$
for $ x > 0 $, and $ f_X(x) = 0 $ otherwise.[7] In this parameterization, $ \mu \in \mathbb{R} $ represents the location parameter, corresponding to the mean of the underlying normal distribution $ \ln X $, while $ \sigma > 0 $ is the scale parameter, representing the standard deviation of $ \ln X $. This form was formalized in the seminal treatment of the distribution.[1] The PDF has support on the positive real line $ (0, \infty) $ and is positively skewed, with the skewness becoming more pronounced as $ \sigma $ increases, leading to a longer right tail. The mode, which maximizes the PDF, occurs at $ x = \exp(\mu - \sigma^2) $.[6][8] Graphically, the shape of the PDF varies with the parameters. For a fixed $ \sigma $, increasing $ \mu $ shifts the distribution rightward, moving the mode and peak higher along the x-axis without altering the spread. Conversely, for fixed $ \mu $, larger values of $ \sigma $ result in a lower peak, greater dispersion, and increased asymmetry, with the mode shifting leftward relative to the mean.[1]

Cumulative distribution function

The cumulative distribution function (CDF) of a log-normal random variable $X$ with parameters $\mu \in \mathbb{R}$ and $\sigma > 0$ is

$$F(x) = \begin{cases} 0 & \text{if } x \leq 0, \\ \Phi\!\left(\dfrac{\ln x - \mu}{\sigma}\right) & \text{if } x > 0, \end{cases}$$

where $\Phi$ denotes the CDF of the standard normal distribution.[6] This form arises because if $X$ is log-normal, then $Y = \ln X$ follows a normal distribution with mean $\mu$ and standard deviation $\sigma$, so $F(x) = P(X \leq x) = P(Y \leq \ln x) = \Phi\left(\frac{\ln x - \mu}{\sigma}\right)$ for $x > 0$.[6]

The log-normal CDF lacks a closed-form expression independent of special functions and is typically evaluated numerically using algorithms for the standard normal CDF.[6] The standard normal CDF $\Phi(z)$ relates to the error function $\operatorname{erf}(z) = \frac{2}{\sqrt{\pi}}\int_0^z e^{-t^2}\,dt$ via

$$\Phi(z) = \frac{1}{2}\left[1 + \operatorname{erf}\!\left(\frac{z}{\sqrt{2}}\right)\right],$$

yielding

$$F(x) = \frac{1}{2}\left[1 + \operatorname{erf}\!\left(\frac{\ln x - \mu}{\sigma\sqrt{2}}\right)\right]$$

for $x > 0$.[9] Numerical computation often employs series expansions, continued fractions, or asymptotic approximations for $\operatorname{erf}$ or $\Phi$, especially for extreme values of the argument.[6]

Due to the heavy right tail of the log-normal distribution, the CDF $F(x)$ approaches 1 slowly as $x \to \infty$, reflecting the positive skewness and potential for large outliers.[3] This tail behavior makes the distribution suitable for modeling phenomena like stock prices or particle sizes, where extreme values occur infrequently but impact cumulative probabilities significantly.[3] For illustration, consider the standard log-normal case with $\mu = 0$ and $\sigma = 1$. Here, $F(1) = \Phi(0) = 0.5$, corresponding to the median at $x = e^{\mu} = 1$. At $x = e \approx 2.718$, $F(e) = \Phi(1) \approx 0.8413$. For a larger value, $x = 10$, $\ln 10 \approx 2.3026$, so $F(10) = \Phi(2.3026) \approx 0.9893$, showing the gradual approach to 1.[6][10]

Parameterization

The log-normal distribution is commonly parameterized by two parameters: $\mu$, the mean of the natural logarithm of the random variable, and $\sigma > 0$, the standard deviation of the natural logarithm. These parameters arise naturally because if $X$ follows a log-normal distribution, then $\ln X$ follows a normal distribution with mean $\mu$ and standard deviation $\sigma$.[11][5]

An alternative parameterization expresses the distribution in terms of the geometric mean $G = e^{\mu}$ and the geometric standard deviation $S = e^{\sigma}$. The geometric mean $G$ represents the median of the distribution and serves as a measure of central tendency for multiplicative processes, while $S$ quantifies the spread on a multiplicative scale, where values greater than 1 indicate variability.[5][12] Another common reparameterization uses the arithmetic mean $m = e^{\mu + \sigma^2/2}$ and the variance $v = (e^{\sigma^2} - 1)\,e^{2\mu + \sigma^2}$. Here, $m$ is the expected value of $X$, which exceeds the geometric mean due to the skewness, and $v$ captures the overall dispersion in the original scale. These moments provide direct links to sample statistics for data fitting.[5][12]

Conversions between these parameter sets are straightforward. For instance, starting from the arithmetic mean $m$ and variance $v$, first compute $\sigma^2 = \ln\left(1 + \frac{v}{m^2}\right)$, then $\mu = \ln m - \frac{\sigma^2}{2}$. Conversely, from $\mu$ and $\sigma$, compute $m = e^{\mu + \sigma^2/2}$ and $v = m^2(e^{\sigma^2} - 1)$. These relations derive directly from the moment expressions and facilitate switching between scales.[5]

The standard $\mu$-$\sigma$ parameterization offers mathematical convenience, as operations on $\ln X$ reduce to normal distribution properties, simplifying derivations in theoretical work. In contrast, the geometric mean and standard deviation parameterization enhances interpretability in applications involving multiplicative growth, such as financial modeling of asset returns or biological sizes, where ratios and compounded effects are intuitive. The arithmetic mean and variance form, meanwhile, aligns with conventional summary statistics but can obscure the underlying log-transform nature, potentially complicating analysis of skewed data.[11][5][12]

Characterization

Moments and characteristic function

The moments of a log-normal random variable $X$, defined such that $\ln X \sim \mathcal{N}(\mu, \sigma^2)$, are derived by leveraging the moment-generating properties of the underlying normal distribution. Let $Y = \ln X$, so $Y \sim \mathcal{N}(\mu, \sigma^2)$. The $k$-th raw moment is then

$$E[X^k] = E[e^{kY}] = \exp\left(k\mu + \frac{k^2\sigma^2}{2}\right),$$

which follows directly from evaluating the moment-generating function of $Y$ at point $k$.[1] This formula holds for any real $k$, though it is typically applied for positive integers in moment analysis. In particular, the mean is $E[X] = \exp(\mu + \sigma^2/2)$.[13] The variance, as a central moment, is obtained using the second raw moment:

$$\operatorname{Var}(X) = E[X^2] - (E[X])^2 = \exp(2\mu + 2\sigma^2) - \exp(2\mu + \sigma^2) = \exp(2\mu + \sigma^2)\left(\exp(\sigma^2) - 1\right).$$

Higher-order central moments can be computed similarly from the raw moments, though they grow rapidly due to the heavy-tailed nature of the distribution.[1] Measures of asymmetry and tail heaviness are captured by the skewness and kurtosis, which depend only on $\sigma$ and not on $\mu$. The skewness coefficient is

$$\gamma_1 = \frac{E[(X - E[X])^3]}{(\operatorname{Var}(X))^{3/2}} = \left(e^{\sigma^2} + 2\right)\sqrt{e^{\sigma^2} - 1},$$

indicating positive skewness for $\sigma > 0$, with the distribution becoming increasingly right-skewed as $\sigma$ increases. The kurtosis is

$$\gamma_2 = \frac{E[(X - E[X])^4]}{(\operatorname{Var}(X))^2} = e^{4\sigma^2} + 2e^{3\sigma^2} + 3e^{2\sigma^2} - 3,$$

which exceeds 3 for $\sigma > 0$, reflecting leptokurtosis and heavier tails compared to the normal distribution. These expressions are derived from the first four raw moments using standard formulas for standardized moments.[6][13]

The moment-generating function of $X$, defined as $M_X(t) = E[e^{tX}]$, does not exist in closed form and is infinite for all $t > 0$, owing to the rapid growth of the tails. However, the raw moments $E[X^k]$ are accessible via the moment-generating function of the underlying normal variable $Y$, as noted earlier.[1] The characteristic function $\phi_X(t) = E[e^{itX}]$ also lacks a simple closed-form expression. It can be represented as the expectation

$$\phi_X(t) = E\left[e^{ite^{Y}}\right] = \int_{-\infty}^{\infty} e^{ite^{y}} \cdot \frac{1}{\sqrt{2\pi}\,\sigma} \exp\left(-\frac{(y - \mu)^2}{2\sigma^2}\right) dy,$$

which requires numerical evaluation or approximation for computation. Series expansions, such as those using Hermite functions, provide rapidly convergent representations for practical use.[1][14]

The log-normal distribution is positively skewed when the shape parameter $\sigma > 0$, leading to the characteristic inequality that the mean exceeds the median, which in turn exceeds the mode: $\mathbb{E}[X] > \exp(\mu) > \exp(\mu - \sigma^2)$.[15] This ordering highlights the distribution's asymmetry, with longer tails on the right, and is a direct consequence of the exponential transformation of the underlying normal distribution.[6] The mode, defined as the value that maximizes the probability density function, occurs at $x = \exp(\mu - \sigma^2)$.[15] To derive this, one takes the derivative of the density $f(x) = \frac{1}{x\sigma\sqrt{2\pi}}\exp\left(-\frac{(\ln x - \mu)^2}{2\sigma^2}\right)$ with respect to $x$ and sets it to zero, yielding the maximizer after simplification.[1] The median, by contrast, is $\exp(\mu)$, which corresponds to the 50th percentile and remains unchanged under the logarithmic transformation because the median of $\ln X$ is $\mu$.[15] The general $p$-th quantile (or $100p$-th percentile) of the log-normal distribution is provided by the inverse cumulative distribution function:

$$x_p = \exp\left(\mu + \sigma\,\Phi^{-1}(p)\right),$$

where $\Phi^{-1}$ denotes the quantile function of the standard normal distribution.[15] This formula arises from solving $F(x_p) = p$, where $F(x) = \Phi\left(\frac{\ln x - \mu}{\sigma}\right)$, confirming the close relationship to the normal quantiles.[1]

In applications such as risk analysis and actuarial science, partial expectations like the conditional expectation $\mathbb{E}[X \mid X > q]$ quantify tail risks beyond a threshold $q > 0$. For the log-normal distribution, this is given by

$$\mathbb{E}[X \mid X > q] = \frac{\exp(\mu + \sigma^2/2)\left[1 - \Phi\left(\frac{\ln q - \mu - \sigma^2}{\sigma}\right)\right]}{1 - \Phi\left(\frac{\ln q - \mu}{\sigma}\right)},$$

which leverages the mean of the distribution and the normal cumulative distribution function $\Phi$.[16] This measure is particularly useful for modeling exceedances in financial losses or insurance claims, where the heavy right tail amplifies potential impacts.[16]
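A Monte Carlo sketch of this conditional tail expectation (NumPy/SciPy assumed; the threshold and parameters are illustrative):

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(11)
mu, sigma, q = 0.0, 1.0, 3.0  # illustrative values

num = np.exp(mu + sigma**2 / 2) * (1 - norm.cdf((np.log(q) - mu - sigma**2) / sigma))
den = 1 - norm.cdf((np.log(q) - mu) / sigma)
cte_formula = num / den

x = rng.lognormal(mu, sigma, 5_000_000)
print(cte_formula, x[x > q].mean())  # close
```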

Properties

Domain probabilities and transformations

The log-normal distribution is defined on the positive real line, with support $x > 0$, and probabilities over intervals within this domain are computed using the cumulative distribution function (CDF). For a log-normal random variable $X \sim \operatorname{LN}(\mu, \sigma^2)$, the probability that $X$ falls between two positive values $a > 0$ and $b > a$ is given by $P(a < X < b) = F(b) - F(a)$, where the CDF is $F(x) = \Phi\left(\frac{\ln x - \mu}{\sigma}\right)$ and $\Phi$ denotes the standard normal CDF.[6][17] This difference leverages the monotonicity of the CDF to quantify the likelihood of $X$ lying within specified bounds, which is particularly useful for modeling bounded positive outcomes such as particle sizes or income levels.

The survival function, which gives the probability of exceeding a threshold $x > 0$, is $P(X > x) = 1 - F(x) = 1 - \Phi\left(\frac{\ln x - \mu}{\sigma}\right)$.[6][17] Equivalently, this can be expressed using the standard normal survival function as $\bar{\Phi}\left(\frac{\ln x - \mu}{\sigma}\right)$, where $\bar{\Phi}(z) = 1 - \Phi(z)$. This tail probability is essential for assessing rare events in the right tail of the distribution, given its positive skewness.

A defining property of the log-normal distribution is its closure under certain transformations, stemming from the normality of $\ln X$. Specifically, $\ln X \sim \mathcal{N}(\mu, \sigma^2)$, so the natural logarithm transforms the log-normal variable to a normal one, facilitating easier computation of moments or simulations.[6][17] For powers, if $r > 0$, then $X^r \sim \operatorname{LN}(r\mu, r^2\sigma^2)$, preserving the log-normal family with scaled parameters; this follows because $\ln(X^r) = r\ln X \sim \mathcal{N}(r\mu, r^2\sigma^2)$.[17] The reciprocal transformation yields $1/X \sim \operatorname{LN}(-\mu, \sigma^2)$, as $\ln(1/X) = -\ln X \sim \mathcal{N}(-\mu, \sigma^2)$, which is useful for modeling inverse processes like failure rates.[6][17]

In reliability engineering, the log-normal distribution models failure times of components, where the survival function computes the probability of exceeding a design lifetime threshold. For instance, if failure times follow $\operatorname{LN}(\mu, \sigma^2)$, then $P(T > t_0) = 1 - \Phi\left(\frac{\ln t_0 - \mu}{\sigma}\right)$ estimates the reliability beyond $t_0$, aiding in setting safety margins for systems like mechanical parts under fatigue.[6]

Arithmetic and geometric moments

The arithmetic mean of a log-normal random variable $X$ with parameters $\mu$ and $\sigma^2$, denoted $\mathbb{E}[X]$, is $\exp\left(\mu + \frac{\sigma^2}{2}\right)$, while the geometric mean is $\exp(\mu)$.[18][19] These differ due to the right-skewed nature of the distribution, where the arithmetic mean exceeds the geometric mean by the factor $\exp(\sigma^2/2)$, reflecting the influence of the tail on the expectation.[18]

For skewed data modeled by the log-normal distribution, the arithmetic mean introduces upward bias compared to the geometric mean, as captured by the inequality $\mathbb{E}[X] \geq \exp(\mathbb{E}[\ln X])$, with equality holding only when $\sigma = 0$.[18] This follows from Jensen's inequality applied to the convex exponential function and underscores why the geometric mean provides a more stable central tendency measure for multiplicative processes underlying log-normality.[19]

Geometric moments arise naturally in the context of products of independent log-normal variables $X_i \sim \mathrm{LN}(\mu_i, \sigma_i^2)$, where $\mathbb{E}\left[\prod_{i=1}^n X_i\right] = \exp\left(\sum_{i=1}^n \mu_i + \frac{1}{2}\sum_{i=1}^n \sigma_i^2\right)$, leveraging the additive property of logarithms to preserve the log-normal form for the product.[19] This multiplicative structure highlights the suitability of geometric moments for aggregating variables in scenarios involving compounded growth or successive proportions.[20]

In applications, the geometric mean is preferred over the arithmetic mean for averaging rates of return in finance, where asset prices follow log-normal dynamics, ensuring accurate representation of compounded performance without overestimation from volatility.[20] Similarly, in natural sciences such as aerosol physics, the geometric mean characterizes particle sizes under log-normal distributions, providing a robust measure for skewed size spectra in processes like coagulation or sedimentation.[21]

Heavy tails and limit theorems

The log-normal distribution is characterized by a heavy right tail, meaning its survival function decays more slowly than exponentially. For a log-normal random variable $X$ with parameters $\mu \in \mathbb{R}$ and $\sigma > 0$, the tail probability satisfies

$$P(X > x) \sim \frac{\sigma}{\ln x - \mu}\,\phi\!\left(\frac{\ln x - \mu}{\sigma}\right)$$

as $x \to \infty$, where $\phi(z) = (2\pi)^{-1/2}\exp(-z^2/2)$ is the standard normal density function; this follows from the Mills-ratio asymptotic $1 - \Phi(z) \sim \phi(z)/z$ applied to $z = (\ln x - \mu)/\sigma$. This asymptotic form arises from the tail behavior of the underlying normal distribution for $\ln X$, combined with the transformation $X = e^{\ln X}$, and places the log-normal in the class of subexponential distributions. Unlike distributions with exponential tails (e.g., the gamma or exponential), this slower decay implies a higher probability of extreme values, which is relevant in modeling phenomena like stock returns or particle sizes where outliers are common.[22]

A key reason for the prevalence of the log-normal distribution is its emergence in the multiplicative central limit theorem. Consider a sequence of independent and identically distributed positive random variables $Z_i > 0$ ($i = 1, 2, \dots, n$) such that $E[\ln Z_i] = \nu$ and $\operatorname{Var}(\ln Z_i) = \tau^2 < \infty$. The logarithm of their product, $\ln\left(\prod_{i=1}^n Z_i\right) = \sum_{i=1}^n \ln Z_i$, is a sum of i.i.d. random variables with finite mean and variance. By the classical central limit theorem, after appropriate centering and scaling,

$$\frac{1}{\sqrt{n}}\left(\sum_{i=1}^n \ln Z_i - n\nu\right) \xrightarrow{d} \mathcal{N}(0, \tau^2)$$

as $n \to \infty$, implying that $\prod_{i=1}^n Z_i$ is, for large $n$, approximately log-normal with parameters $\mu = n\nu$ and $\sigma = \sqrt{n}\,\tau$. This theorem explains why log-normal distributions often approximate outcomes of multiplicative processes, such as growth models in biology or economics, where many small independent factors accumulate multiplicatively.

In terms of tail heaviness, the log-normal occupies an intermediate position compared to other heavy-tailed families. Its tails are heavier than those of light-tailed distributions like the normal or exponential but lighter than power-law tails in the Pareto distribution, where $P(X > x) \sim c\,x^{-\alpha}$ for some $\alpha > 0$, or in $\alpha$-stable distributions with index $\alpha < 2$. The Pareto and stable distributions exhibit polynomial decay, leading to infinite moments beyond order $\alpha$, whereas the log-normal retains finite moments of all orders due to the Gaussian nature of $\ln X$. This distinction is crucial: while the log-normal captures moderate extremes without diverging moments, generalizations like the generalized log-normal or certain infinite-variance cases may yield infinite higher moments, amplifying tail risks in applications such as risk assessment.[23]

Transformations and combinations

The log-normal distribution exhibits closure under certain multiplicative transformations, making it particularly suitable for modeling phenomena involving products or ratios of positive random variables. If $X_1 \sim \mathrm{LN}(\mu_1, \sigma_1^2)$ and $X_2 \sim \mathrm{LN}(\mu_2, \sigma_2^2)$ are independent, their product $X_1 X_2$ follows a log-normal distribution with parameters $\mu_1 + \mu_2$ and $\sigma_1^2 + \sigma_2^2$. This property extends to the product of any finite number of independent log-normal variables, where the resulting parameters are the sums of the individual means and variances of the underlying normals. Similarly, the quotient $X_1/X_2$ is log-normally distributed with parameters $\mu_1 - \mu_2$ and $\sigma_1^2 + \sigma_2^2$, as the logarithm of the ratio corresponds to the difference of independent normals.

Raising a log-normal variable to a power also preserves the family. For $X \sim \mathrm{LN}(\mu, \sigma^2)$ and constant $r \neq 0$, the transformed variable $X^r$ follows $\mathrm{LN}(r\mu, r^2\sigma^2)$. This follows directly from the exponential form, since $\ln(X^r) = r\ln X \sim \mathrm{N}(r\mu, r^2\sigma^2)$. More generally, transformations of the form $aX^r$ (with $a > 0$) yield a log-normal with adjusted location parameter $\ln a + r\mu$.

In contrast, the sum of independent log-normal variables does not admit a closed-form distribution in general. While the sum $S = X_1 + X_2 + \cdots + X_n$ of independent log-normals lacks an exact expression, it is often approximated by another log-normal distribution via moment-matching methods, such as the Fenton-Wilkinson approximation, which equates the first two moments of the sum to those of a fitting log-normal. Mixture models or numerical methods can also provide further approximations for the distribution of $S$.

These transformations find application in error propagation for multiplicative models, common in engineering and physics, where uncertainties in measurements multiply rather than add. For instance, in propagating relative errors through products of instrument readings, the resulting uncertainty distribution is log-normal, facilitating variance calculations via the summed variances of the logs.

Multivariate extensions

The multivariate log-normal distribution extends the univariate log-normal to a vector of random variables that are positive and jointly distributed with log-normal marginals and potentially correlated components. Specifically, a $p$-dimensional random vector $\mathbf{X} = (X_1, \dots, X_p)^\top$ follows a multivariate log-normal distribution, denoted $\mathbf{X} \sim \mathrm{LN}_p(\boldsymbol{\mu}, \boldsymbol{\Sigma})$, if $\mathbf{Y} = \log\mathbf{X} = (\log X_1, \dots, \log X_p)^\top$ follows a multivariate normal distribution $\mathbf{Y} \sim \mathrm{MVN}_p(\boldsymbol{\mu}, \boldsymbol{\Sigma})$, where $\boldsymbol{\mu} \in \mathbb{R}^p$ is the mean vector and $\boldsymbol{\Sigma}$ is the $p \times p$ positive definite covariance matrix.[24][25] The joint probability density function of $\mathbf{X}$ is

$$f(\mathbf{x}) = (2\pi)^{-p/2}\,|\boldsymbol{\Sigma}|^{-1/2}\left(\prod_{i=1}^p x_i^{-1}\right)\exp\left(-\frac{1}{2}(\log\mathbf{x} - \boldsymbol{\mu})^\top \boldsymbol{\Sigma}^{-1} (\log\mathbf{x} - \boldsymbol{\mu})\right),$$

for $x_i > 0$ and $\mathbf{x} = (x_1, \dots, x_p)^\top$, with the understanding that this expression, while explicit, does not simplify to a product of marginal densities due to the dependence induced by $\boldsymbol{\Sigma}$.[24] Each marginal $X_i$ follows a univariate log-normal distribution $\mathrm{LN}(\mu_i, \sigma_i^2)$, where $\sigma_i^2 = \Sigma_{ii}$.[24] The conditional distribution of any subset of components given the others is also multivariate log-normal, as it inherits the conditional normality of the underlying $\mathbf{Y}$.[26] The covariance structure reflects the exponential transformation: for $i \neq j$,

$$\mathrm{Cov}(X_i, X_j) = \exp\left(\mu_i + \mu_j + \tfrac{1}{2}(\sigma_i^2 + \sigma_j^2)\right)\left(\exp(\Sigma_{ij}) - 1\right),$$

which captures the positive dependence possible under this model, with $\mathrm{Cov}(X_i, X_i) = \mathrm{Var}(X_i) = \exp(2\mu_i + \sigma_i^2)(\exp(\sigma_i^2) - 1)$.[24][25] This distribution is particularly useful in modeling dependent positive variables, such as in financial returns or environmental measurements, where the dependence structure aligns with a Gaussian copula derived from the normal logs.[27]

Statistical inference

Parameter estimation

The maximum likelihood estimator (MLE) for the parameters \mu and \sigma^2 of a log-normal distribution, based on a sample x_1, \dots, x_n > 0, is derived from the log-likelihood function

\ln L(\mu, \sigma^2) = -\frac{n}{2} \ln(2\pi\sigma^2) - \sum_{i=1}^n \ln x_i - \frac{1}{2\sigma^2} \sum_{i=1}^n (\ln x_i - \mu)^2,

which yields the closed-form expressions \hat{\mu} = \frac{1}{n} \sum_{i=1}^n \ln x_i (the arithmetic mean of the logged observations) and \hat{\sigma}^2 = \frac{1}{n} \sum_{i=1}^n (\ln x_i - \hat{\mu})^2 (the sample variance of the logged observations).[28] These estimators exploit the fact that if X \sim \mathrm{LogNormal}(\mu, \sigma^2), then \ln X \sim \mathrm{Normal}(\mu, \sigma^2), reducing the problem to standard normal MLE.[28]

The method of moments (MOM) estimator matches the population mean E[X] = e^{\mu + \sigma^2/2} and variance \mathrm{Var}(X) = e^{2\mu + \sigma^2}(e^{\sigma^2} - 1) to the sample mean \bar{x} and sample variance s^2, respectively. Solving the resulting equations gives \hat{\mu} = \ln \bar{x} - \frac{1}{2} \ln(1 + s^2/\bar{x}^2) and \hat{\sigma}^2 = \ln(1 + s^2/\bar{x}^2). MOM is computationally simpler than MLE but generally less statistically efficient, particularly for small samples or large \sigma^2.

Other approaches include minimum chi-square estimation, which minimizes the Pearson chi-square statistic between observed and expected frequencies of binned data, offering robustness to outliers compared with MLE in some grouped-data settings.[29] Bayesian estimation employs conjugate priors on the log scale, such as the normal-inverse-gamma prior for (\mu, \sigma^2), yielding a normal-inverse-gamma posterior and credible intervals via the marginal posteriors.[30]

Comparisons of bias and efficiency show that the MLE \hat{\mu} is unbiased, while \hat{\sigma}^2 is biased downward (its bias is -\sigma^2/n); bias-corrected variants improve finite-sample performance. MOM estimators exhibit higher bias and mean squared error than MLE across a range of sample sizes and parameter values, though MOM remains useful for quick approximations because of its explicit formulas. For censored or truncated data, such as the type I right-censored observations common in reliability studies, the MLE is adapted by adding survival terms to the likelihood (integrating the density from the censoring point to infinity), which typically requires numerical optimization because closed forms are unavailable.[31] MOM can be adjusted using conditional moments but loses efficiency; Bayesian methods handle censoring naturally through the likelihood while incorporating prior information.[31]
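A minimal sketch (assuming NumPy; the true parameters and sample size are illustrative) of the two closed-form estimators: the MLE applies normal-sample formulas to the logged data, while the method of moments inverts the mean and variance formulas:

```python
# Sketch only: closed-form MLE and method-of-moments estimators
# for a simulated log-normal sample.
import numpy as np

rng = np.random.default_rng(2)
mu_true, sigma_true = 1.0, 0.5       # illustrative true parameters
x = rng.lognormal(mu_true, sigma_true, size=5_000)

# Maximum likelihood: normal-sample MLE applied to ln(x).
logx = np.log(x)
mu_mle = logx.mean()
sigma2_mle = logx.var()              # ddof=0 gives the (biased) MLE

# Method of moments: invert E[X] and Var(X).
xbar, s2 = x.mean(), x.var(ddof=1)
sigma2_mom = np.log(1 + s2 / xbar**2)
mu_mom = np.log(xbar) - sigma2_mom / 2

print(mu_mle, sigma2_mle)            # close to (1.0, 0.25)
print(mu_mom, sigma2_mom)
```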

Interval estimation

Interval estimation for the parameters of the log-normal distribution relies on the fact that if X \sim \mathrm{LN}(\mu, \sigma^2), then \ln X \sim \mathrm{N}(\mu, \sigma^2), so normal-theory methods apply to the logged data.[32] A confidence interval for \mu uses the sample mean and standard deviation of the logged observations y_i = \ln x_i, i = 1, \dots, n: with \bar{y} = n^{-1} \sum y_i and s^2 = (n-1)^{-1} \sum (y_i - \bar{y})^2, the (1-\alpha) confidence interval is \bar{y} \pm t_{n-1,1-\alpha/2}\, s/\sqrt{n}, where t_{n-1,1-\alpha/2} is the (1-\alpha/2)-quantile of the t-distribution with n-1 degrees of freedom.[33] This interval achieves exact coverage under the normality assumption for \ln X.[34] Confidence intervals for \sigma^2 follow from (n-1)s^2/\sigma^2 \sim \chi^2_{n-1}, which yields the exact (1-\alpha) interval

\frac{(n-1)s^2}{\chi^2_{n-1,\,1-\alpha/2}} < \sigma^2 < \frac{(n-1)s^2}{\chi^2_{n-1,\,\alpha/2}},

where \chi^2_{n-1,p} is the p-quantile of the chi-squared distribution with n-1 degrees of freedom and s^2 is the sample variance of the y_i. An interval for \sigma is obtained by taking square roots of the bounds.[35]

Since the median of the log-normal distribution is \exp(\mu), the confidence interval for the median is the exponential of the interval for \mu: \exp(\bar{y} \pm t_{n-1,1-\alpha/2}\, s/\sqrt{n}).[33] Because the transformation is monotone, this is an exact interval for the median.[34] For the mean E[X] = \exp(\mu + \sigma^2/2), approximate confidence intervals can be constructed by the delta method applied to \hat{\mu} = \bar{y} and \hat{\sigma}^2 = s^2, giving an asymptotic normal interval centered at \exp(\hat{\mu} + \hat{\sigma}^2/2) with a standard error derived from the variance-covariance matrix of (\hat{\mu}, \hat{\sigma}^2).[33] More precise intervals, especially in small samples, employ Fieller's theorem, which inverts a quadratic form in a pivotal statistic to obtain exact coverage; the resulting intervals may be unbounded when the coefficient of variation is large.[36] Generalized confidence intervals, based on Monte Carlo simulation of pivotal quantities involving normal and chi-squared random variables, maintain coverage near the nominal 95% level even for n = 5.[32]

Prediction intervals for a future observation X_{n+1} are obtained by first constructing a prediction interval for \ln X_{n+1} \sim \mathrm{N}(\mu, \sigma^2), namely \bar{y} \pm t_{n-1,1-\alpha/2}\, s\sqrt{1 + 1/n}, and then exponentiating the bounds.[37] This accounts for both estimation uncertainty and inherent variability, yielding asymmetric intervals that reflect the log-normal's skewness.[38] When comparing two independent log-normal distributions, say X \sim \mathrm{LN}(\mu_1, \sigma_1^2) and Y \sim \mathrm{LN}(\mu_2, \sigma_2^2), a confidence interval for the difference in medians \exp(\mu_1) - \exp(\mu_2) can be obtained by parametric bootstrap: generate bootstrap samples from the fitted distributions, compute the difference in sample medians for each, and take the appropriate percentiles of the empirical distribution of these differences.[39] This method performs well in small samples, with coverage probabilities close to nominal levels and shorter average lengths than normal-approximation or fiducial generalized intervals when the variances differ substantially.[39]
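A minimal sketch (assuming NumPy and SciPy; all data are simulated for illustration) of the intervals described above: the exact t-interval for \mu, the chi-squared interval for \sigma^2, the exponentiated interval for the median, a prediction interval for a future observation, and a parametric-bootstrap interval for a difference of medians:

```python
# Sketch only: exact and bootstrap intervals for a simulated sample.
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
alpha = 0.05
y = np.log(rng.lognormal(1.0, 0.5, size=30))   # logged sample
n, ybar, s = y.size, y.mean(), y.std(ddof=1)

# Exact t-interval for mu; exponentiate for the median.
t = stats.t.ppf(1 - alpha / 2, df=n - 1)
ci_mu = (ybar - t * s / np.sqrt(n), ybar + t * s / np.sqrt(n))
ci_median = tuple(np.exp(ci_mu))               # median = exp(mu)

# Exact chi-squared interval for sigma^2.
chi_lo = stats.chi2.ppf(alpha / 2, df=n - 1)
chi_hi = stats.chi2.ppf(1 - alpha / 2, df=n - 1)
ci_sigma2 = ((n - 1) * s**2 / chi_hi, (n - 1) * s**2 / chi_lo)

# Prediction interval for a future X: exponentiate the normal-theory PI.
half = t * s * np.sqrt(1 + 1 / n)
pi_x = (np.exp(ybar - half), np.exp(ybar + half))

# Parametric bootstrap for the difference in medians of two samples.
y2 = np.log(rng.lognormal(0.8, 0.7, size=25))  # second (illustrative) sample
m2, s2_, n2 = y2.mean(), y2.std(ddof=1), y2.size
diffs = [np.exp(np.median(rng.normal(ybar, s, n)))
         - np.exp(np.median(rng.normal(m2, s2_, n2)))
         for _ in range(5_000)]
ci_diff = np.percentile(diffs, [100 * alpha / 2, 100 * (1 - alpha / 2)])
print(ci_mu, ci_sigma2, ci_median, pi_x, ci_diff)
```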

Applications

Natural and social sciences

In biology, the log-normal distribution frequently describes the sizes of particles such as pollen grains and suspended matter in natural environments, where multiplicative growth processes lead to skewed positive values.[5] It also models cell volumes in various organisms, reflecting heterogeneous proliferation rates that produce a broad range of sizes within populations.[40] In ecology, species abundances in communities are often characterized by Preston's log-normal model, which posits a "veil" effect whereby sampling reveals progressively more rare species; the model fits empirical data from habitats as diverse as forests and grasslands.[41]

In medicine, the log-normal distribution applies to pharmacokinetics, where drug concentrations in plasma over time exhibit log-normal patterns due to proportional absorption and elimination processes, aiding dosing predictions for antibiotics and other therapeutics.[42] Tumor sizes in oncology studies similarly follow log-normal distributions, capturing growth dynamics driven by multiplicative cellular divisions, as observed in models of solid tumors such as breast and lung cancers.[40]

In chemistry, reaction times for processes such as polymerization or enzymatic reactions are modeled as log-normal owing to the compounding effects of multiple rate-limiting steps.[43] Isotope ratios in natural samples, including stable isotopes in geochemical cycles, often display log-normal variability stemming from fractionation processes that multiply small probabilistic differences.[5] Aerosol size distributions in atmospheric chemistry are classically fitted with log-normal forms, representing the nucleation and coagulation mechanisms that produce a skewed spectrum from fine to coarse particles.[44]

In the social sciences, income and wealth distributions conform to log-normal patterns under Gibrat's law, which assumes proportionate growth independent of size and explains the observed skewness of household earnings across economies.[45] City sizes approximate a log-normal distribution, providing a basis for Zipf's law in the upper tail, as urban growth through mergers and expansion follows multiplicative dynamics.[46] The heavy tail of the distribution contributes to economic inequality by amplifying disparities over time.[47] In demographics, human lifespan data are frequently log-normally distributed, accounting for accelerating mortality after an initial period of relative stability, as seen in actuarial studies of life expectancy.[48]

Engineering and finance

In the physical sciences, the log-normal distribution models rainfall amounts, which arise from multiplicative accumulation processes in atmospheric dynamics and often exhibit the positive skew and heavy tails that capture extreme precipitation events.[49] In reliability engineering, it describes failure times of components such as electronic devices, where degradation accumulates multiplicatively over time; the log-normal hazard rate is non-monotonic, rising to a maximum before declining.[50] In signal processing, log-normal models represent noise and shadowing effects in wireless communications, accounting for path-loss variations that multiply signal amplitudes and produce log-scale normality in received power levels.[51]

Financial modeling relies heavily on the log-normal distribution for asset prices: successive multiplicative shocks ensure non-negative prices and the geometric growth patterns observed in market data.[52] The Black–Scholes model for option pricing assumes that the underlying asset price evolves via geometric Brownian motion, implying a log-normal distribution at expiration and permitting closed-form valuation formulas under the risk-neutral measure.[53] Volatility clustering, the stylized fact that large changes in financial time series tend to follow large changes, is often captured by log-normal specifications of the volatility process, enabling multiscale analysis of return heteroskedasticity.[54]

In studies of human behavior, response times in psychological tasks, such as reaction-time experiments, are commonly fitted with log-normal distributions to handle right-skewed data reflecting variable cognitive processing speeds.[55] Word frequencies in linguistics follow patterns akin to Zipf's law, which can be derived from underlying log-normal distributions of lexical usage, explaining the power-law decay of rank-frequency plots across languages.[56] More recently, machine learning applications have incorporated log-normal priors in Bayesian models for uncertainty quantification, for example in neural-network pruning and formal verification, where they model positive-valued quantities such as weights or prediction errors.[57]

References
