Log–log plot

Note the logarithmic scale markings on each of the axes, and that the log x and log y axes (where the logarithms are 0) are where x and y themselves are 1.

In science and engineering, a log–log graph or log–log plot is a two-dimensional graph of numerical data that uses logarithmic scales on both the horizontal and vertical axes. Power functions – relationships of the form y = ax^k – appear as straight lines in a log–log graph, with the exponent k corresponding to the slope and the coefficient a corresponding to the intercept. Thus these graphs are very useful for recognizing these relationships and estimating parameters. Any base can be used for the logarithm, though base 10 (common logs) is most frequently used.
Relation with monomials
Given a monomial equation y = ax^k, taking the logarithm of the equation (with any base) yields:
log y = k log x + log a.
Setting X = log x and Y = log y, which corresponds to using a log–log graph, yields the equation
Y = mX + b,
where m = k is the slope of the line (gradient) and b = log a is the intercept on the (log y)-axis, meaning where log x = 0, so, reversing the logs, a is the y value corresponding to x = 1.[1]
Equations
The equation for a line on a log–log scale would be:
log10 F(x) = m log10 x + b,
F(x) = 10^b x^m,
where m is the slope and b is the intercept on the log plot.
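As an illustrative sketch (all values assumed, not from the source), a least-squares fit of the logged data recovers the slope m and intercept b of this line:

```python
import numpy as np

# Hypothetical power law F(x) = 10**b * x**m with m = 2 and 10**b = 3.
x = np.logspace(0, 3, 50)            # 1 to 1000, evenly spaced in log x
F = 3.0 * x**2

# On log-log axes the data fall on the line log10 F = m*log10 x + b.
m, b = np.polyfit(np.log10(x), np.log10(F), 1)
```

Since the simulated data are noise-free, the fit returns the exponent and coefficient essentially exactly (m ≈ 2, 10^b ≈ 3).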
Slope of a log–log plot
To find the slope of the plot, two points are selected on the x-axis, say x1 and x2. Using the line equation above:
log10 F(x1) = m log10 x1 + b and log10 F(x2) = m log10 x2 + b.
The slope m is found by taking the difference:
m = (log10 F2 − log10 F1) / (log10 x2 − log10 x1) = log10(F2/F1) / log10(x2/x1),
where F1 is shorthand for F(x1) and F2 is shorthand for F(x2). The figure at right illustrates the formula. Notice that the slope in the example of the figure is negative. The formula also provides a negative slope, as can be seen from the following property of the logarithm:
log10(F2/F1) = −log10(F1/F2).
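A minimal numeric check of the slope formula, using assumed points taken from the decreasing power law F(x) = 5/x (so the slope comes out negative):

```python
import math

x1, x2 = 2.0, 8.0
F1, F2 = 5.0 / x1, 5.0 / x2          # two points on F(x) = 5/x

# Slope between the two points on log-log axes.
m = math.log10(F2 / F1) / math.log10(x2 / x1)
```

Here m evaluates to −1, the exponent of the underlying power law.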
Finding the function from the log–log plot
The above procedure now is reversed to find the form of the function F(x) using its (assumed) known log–log plot. To find the function F, pick some fixed point (x0, F0), where F0 is shorthand for F(x0), somewhere on the straight line in the above graph, and further some other arbitrary point (x1, F1) on the same graph. Then from the slope formula above:
m = (log10 F1 − log10 F0) / (log10 x1 − log10 x0),
which leads to
log10 F1 = m log10(x1/x0) + log10 F0.
Notice that 10^(log10 F1) = F1. Therefore, the logs can be inverted to find:
F1 = F0 (x1/x0)^m,
which means that
F(x) = F0 (x/x0)^m.
In other words, F is proportional to x raised to the power of the slope of the straight line of its log–log graph. Specifically, a straight line on a log–log plot containing points (x0, F0) and (x1, F1) will have the function:
F(x) = F0 (x/x0)^(log10(F1/F0) / log10(x1/x0)).
Of course, the inverse is true too: any function of the form F(x) = A x^m will have a straight line as its log–log graph representation, where the slope of the line is m.
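The recovery procedure can be sketched as a small helper (function name and sample points are illustrative, not from the source):

```python
import math

def recover_power_law(x0, F0, x1, F1):
    """Return (m, F) with F(x) = F0*(x/x0)**m passing through both points."""
    m = math.log10(F1 / F0) / math.log10(x1 / x0)
    return m, (lambda x: F0 * (x / x0) ** m)

# Two points read off a hypothetical straight line on a log-log plot,
# consistent with F(x) = 4*x**2:
m, F = recover_power_law(1.0, 4.0, 100.0, 40000.0)
```

The recovered slope is 2 and the reconstructed function gives F(10) = 400, as expected for 4·10².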
Finding the area under a straight-line segment of log–log plot
To calculate the area under a continuous, straight-line segment of a log–log plot (or to estimate the area under an almost-straight segment), take the function F(x) = F0 (x/x0)^m defined previously and integrate it. Because the integral is definite (it has two defined endpoints), the area A under the plot takes the form
A = ∫ from x0 to x1 of F(x) dx.
Rearranging the original equation and plugging in the fixed-point values, it is found that
F(x) = (F0 / x0^m) x^m.
Substituting back into the integral, for A over x0 to x1:
A = (F0 / x0^m) ∫ from x0 to x1 of x^m dx = (F0 / x0^m) · (x1^(m+1) − x0^(m+1)) / (m + 1), for m ≠ −1.
Therefore,
A = (F0 / (m + 1)) (x1 (x1/x0)^m − x0) = (F1 x1 − F0 x0) / (m + 1), for m ≠ −1.
For m = −1, the integrand is F0 x0 / x and the integral becomes
A = F0 x0 ∫ from x0 to x1 of dx/x = F0 x0 ln(x1/x0).
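The two area formulas can be checked numerically; the helper below (an illustrative sketch) switches to the logarithmic form when m is close to −1:

```python
import math

def loglog_segment_area(x0, F0, x1, F1):
    """Area under F(x) = F0*(x/x0)**m between x0 and x1."""
    m = math.log(F1 / F0) / math.log(x1 / x0)
    if math.isclose(m, -1.0):
        return F0 * x0 * math.log(x1 / x0)       # special case m = -1
    return (F0 / (m + 1)) * (x1 * (x1 / x0) ** m - x0)

A = loglog_segment_area(1.0, 1.0, 2.0, 4.0)                  # F(x) = x**2 on [1, 2]
A_inv = loglog_segment_area(1.0, 1.0, math.e, 1.0 / math.e)  # F(x) = 1/x on [1, e]
```

The first call returns 7/3, the exact integral of x² on [1, 2]; the second returns 1, the exact integral of 1/x on [1, e].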
Log-log linear regression models
Log–log plots are often used for visualizing log-log linear regression models with (roughly) log-normal or log-logistic errors. In such models, after log-transforming the dependent and independent variables, a simple linear regression model can be fitted, with the errors becoming homoscedastic. This model is useful when dealing with data that exhibit power-law growth or decay, while the errors continue to grow as the independent value grows (i.e., heteroscedastic error).
As above, in a log-log linear model the relationship between the variables is expressed as a power law: every 1% change in the independent variable results in an approximately constant percentage change in the dependent variable (constant elasticity). The model is expressed as:
Y = a X^m ε,
where ε is a multiplicative, (roughly) log-normally distributed error term.
Taking the logarithm of both sides, we get:
log Y = log a + m log X + log ε.
This is a linear equation in the logarithms of X and Y, with log a as the intercept and m as the slope, in which b0 = log a, b1 = m, and e = log ε, giving the regression form log Y = b0 + b1 log X + e.
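A simulation sketch of such a model (all parameter values invented for illustration): data generated as Y = a·X^m·ε with log-normal ε become an ordinary linear regression after logging both variables.

```python
import numpy as np

rng = np.random.default_rng(0)

# Y = a * X**m * eps with multiplicative log-normal noise.
a, m = 2.0, 0.5
X = np.logspace(0, 2, 1000)
Y = a * X**m * rng.lognormal(mean=0.0, sigma=0.1, size=X.size)

# Log-transforming both variables makes the model linear and homoscedastic.
m_hat, b_hat = np.polyfit(np.log(X), np.log(Y), 1)
a_hat = np.exp(b_hat)
```

With 1,000 points the fitted slope and coefficient land close to the true values (m ≈ 0.5, a ≈ 2).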

Figure 1 illustrates how this looks. It presents two plots generated using 10,000 simulated points. The left plot, titled 'Concave Line with Log-Normal Noise', displays a scatter plot of the observed data (y) against the independent variable (x). The red line represents the 'Median line', while the blue line is the 'Mean line'. This plot illustrates a dataset with a power-law relationship between the variables, represented by a concave line.
When both variables are log-transformed, as shown in the right plot of Figure 1, titled 'Log-Log Linear Line with Normal Noise', the relationship becomes linear. This plot also displays a scatter plot of the observed data against the independent variable, but with both axes on a logarithmic scale. Here, the mean and median lines coincide in a single (red) line. This transformation allows us to fit a simple linear regression model (which can then be transformed back to the original scale, as the median line).

The transformation from the left plot to the right plot in Figure 1 also demonstrates the effect of the log transformation on the distribution of noise in the data. In the left plot, the noise appears to follow a log-normal distribution, which is right-skewed and can be difficult to work with. In the right plot, after the log transformation, the noise appears to follow a normal distribution, which is easier to reason about and model.
This normalization of noise is further analyzed in Figure 2, which presents a line plot of three error metrics (Mean Absolute Error - MAE, Root Mean Square Error - RMSE, and Mean Absolute Logarithmic Error - MALE) calculated over a sliding window of size 28 on the x-axis. The y-axis gives the error, plotted against the independent variable (x). Each error metric is represented by a different color, with the corresponding smoothed line overlaying the original line (since this is just simulated data, the error estimation is a bit jumpy). These error metrics provide a measure of the noise as it varies across different x values.
Log-log linear models are widely used in various fields, including economics, biology, and physics, where many phenomena exhibit power-law behavior. They are also useful in regression analysis when dealing with heteroscedastic data, as the log transformation can help to stabilize the variance.
Applications
These graphs are useful when the parameters a and b need to be estimated from numerical data. Specifications such as this are used frequently in economics.
One example is the estimation of money demand functions based on inventory theory, in which it can be assumed that money demand at time t is given by
M_t = A R_t^b Y_t^c U_t,
where M is the real quantity of money held by the public, R is the rate of return on an alternative, higher-yielding asset in excess of that on money, Y is the public's real income, U is an error term assumed to be lognormally distributed, A is a scale parameter to be estimated, and b and c are elasticity parameters to be estimated. Taking logs yields
m_t = a + b r_t + c y_t + u_t,
where m = log M, a = log A, r = log R, y = log Y, and u = log U, with u being normally distributed. This equation can be estimated using ordinary least squares.
Another economic example is the estimation of a firm's Cobb–Douglas production function, which is the right side of the equation
Q = A N^α K^β U,
in which Q is the quantity of output that can be produced per month, N is the number of hours of labor employed in production per month, K is the number of hours of physical capital utilized per month, U is an error term assumed to be lognormally distributed, and A, α, and β are parameters to be estimated. Taking logs gives the linear regression equation
q = a + α n + β k + u,
where q = log Q, a = log A, n = log N, k = log K, and u = log U.
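This estimation can be sketched with simulated data (all parameter values are invented for illustration; real applications use observed firm data):

```python
import numpy as np

rng = np.random.default_rng(1)

# Simulated Cobb-Douglas data: Q = A * N**alpha * K**beta * U,
# with U log-normally distributed (parameter values illustrative).
A, alpha, beta = 1.5, 0.7, 0.3
N = rng.uniform(50, 500, size=400)
K = rng.uniform(10, 100, size=400)
U = rng.lognormal(mean=0.0, sigma=0.05, size=400)
Q = A * N**alpha * K**beta * U

# Taking logs gives q = a + alpha*n + beta*k + u, estimable by OLS.
design = np.column_stack([np.ones(400), np.log(N), np.log(K)])
(a_hat, alpha_hat, beta_hat), *_ = np.linalg.lstsq(design, np.log(Q), rcond=None)
```

The OLS estimates recover the elasticities α and β and, after exponentiating the intercept, the scale parameter A.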
Log–log regression can also be used to estimate the fractal dimension of a naturally occurring fractal.
However, going in the other direction – observing that data appears as an approximate line on a log–log scale and concluding that the data follows a power law – is not always valid.[2]
In fact, many other functional forms appear approximately linear on the log–log scale, and simply evaluating the goodness of fit of a linear regression on logged data using the coefficient of determination (R2) may be invalid, as the assumptions of the linear regression model, such as Gaussian error, may not be satisfied; in addition, tests of fit of the log–log form may exhibit low statistical power, as these tests may have low likelihood of rejecting power laws in the presence of other true functional forms. While simple log–log plots may be instructive in detecting possible power laws, and have been used dating back to Pareto in the 1890s, validation as a power law requires more sophisticated statistics.[2]
These graphs are also extremely useful when data are gathered by varying the control variable along an exponential function, in which case the control variable x is more naturally represented on a log scale, so that the data points are evenly spaced, rather than compressed at the low end. The output variable y can either be represented linearly, yielding a lin–log graph (log x, y), or its logarithm can also be taken, yielding the log–log graph (log x, log y).
A Bode plot (a graph of the frequency response of a system) is also a log–log plot.
In chemical kinetics, the general form of the dependence of the reaction rate on concentration takes the form of a power law (law of mass action), so a log-log plot is useful for estimating the reaction parameters from experiment.
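As a sketch of this use (hypothetical, noise-free rate data), fitting a line on log-log axes recovers the reaction order n and rate constant k of r = k[A]^n:

```python
import numpy as np

conc = np.array([0.1, 0.2, 0.4, 0.8])      # concentrations, mol/L (hypothetical)
rate = 0.05 * conc**2                      # second-order rates, k = 0.05

# On log-log axes: slope = reaction order n, intercept = log10(k).
n, log_k = np.polyfit(np.log10(conc), np.log10(rate), 1)
k = 10**log_k
```

The fit returns n ≈ 2 and k ≈ 0.05, matching the generating rate law.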
References
[edit]- ^ Bourne, Murray. "7. Log-Log and Semi-log Graphs". www.intmath.com. Retrieved 2024-10-15.
- ^ a b Clauset, A.; Shalizi, C. R.; Newman, M. E. J. (2009). "Power-Law Distributions in Empirical Data". SIAM Review. 51 (4): 661–703. arXiv:0706.1062. Bibcode:2009SIAMR..51..661C. doi:10.1137/070710111. S2CID 9155618.
Fundamentals
Definition
A log–log plot is a two-dimensional graph that employs logarithmic scales on both the horizontal (x) and vertical (y) axes, allowing data points to be plotted by taking the logarithm of their coordinates rather than the raw values.[5] These scales are typically base-10 for practical plotting, though natural logarithms (base e) can also be used depending on the context.[6] The logarithmic scaling compresses large ranges of values into a more manageable visual space, making it ideal for datasets that span several orders of magnitude, such as population growth or physical measurements in science and engineering. By applying logarithms to both variables, a log–log plot transforms multiplicative relationships in the original data into additive ones, simplifying the analysis of proportional changes.[7] For example, if the data follow a power-law relationship y = k x^a, where k is a constant and a is the exponent, the plot becomes a straight line described by the equation log y = a log x + log k, with the slope representing the exponent a and the y-intercept representing log k.[8] The use of log–log plots dates back to 1844, when French engineer Léon Lalanne created the first such plot as part of his graphical calculator for simplifying computations.[9] Logarithmic graph paper, which facilitated manual construction of such graphs without calculators by pre-scaling the axes logarithmically, was popularized in the late 19th and early 20th centuries.[10] This built on earlier 19th-century developments in graphical methods and became a standard tool in scientific visualization during the expansion of engineering and statistical practices.
Comparison to Other Logarithmic Plots
A semi-log plot, also known as a semilogarithmic graph, features a logarithmic scale on one axis – typically the vertical (y-axis) – and a linear scale on the other (x-axis). This type of plot is particularly useful for visualizing exponential relationships, such as growth or decay processes, where the dependent variable changes multiplicatively over time or another independent variable. For instance, in modeling population dynamics, exponential growth of the form N(t) = N0 e^(rt) appears as a straight line on a semi-log plot, facilitating easier identification of the growth rate r without transformation computations.[5][11] In contrast, linear plots, with both axes on arithmetic scales, often fail to effectively display data that span several orders of magnitude, as small values become visually compressed near the origin while large values dominate the graph, obscuring underlying patterns or trends. Log-log plots address this by applying logarithmic scales to both axes, which proportionally compresses the entire range of values, allowing for clearer visualization of relative changes across wide scales. This dual logarithmic transformation is especially advantageous for revealing power-law relationships that remain hidden or distorted in linear or semi-log representations, as it linearizes functions of the form y = k x^a without requiring manual data adjustment.[12][13] A practical distinction arises in biological applications: for exponential population growth, a semi-log plot highlights the constant proportional increase, producing a linear trend that simplifies rate estimation. However, in allometric scaling – such as the relationship between an animal's body mass and metabolic rate, which follows a power law – a log-log plot is preferred, as it transforms the nonlinear scaling into a straight line, enabling precise assessment of the exponent that describes how traits vary with size.
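This distinction can be checked numerically (the functions below are illustrative): exponential data straighten under a semi-log transform, while power-law data straighten under a log-log transform.

```python
import numpy as np

x = np.linspace(1.0, 10.0, 50)
expo = 2.0 * np.exp(0.5 * x)        # exponential: straight on semi-log axes
power = 2.0 * x**1.5                # power law: straight on log-log axes

# Linear fits in the matching coordinates recover rate and exponent.
r_hat, c_hat = np.polyfit(x, np.log(expo), 1)            # semi-log fit
k_hat, b_hat = np.polyfit(np.log(x), np.log(power), 1)   # log-log fit
```

The semi-log fit recovers the growth rate (r ≈ 0.5); the log-log fit recovers the scaling exponent (a ≈ 1.5).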
This choice underscores log-log plots' superiority for multiplicative, scale-invariant phenomena over the additive focus of linear plots or the one-sided exponential emphasis of semi-log plots.[11][14]
Mathematical Properties
Relation to Monomials and Power Laws
A log–log plot provides a direct visualization of monomial relationships, which take the form y = a x^k, where a is a constant and k is the exponent.[15] This functional form encompasses power laws, where the dependent variable scales proportionally to a power of the independent variable. Taking the logarithm (typically base 10 or natural) of both sides transforms the equation into log y = k log x + log a, which represents a straight line in the log–log space with slope k and y-intercept log a.[15] Consequently, data adhering to such a monomial appear as a linear trend on the plot, facilitating the identification and analysis of power-law behaviors that might be obscured in linear scales.[16] Power laws modeled by monomials are ubiquitous in natural systems, often emerging from fundamental physical or biological principles. In physics, the gravitational force between two point masses follows Newton's inverse square law, F = G m1 m2 / r^2, a power law with exponent −2, where r is the distance.[17] In biology, Kleiber's law describes how an organism's basal metabolic rate scales with body mass as B ∝ M^(3/4), reflecting quarter-power scaling observed across diverse taxa from unicellular organisms to mammals.[18] These examples illustrate how log–log plots linearize such relationships, revealing the underlying exponents that govern scaling in nature.[19] The monomial framework generalizes to products of powers, such as y = a x1^(k1) x2^(k2), where multiple independent variables are involved; the logarithmic transformation yields log y = log a + k1 log x1 + k2 log x2, maintaining linearity in the log-transformed coordinates.[20] This extends the utility of log–log plots to multivariate power laws, common in systems like allometric scaling where traits depend on several size metrics.
For more complex functions, such as polynomials, log–log representations can approximate local power-law behaviors over limited ranges, aiding in the extraction of dominant scaling exponents.[21] The slope in these linearized forms carries interpretive significance regarding the scaling relationship, as detailed below.
Slope Interpretation
In a log-log plot, the slope of the fitted line represents the exponent in a power-law relationship y = a x^k, derived from the transformation log y = k log x + log a.[15] This slope is calculated as m = Δ(log y) / Δ(log x), where the logarithmic units cancel out, yielding a dimensionless value that directly quantifies the scaling behavior.[15] A straight line on the plot confirms the power-law form, with the slope indicating the power of the relationship.[1] The interpretation of the slope value provides insight into the nature of the scaling: a slope m > 1 signifies superlinear scaling, such as accelerating growth where output increases faster than input (e.g., certain perceptual responses to stimulus intensity).[22] A slope m = 1 indicates linear scaling, proportional to the input. A slope m < 1 denotes sublinear scaling, often associated with diminishing returns or compressive effects (e.g., metabolic rates in biology).[22] The slope is typically obtained through least-squares regression on the log-transformed data, minimizing the sum of squared residuals in the logarithmic space to estimate both the exponent k (slope) and the prefactor a (from the intercept log a).[15] Uncertainties in the slope are assessed by fitting maximum and minimum lines consistent with the data points, providing confidence intervals for the exponent.[15] The log transformation affects variance structure, as it assumes multiplicative (log-normal) errors in the original data, which manifest as heteroscedasticity in absolute terms but become homoscedastic on the log scale.[23] This assumption stabilizes variance for datasets with proportional errors, common in power-law contexts, but requires verification to ensure the transformed residuals meet linearity and constant-variance criteria.[7]
Function Recovery from Plot
In a log-log plot using base-10 logarithms, the transformation linearizes power-law relationships of the form y = k x^m, where the plotted line follows log10 y = m log10 x + log10 k. To recover the original function, identify the slope m and y-intercept b = log10 k from the linear fit, then exponentiate to obtain the power law.[24] The recovery process begins by fitting a straight line to the log-log data, yielding the parameters m and b. Next, compute the prefactor k = 10^b, which reverses the logarithmic shift on the intercept. Finally, reconstruct the function as y = k x^m by substituting back, ensuring the domain and range match the original data's scales to avoid extrapolation errors. This inversion preserves the multiplicative nature of the power law, as the logarithmic transformation is monotonic and bijective for positive values.[25] For data exhibiting non-linear trends on the log-log plot, such as deviations from a single straight line, model the relationship as a piecewise power law. Identify breakpoints where the slope changes – often visually or via statistical tests – and fit separate linear segments to each interval. The recovered function then consists of concatenated power laws, y = k_i x^(m_i) for x in the i-th segment, capturing transitions like scale-dependent behaviors in physical systems.[26] Consider a numerical example: a log-log plot shows a straight line passing through the points (log10 x, log10 y) = (1, 3) and (2, 5). The slope is m = (5 − 3) / (2 − 1) = 2. Using the first point, the intercept is b = 3 − 2 · 1 = 1, so k = 10^1 = 10. Thus, the original function recovers as y = 10 x^2. Verifying with the second point: log10 y = 2 · 2 + 1 = 5, confirming the fit.[27]
Area Under Log-Log Curves
In log-log plots, a straight-line segment between points (x0, F0) and (x1, F1) (where x1 > x0 and both F values are positive) corresponds to a power-law relationship F(x) = F0 (x/x0)^m in the original coordinates, with slope given by m = log(F1/F0) / log(x1/x0).[1] The area under this curve, A = ∫ from x0 to x1 of F(x) dx, represents quantities such as total energy or flux in physical systems exhibiting power-law behavior, and it can be obtained by transforming back from the logarithmic scale via a change of variables. To derive the area, substitute the power-law form into the integral. Let u = ln x, so x = e^u and dx = e^u du (assuming natural log for generality; base-10 follows analogously). Then F(x) = F0 e^(m(u − u0)), yielding
A = F0 e^(−m u0) ∫ from u0 to u1 of e^((m+1)u) du = F0 e^(−m u0) · (e^((m+1)u1) − e^((m+1)u0)) / (m + 1)
for m ≠ −1. Since x0 = e^(u0) and x1 = e^(u1), it follows that F0 e^(−m u0) e^((m+1)u1) = F1 x1 and F0 e^(−m u0) e^((m+1)u0) = F0 x0, simplifying to
A = (F1 x1 − F0 x0) / (m + 1).
This expression avoids explicit dependence on the intermediate power-law form and directly uses the endpoint coordinates from the log-log plot. An alternative derivation proceeds by integration by parts on the transformed coordinates, leading to the same antiderivative after evaluating the boundaries.[28] When m = −1, the denominator vanishes, and the power-law form is F(x) = F0 x0 / x. The integral becomes
A = F0 x0 ln(x1/x0),
since ∫ dx/x = ln x. This logarithmic form arises in scenarios like inverse-square decay processes. For example, in the decay of kinetic energy in granular flows or turbulent systems following Haff's law (E(t) ∝ t^(−2), so m = −2), the total energy dissipated over time is computed as the area under the log-log energy-time curve using the general formula, providing insight into long-term dissipation rates.[29]
Statistical Methods
Log-Log Linear Regression
Log-log linear regression is a statistical method employed to estimate parameters in models exhibiting power-law relationships, where the response variable scales as a power of the predictor. The approach transforms both variables to their logarithms, linearizing the nonlinear power-law form for application of ordinary least squares (OLS). Specifically, the model is expressed as
log Y = β0 + β1 log X + ε,
where ε is the error term, assumed to be normally distributed with mean zero and constant variance. This formulation is mathematically equivalent to the multiplicative power-law model Y = e^(β0) X^(β1) e^(ε), with e^(ε) denoting the retransformation of the additive log-space error to the original scale.[30] Applying OLS to the log-transformed data yields unbiased and consistent estimates of the parameters under the standard assumptions of linearity, homoscedasticity, and independence in the logged space. The estimated coefficient β1 corresponds to the slope in the log-log plot and quantifies the constant elasticity between Y and X, while β0 serves as the log-intercept, from which the scaling factor e^(β0) can be derived. This method is particularly advantageous for data spanning multiple orders of magnitude, as the logarithmic scale compresses extreme values and stabilizes variance, facilitating more reliable parameter estimation compared to direct nonlinear fitting.[31] A key consideration in log-log regression is the retransformation bias arising from the log transformation, which converts additive errors in log space to multiplicative errors in the original scale, leading to biased predictions when fitted values are simply exponentiated. This bias systematically underestimates the expected value of Y unless corrected. The smearing estimate, a nonparametric retransformation technique, addresses this by computing the average of e^(ε̂_i) across the residuals from the OLS fit, then multiplying the naive exponentiated prediction by this average to obtain an unbiased estimate of E[Y | X].
Proposed by Duan in 1983, this method performs well without assuming a specific error distribution and is especially effective for empirical applications involving heteroscedastic or non-normal errors.[32] Implementations of log-log linear regression are readily available in major statistical software environments. In R, the base lm() function can be used after logarithmic transformation of the variables, while in Python, the statsmodels library supports OLS estimation on transformed data through its OLS class. These tools enable straightforward application without requiring specialized nonlinear solvers.
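A minimal sketch of this workflow with simulated data (all parameter values illustrative), using a plain least-squares fit in log space plus Duan's smearing factor:

```python
import numpy as np

rng = np.random.default_rng(2)

# Simulated power-law data with multiplicative log-normal noise.
x = np.logspace(0, 2, 500)
y = 3.0 * x**1.2 * rng.lognormal(mean=0.0, sigma=0.3, size=x.size)

slope, intercept = np.polyfit(np.log(x), np.log(y), 1)
resid = np.log(y) - (intercept + slope * np.log(x))

# Exponentiating fitted values estimates the conditional median of y;
# Duan's smearing factor (mean of exp(residuals)) retargets the mean.
smearing = np.mean(np.exp(resid))

def predict_mean(x_new):
    return np.exp(intercept + slope * np.log(x_new)) * smearing
```

By Jensen's inequality the smearing factor exceeds 1 whenever the residuals vary, which is exactly the correction the naive exponentiated prediction is missing.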