Morris method

from Wikipedia

In applied statistics, the Morris method for global sensitivity analysis is a so-called one-factor-at-a-time method, meaning that in each run only one input parameter is given a new value. It facilitates a global sensitivity analysis by making a number of local changes at different points of the possible range of input values.

Method's details

Elementary effects' distribution

The finite distribution of elementary effects associated with the $i$-th input factor is obtained by randomly sampling different points $\mathbf{x}$ from the input domain $\Omega$, and is denoted by $F_i$.[1]

Variations

In the original work of Morris, the two sensitivity measures proposed were the mean, $\mu$, and the standard deviation, $\sigma$, of $F_i$. However, choosing $\mu$ has the drawback that, if the distribution $F_i$ contains negative elements, which occurs when the model is non-monotonic, some effects may cancel each other out when computing the mean. Thus, the measure $\mu$ on its own is not reliable for ranking factors in order of importance. It is necessary to consider $\mu$ and $\sigma$ together, as a factor with elementary effects of different signs (which cancel each other out) would have a low value of $\mu$ but a considerable value of $\sigma$, and taking both into account avoids underestimating such factors.[1]

If the distribution $F_i$ contains negative elements, which occurs when the model is non-monotonic, some effects may cancel each other out when computing the mean. When the goal is to rank factors in order of importance using a single sensitivity measure, the advice in the literature is to use $\mu^*$, the mean of the distribution of absolute values of the elementary effects, which avoids the cancellation of effects of opposite signs.[1]

In the revised Morris method, $\mu^*$ is used to detect input factors with an important overall influence on the output, while $\sigma$ is used to detect factors involved in interactions with other factors or whose effect is non-linear.[1]
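To see the cancellation concretely, the short sketch below (a made-up numeric example, not taken from the cited sources) computes $\mu$, $\mu^*$, and $\sigma$ for two hypothetical factors: one with consistently positive elementary effects and one whose effects alternate in sign.

```python
import numpy as np

# Hypothetical samples of elementary effects for two input factors.
ee_monotonic = np.array([1.8, 2.1, 2.0, 1.9])     # consistently positive effects
ee_cancelling = np.array([2.0, -2.1, 1.9, -1.8])  # signs alternate and cancel

for name, ee in [("monotonic", ee_monotonic), ("cancelling", ee_cancelling)]:
    mu = ee.mean()                # original Morris measure: signs can cancel
    mu_star = np.abs(ee).mean()   # revised measure mu*
    sigma = ee.std(ddof=1)        # spread: nonlinearity/interaction signal
    print(f"{name}: mu={mu:+.2f}, mu*={mu_star:.2f}, sigma={sigma:.2f}")

# The cancelling factor has mu close to 0 but a large mu* and sigma,
# so ranking by mu alone would wrongly flag it as unimportant.
```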

Method's steps

The method starts by sampling a set of start values within the defined ranges of possible values for all input variables and calculating the subsequent model outcome. The second step changes the value of one variable (all other inputs remaining at their start values) and calculates the resulting change in model outcome compared to the first run. Next, the value of another variable is changed (the previous variable is kept at its changed value and all others at their start values) and the resulting change in model outcome compared to the second run is calculated. This continues until all input variables have been changed. The procedure is repeated $r$ times (where $r$ is usually taken between 5 and 15), each time with a different set of start values, which leads to $r(k+1)$ runs, where $k$ is the number of input variables. This number is very small compared to the requirements of more demanding methods for sensitivity analysis.[2]
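The following is a minimal sketch of these steps in Python, assuming inputs scaled to the unit hypercube and a uniform $p$-level grid; the function names and the toy model are illustrative, not part of the original description.

```python
import numpy as np

def morris_trajectory(k, p=4, rng=None):
    """Build one trajectory of k + 1 points on a p-level grid in [0, 1]^k.
    Each successive point changes exactly one input by the increment delta."""
    rng = rng if rng is not None else np.random.default_rng()
    delta = p / (2 * (p - 1))          # standard Morris increment
    grid = np.arange(p) / (p - 1)      # p levels in [0, 1]
    # Start from grid values that leave room for a +delta step.
    start_levels = grid[grid <= 1 - delta + 1e-12]
    x = rng.choice(start_levels, size=k)
    points = [x.copy()]
    for i in rng.permutation(k):       # change each input exactly once
        x = x.copy()
        x[i] += delta
        points.append(x)
    return np.array(points), delta

def elementary_effects(f, k, r=10, p=4, seed=0):
    """Run r trajectories (r * (k + 1) model evaluations) and collect
    one elementary effect per input per trajectory."""
    rng = np.random.default_rng(seed)
    ee = np.zeros((r, k))
    for t in range(r):
        pts, delta = morris_trajectory(k, p, rng)
        y = np.array([f(x) for x in pts])
        for j in range(k):
            i = int(np.nonzero(pts[j + 1] != pts[j])[0][0])  # input that moved
            step = pts[j + 1, i] - pts[j, i]                 # signed increment
            ee[t, i] = (y[j + 1] - y[j]) / step
    return ee

# Toy model with k = 3 inputs: x1 linear, x2 nonlinear, x3 interacting with x1.
f = lambda x: x[0] + 2.0 * x[1] ** 2 + 0.5 * x[0] * x[2]
ee = elementary_effects(f, k=3, r=10)
print(np.abs(ee).mean(axis=0), ee.std(axis=0, ddof=1))  # mu*, sigma per input
```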

A sensitivity analysis method widely used to screen factors in models of large dimensionality is the design proposed by Morris.[3] The Morris method deals efficiently with models containing hundreds of input factors without relying on strict assumptions about the model, such as additivity or monotonicity of the input-output relationship. It is simple to understand and implement, and its results are easily interpreted. Furthermore, it is economical in the sense that it requires a number of model evaluations that is linear in the number of model factors. The method can be regarded as global, as the final measure is obtained by averaging a number of local measures (the elementary effects) computed at different points of the input space.[2]

from Grokipedia
The Morris method is a screening technique for global sensitivity analysis in computational models, developed by Max D. Morris in 1991 to identify influential input parameters among many in deterministic systems. It employs a one-factor-at-a-time (OAT) approach, perturbing individual inputs sequentially across multiple randomized trajectories in the input space to capture both main effects and potential interactions or nonlinearities. The procedure begins by scaling the input domain to a $k$-dimensional unit hypercube discretized into a $p$-level grid (commonly $p = 4$ or $p = 8$), with perturbations $\Delta$ taken as multiples of $1/(p-1)$. Elementary effects are then calculated for each input $x_i$ at selected base points $\mathbf{x}$ as finite differences: $d_i(\mathbf{x}) = \frac{y(\mathbf{x} + \Delta \mathbf{e}_i) - y(\mathbf{x})}{\Delta}$, where $y$ is the model output and $\mathbf{e}_i$ is the $i$-th unit vector. For $r$ trajectories (typically $r = 10$ to $20$), $r$ effects per input are generated, yielding distributions summarized by the mean $\mu_i$ (or absolute mean $\mu_i^*$ to mitigate sign cancellation) and standard deviation $\sigma_i$; a large $\mu_i^*$ signals overall importance, while an elevated $\sigma_i$ highlights interactions or nonlinearity. The method's efficiency, requiring approximately $r(k+1)$ model evaluations for $k$ inputs, makes it ideal for initial screening in high-dimensional models where full variance-based analysis is prohibitive. It requires no assumptions of input sparsity, monotonicity, or additivity, enabling robust application across diverse domains including environmental simulation, optimization, and biomedical modeling for uncertainty quantification.

Introduction

Definition and Purpose

The Morris method is a global sensitivity analysis technique that utilizes randomized one-factor-at-a-time (OAT) designs to assess the influence of individual input factors on the output of deterministic computational models. It computes elementary effects, finite differences in model output resulting from small perturbations to a single input while holding the others fixed, at multiple sampled points across the input space, providing a distribution of effects for each factor. This approach allows for the evaluation of factor importance without assuming additivity, monotonicity, or specific functional forms of the model.

The primary purpose of the Morris method is to screen and rank input factors by their overall influence in preliminary computational experiments, particularly for complex models with a moderate to large number of inputs. By analyzing the mean and standard deviation of the elementary effects, it identifies non-influential parameters that can be fixed or simplified to reduce model complexity, while also flagging factors that exhibit nonlinear behavior or interactions with others. This makes it valuable in applied modeling fields, such as environmental simulation and engineering design, where understanding key drivers is essential before more detailed analyses.

Compared to variance-based global sensitivity methods such as Sobol indices, the Morris method is notably efficient for high-dimensional problems, requiring only on the order of $r(k+1)$ model evaluations, where $r$ is the number of sampling trajectories and $k$ is the number of inputs, versus thousands or more for Sobol approaches, enabling rapid screening even for computationally expensive models.

Historical Development

The Morris method was introduced by Max D. Morris in 1991 in his seminal paper "Factorial Sampling Plans for Preliminary Computational Experiments," published in Technometrics, where it was presented as an efficient one-at-a-time (OAT) screening technique for preliminary identification of influential input factors in complex computational models with many variables.

Key refinements emerged in subsequent years to enhance the method's robustness, particularly in handling nonlinearities and interactions. In 2007, Francesca Campolongo, Jessica Cariboni, and Andrea Saltelli developed an improved screening design that introduced the $\mu^*$ measure, which mitigates the cancellation of opposing elementary effects in the original mean metric, providing a more reliable indicator of importance. This update built directly on Morris's framework while improving its discriminatory power for large-scale models. The method's integration into comprehensive sensitivity analysis practice was advanced by Andrea Saltelli and colleagues in their 2004 book Sensitivity Analysis in Practice: A Guide to Assessing Scientific Models, which positioned the Morris approach as a computationally efficient complement to variance-based global methods, emphasizing its role in model screening and uncertainty propagation.

During the 2000s and 2010s, the Morris method gained widespread adoption in environmental and engineering applications, valued for its low computational cost in screening models with dozens of parameters, such as those simulating ecological systems or hydrological processes. By 2025, it had become a standard feature in open-source and commercial software, including Python's SALib library for global sensitivity analysis and MATLAB's SAFE Toolbox, enabling seamless implementation across interdisciplinary research.
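Given the mention of SALib, here is a brief usage sketch; it assumes SALib's Morris sampling and analysis interface (`SALib.sample.morris` and `SALib.analyze.morris`) and an arbitrary three-input toy model.

```python
from SALib.sample.morris import sample as morris_sample
from SALib.analyze.morris import analyze as morris_analyze

# Illustrative problem definition; variable names and bounds are arbitrary.
problem = {
    "num_vars": 3,
    "names": ["x1", "x2", "x3"],
    "bounds": [[0.0, 1.0], [0.0, 1.0], [0.0, 1.0]],
}

X = morris_sample(problem, N=10, num_levels=4)  # 10 trajectories -> 40 rows
Y = X[:, 0] + 2.0 * X[:, 1] ** 2 + 0.5 * X[:, 0] * X[:, 2]  # toy model
Si = morris_analyze(problem, X, Y, num_levels=4)

print(Si["mu_star"], Si["sigma"])               # screening measures per factor
```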

Background Concepts

Sensitivity Analysis Overview

Sensitivity analysis is a technique used to quantify the relationship between uncertainties in model inputs and the resulting variations in model outputs, providing insight into how input parameters influence system behavior. This approach is fundamental for understanding uncertainty propagation, validating model structures, and supporting informed decision-making across disciplines such as environmental science and engineering. In these fields, it enables practitioners to trace how input uncertainties, often represented by probability distributions, affect predictions, such as in hydrological models or structural reliability assessments.

Sensitivity analysis methods are broadly categorized into local and global types. Local methods evaluate sensitivity by examining the effect of small changes in inputs around a specific nominal point, typically using partial derivatives to approximate the output response. In contrast, global methods assess sensitivity across the entire input space, accounting for parameter interactions, nonlinear effects, and full input distributions, which makes them suitable for complex, nonlinear models. Within global approaches, variance-based techniques, such as those developed by Sobol, decompose the total output variance to attribute contributions from individual inputs and their interactions. Screening methods, exemplified by one-at-a-time (OAT) approaches, focus on rapidly identifying influential parameters by varying inputs sequentially while holding others constant.

The importance of sensitivity analysis lies in its ability to identify the key input parameters that drive output uncertainty, thereby reducing model complexity and enhancing robustness against input variations. By prioritizing effort on critical factors and revealing model deficiencies, it aids in model verification and validation processes. Prerequisites for conducting sensitivity analysis include a basic understanding of probability, to define input distributions, and of appropriate metrics for evaluating model outputs, ensuring that analyses reflect realistic scenarios.

One-at-a-Time vs. Global Methods

One-at-a-time (OAT) methods in sensitivity analysis involve systematically perturbing a single input factor while holding all others constant at nominal values, allowing for the direct assessment of individual effects on model outputs. These approaches are computationally inexpensive, typically requiring a number of model evaluations linear in the number of inputs (e.g., $2k$ evaluations for $k$ inputs in basic designs), and are exemplified by traditional designs or simple derivative-based analyses. However, OAT methods are limited in their ability to detect interactions between factors or nonlinear behaviors unless perturbations are repeated extensively across the input space, often leading to incomplete insights in complex models.

In contrast, global methods explore the entire range of the input factors simultaneously, accounting for their distributions, variances, and interactions to provide a more comprehensive evaluation of sensitivity. Techniques such as variance-based approaches, including Sobol indices, decompose output variance into contributions from individual factors and their interactions using strategies like Monte Carlo sampling, which can require thousands of model runs (e.g., $N(k+2)$ evaluations, where $N$ is often 1000 or more for reliable estimates). While these methods capture higher-order effects and non-monotonic relationships effectively, their high computational demand makes them impractical for initial screening in models with many inputs.

The Morris method occupies a niche as an OAT-based global screening technique, achieving broader coverage than traditional OAT by generating multiple randomized trajectories across the input space to approximate overall factor influences and detect potential nonlinearities or interactions. This design requires $r(k+1)$ evaluations (with $r$ trajectories), offering a balance of efficiency and informativeness suitable for identifying key factors before applying more resource-intensive global methods such as Sobol indices. Nonetheless, it may overlook subtle higher-order interactions, positioning it primarily as a preliminary tool rather than a substitute for full global assessment.
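A quick arithmetic comparison of these evaluation budgets, using the formulas quoted above with illustrative values for $k$, $r$, and $N$:

```python
k = 50             # number of input factors
r = 10             # Morris trajectories
N = 1000           # Sobol base sample size (a common choice)

oat_runs = 2 * k           # basic OAT design
morris_runs = r * (k + 1)  # Morris screening
sobol_runs = N * (k + 2)   # variance-based Sobol estimates

print(oat_runs, morris_runs, sobol_runs)  # 100, 510, 52000
```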

Mathematical Formulation

Elementary Effects

The elementary effect in the Morris method quantifies the sensitivity of a model's output to a small perturbation in a single input factor, while keeping all other inputs constant. For a model $y = f(\mathbf{x})$, where $\mathbf{x} = (x_1, \dots, x_k)$ is the vector of $k$ input factors and $y$ is the scalar output, the elementary effect $EE_i(\mathbf{x})$ for the $i$-th input factor $x_i$ is defined as

$$EE_i(\mathbf{x}) = \frac{f(\mathbf{x} + \Delta \mathbf{e}_i) - f(\mathbf{x})}{\Delta},$$

where $\Delta$ is a finite increment, $\mathbf{e}_i$ is the $i$-th unit vector in $\mathbb{R}^k$, and $\mathbf{x} + \Delta \mathbf{e}_i$ must remain within the input domain $\Omega$. This one-at-a-time perturbation captures the local gradient-like change in the output attributable to $x_i$, serving as the foundational unit for global sensitivity assessment.

The perturbation size $\Delta$ is typically set as $\Delta = \frac{p}{2(p-1)}$ for even $p$, which is a multiple of the grid spacing $\frac{1}{p-1}$, ensuring that the perturbed point stays on the grid. The input domain $\Omega$ is often discretized as a $k$-dimensional $p$-level grid within the unit hypercube $[0, 1]^k$, with factor values in $\{0, \frac{1}{p-1}, \frac{2}{p-1}, \dots, 1\}$, to approximate continuous inputs or directly represent discrete ones.

Elementary effects are computed at multiple randomly selected base points $\mathbf{x}$ across $\Omega$, generating a sample from the distribution $F_i$ of $EE_i$ values for each factor $i$; this sampling, often repeated $r$ times per factor, accounts for nonlinearity and interactions by exploring the variability of the effects across the input space. The method assumes that inputs can be scaled to the unit hypercube and discretized on the grid for evaluation, making it applicable to both continuous distributions (via grid approximation) and inherently discrete factors. While the original formulation addresses scalar-valued models, the elementary effect concept extends naturally to vector-valued outputs by applying the definition component-wise to each output dimension.
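A direct transcription of the definition into code, as a sketch; the grid setup, the base point, and the model $f$ are illustrative assumptions.

```python
import numpy as np

def elementary_effect(f, x, i, delta):
    """Compute EE_i(x) = (f(x + delta * e_i) - f(x)) / delta."""
    x_pert = x.copy()
    x_pert[i] += delta                # perturb only the i-th input
    assert 0.0 <= x_pert[i] <= 1.0, "perturbed point must stay inside [0, 1]^k"
    return (f(x_pert) - f(x)) / delta

p = 4                                 # number of grid levels
delta = p / (2 * (p - 1))             # standard increment: 2/3 for p = 4
x = np.array([0.0, 1 / 3, 1 / 3])     # a base point on the p-level grid
f = lambda v: v[0] ** 2 + v[1] * v[2]
print(elementary_effect(f, x, i=0, delta=delta))  # (delta**2 - 0) / delta = 2/3
```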

Sensitivity Measures

In the Morris method, the elementary effects computed for each input factor are aggregated into sensitivity indices to quantify and rank the importance of factors in influencing the model output. These indices provide a screening tool for identifying influential factors, particularly in high-dimensional models where computational efficiency is crucial. The primary measures are derived from the distribution of elementary effects $EE_i(\mathbf{x})$ for factor $i$, estimated empirically from multiple trajectories in the input space.

The mean elementary effect, denoted $\mu_i = E[EE_i(\mathbf{x})]$, represents the expected change in output per unit change in input $x_i$, averaged over the input space. It serves as a measure of the overall influence of factor $i$, with larger absolute values indicating greater importance. However, $\mu_i$ can suffer from sign cancellation when positive and negative effects offset each other, particularly in nonlinear or non-monotonic models, potentially underestimating a factor's significance. In practice, $\mu_i$ is approximated from $r$ samples as $\mu_i \approx \frac{1}{r} \sum_{j=1}^r EE_i^{(j)}(\mathbf{x}^{(j)})$.

The standard deviation, $\sigma_i = \sqrt{\mathrm{Var}[EE_i(\mathbf{x})]}$, measures the spread of the elementary effects across the input space; as noted above, an elevated $\sigma_i$ indicates that the effect of factor $i$ varies with the values of the other factors, signalling nonlinearity or interactions.
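Given an $r \times k$ array of sampled elementary effects (for instance, from the trajectory sketch in the steps section above), the estimators reduce to a few lines; this is a sketch of the estimators themselves, not of any particular library's implementation.

```python
import numpy as np

def morris_measures(ee):
    """ee has shape (r, k): one elementary effect per trajectory per factor."""
    mu = ee.mean(axis=0)               # mean effect; opposite signs may cancel
    mu_star = np.abs(ee).mean(axis=0)  # mean absolute effect (mu*)
    sigma = ee.std(axis=0, ddof=1)     # spread: nonlinearity/interaction signal
    return mu, mu_star, sigma
```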