Seismic analysis
Seismic analysis is a subset of structural analysis and is the calculation of the response of a building (or nonbuilding) structure to earthquakes. It is part of the process of structural design, earthquake engineering or structural assessment and retrofit (see structural engineering) in regions where earthquakes are prevalent.
As seen in the figure, a building has the potential to 'wave' back and forth during an earthquake (or even a severe wind storm). This is called the 'fundamental mode', and is the lowest frequency of building response. Most buildings, however, have higher modes of response, which are uniquely activated during earthquakes. The figure just shows the second mode, but there are higher 'shimmy' (abnormal vibration) modes. Nevertheless, the first and second modes tend to cause the most damage in most cases.
The earliest provisions for seismic resistance were the requirement to design for a lateral force equal to a proportion of the building weight (applied at each floor level). This approach was adopted in the appendix of the 1927 Uniform Building Code (UBC), which was used on the west coast of the United States. It later became clear that the dynamic properties of the structure affected the loads generated during an earthquake. In the Los Angeles County Building Code of 1943 a provision to vary the load based on the number of floor levels was adopted (based on research carried out at Caltech in collaboration with Stanford University and the United States Coast and Geodetic Survey, which started in 1937). The concept of "response spectra" was developed in the 1930s, but it wasn't until 1952 that a joint committee of the San Francisco Section of the ASCE and the Structural Engineers Association of Northern California (SEAONC) proposed using the building period (the inverse of the frequency) to determine lateral forces.[1]
The University of California, Berkeley was an early base for computer-based seismic analysis of structures, led by Professor Ray Clough, who coined the term "finite element".[2] Students included Ed Wilson, who went on to write the program SAP in 1970, an early finite element analysis program.[3]
Earthquake engineering has advanced considerably since its early days, and some of the more complex designs now use special earthquake protective elements, either in the foundation only (base isolation) or distributed throughout the structure. Analyzing these types of structures requires specialized explicit finite element computer code, which divides time into very small slices and models the actual physics, much as video games use "physics engines". Very large and complex buildings can be modeled in this way (such as the Osaka International Convention Center).
Structural analysis methods can be divided into the following five categories.
Equivalent static analysis
This approach defines a series of forces acting on a building to represent the effect of earthquake ground motion, typically defined by a seismic design response spectrum. It assumes that the building responds in its fundamental mode. For this to be true, the building must be low-rise and must not twist significantly when the ground moves. The response is read from a design response spectrum, given the natural frequency of the building (either calculated or defined by the building code). The applicability of this method is extended in many building codes by applying factors to account for taller buildings where some higher modes are significant, and for low levels of twisting. To account for effects due to "yielding" of the structure, many codes apply modification factors that reduce the design forces (e.g. force reduction factors).[4]
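The procedure can be sketched in a few lines of Python. This is a minimal illustration assuming ASCE 7-style formulas ($V = C_s W$ with $C_s = S_{DS}/(R/I_e)$, and a distribution exponent $k$ interpolated between 1 and 2); all numeric inputs below are made-up example values, and code caps and minimums on $C_s$ are omitted:

```python
def base_shear(sds, r_factor, importance, weight):
    """Total base shear V = Cs * W with Cs = S_DS / (R / I_e)
    (ASCE 7-style; caps and minimums omitted for brevity)."""
    cs = sds / (r_factor / importance)
    return cs * weight

def vertical_distribution(v, weights, heights, period):
    """Distribute V over the floors: F_x = w_x h_x^k / sum(w_i h_i^k) * V,
    with k interpolated from 1 (T <= 0.5 s) to 2 (T >= 2.5 s)."""
    if period <= 0.5:
        k = 1.0
    elif period >= 2.5:
        k = 2.0
    else:
        k = 1.0 + (period - 0.5) / 2.0
    terms = [w * h**k for w, h in zip(weights, heights)]
    total = sum(terms)
    return [v * t / total for t in terms]

# Example: three-story building, illustrative values only.
V = base_shear(sds=1.0, r_factor=8.0, importance=1.0, weight=3000.0)   # kN
forces = vertical_distribution(V, [1000.0] * 3, [3.0, 6.0, 9.0], period=0.4)
```

With equal floor weights and a short period ($k = 1$), the forces grow linearly with height, reproducing the inverted triangular pattern discussed above.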
Response spectrum analysis
This approach permits the multiple modes of response of a building to be taken into account (in the frequency domain). This is required in many building codes for all except very simple or very complex structures. The response of a structure can be defined as a combination of many special shapes (modes) that in a vibrating string correspond to the "harmonics". Computer analysis can be used to determine these modes for a structure. For each mode, a response is read from the design spectrum, based on the modal frequency and the modal mass, and the modal responses are then combined to provide an estimate of the total response of the structure. The forces must be calculated in all three directions (X, Y, and Z), and their combined effects on the building assessed. Combination methods include the following:
- absolute – peak values are added together
- square root of the sum of the squares (SRSS)
- complete quadratic combination (CQC) – a method that is an improvement on SRSS for closely spaced modes
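The two statistical combination rules can be sketched as follows. The CQC correlation coefficient shown is the widely used Der Kiureghian expression, which assumes (as this sketch does) equal damping in all modes:

```python
import math

def srss(peaks):
    """Square root of the sum of the squares of modal peak responses."""
    return math.sqrt(sum(p * p for p in peaks))

def cqc(peaks, omegas, zeta=0.05):
    """Complete quadratic combination, using the Der Kiureghian
    correlation coefficient and assuming equal damping zeta in all
    modes (omegas are the modal circular frequencies)."""
    total = 0.0
    for pi, wi in zip(peaks, omegas):
        for pj, wj in zip(peaks, omegas):
            r = wj / wi
            rho = (8.0 * zeta**2 * (1.0 + r) * r**1.5
                   / ((1.0 - r * r)**2 + 4.0 * zeta**2 * r * (1.0 + r)**2))
            total += rho * pi * pj
    return math.sqrt(total)

# Well-separated modes: CQC reduces to SRSS; closely spaced modes are
# strongly correlated, so CQC exceeds SRSS for same-sign peaks.
```

This behavior is exactly why CQC is preferred for closely spaced modes: the cross-correlation terms that SRSS ignores become significant when modal frequencies are within a few percent of each other.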
The result of a response spectrum analysis using the response spectrum from a ground motion is typically different from that which would be calculated directly from a linear dynamic analysis using that ground motion directly, since phase information is lost in the process of generating the response spectrum.
In cases where structures are either too irregular, too tall or of significance to a community in disaster response, the response spectrum approach is no longer appropriate, and more complex analysis is often required, such as non-linear static analysis or dynamic analysis.
Linear dynamic analysis
Static procedures are appropriate when higher mode effects are not significant. This is generally true for short, regular buildings. Therefore, for tall buildings, buildings with torsional irregularities, or non-orthogonal systems, a dynamic procedure is required. In the linear dynamic procedure, the building is modelled as a multi-degree-of-freedom (MDOF) system with a linear elastic stiffness matrix and an equivalent viscous damping matrix.
The seismic input is modelled using either modal spectral analysis or time history analysis, but in both cases, the corresponding internal forces and displacements are determined using linear elastic analysis. The advantage of these linear dynamic procedures with respect to linear static procedures is that higher modes can be considered. However, they are based on linear elastic response and hence their applicability decreases with increasing nonlinear behaviour, which is approximated by global force reduction factors.
In linear dynamic analysis, the response of the structure to ground motion is calculated in the time domain, and all phase information is therefore maintained. Only linear properties are assumed. The analytical method can use modal decomposition as a means of reducing the degrees of freedom in the analysis.
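Modal decomposition starts from the undamped eigenproblem $K\phi = \omega^2 M\phi$. A minimal numpy sketch for a two-story shear building, with illustrative mass and stiffness values assumed here (not taken from the text):

```python
import numpy as np

# Two-story shear building with assumed (illustrative) lumped masses
# and story stiffnesses.
m = 1.0e4                                # floor mass, kg
k = 2.0e6                                # story stiffness, N/m
M = np.diag([m, m])                      # mass matrix
K = np.array([[2.0 * k, -k],
              [-k, k]])                  # stiffness matrix

# Undamped modes from the generalized eigenproblem K.phi = w^2 M.phi,
# solved here as the standard eigenproblem of M^-1 K.
evals, evecs = np.linalg.eig(np.linalg.solve(M, K))
order = np.argsort(evals)
omegas = np.sqrt(evals[order].real)      # natural circular frequencies, rad/s
periods = 2.0 * np.pi / omegas           # natural periods, s
modes = evecs[:, order]                  # mode shapes (columns)
```

Each column of `modes` is a mode shape; response-history analysis can then be carried out on each modal coordinate independently, which is the degree-of-freedom reduction mentioned above.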
Nonlinear static analysis
In general, linear procedures are applicable when the structure is expected to remain nearly elastic for the level of ground motion or when the design results in nearly uniform distribution of nonlinear response throughout the structure. As the performance objective of the structure implies greater inelastic demands, the uncertainty with linear procedures increases to a point that requires a high level of conservatism in demand assumptions and acceptability criteria to avoid unintended performance. Therefore, procedures incorporating inelastic analysis can reduce the uncertainty and conservatism.
This approach is also known as "pushover" analysis. A pattern of forces is applied to a structural model that includes non-linear properties (such as steel yield), and the total force is plotted against a reference displacement to define a capacity curve. This can then be combined with a demand curve (typically in the form of an acceleration-displacement response spectrum (ADRS)). This essentially reduces the problem to a single degree of freedom (SDOF) system.
Nonlinear static procedures use equivalent SDOF structural models and represent seismic ground motion with response spectra. Story drifts and component actions are subsequently related to the global demand parameter by the pushover or capacity curves that are the basis of the non-linear static procedures.
Nonlinear dynamic analysis
Nonlinear dynamic analysis combines ground motion records with a detailed structural model, and is therefore capable of producing results with relatively low uncertainty. In nonlinear dynamic analyses, the detailed structural model subjected to a ground-motion record produces estimates of component deformations for each degree of freedom in the model, and the modal responses are combined using schemes such as the square-root-sum-of-squares.
In non-linear dynamic analysis, the non-linear properties of the structure are considered as part of a time domain analysis. This approach is the most rigorous, and is required by some building codes for buildings of unusual configuration or of special importance. However, the calculated response can be very sensitive to the characteristics of the individual ground motion used as seismic input; therefore, several analyses are required using different ground motion records to achieve a reliable estimation of the probabilistic distribution of structural response. Since the properties of the seismic response depend on the intensity, or severity, of the seismic shaking, a comprehensive assessment calls for numerous nonlinear dynamic analyses at various levels of intensity to represent different possible earthquake scenarios. This has led to the emergence of methods like the incremental dynamic analysis.[5]
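The incremental dynamic analysis idea, re-analyzing a structure under one record scaled to increasing intensity, can be illustrated with a toy elastic-perfectly-plastic SDOF oscillator. Everything below (oscillator parameters, the sinusoidal "record", the simple explicit integrator) is an illustrative assumption, not a production analysis:

```python
import math

def peak_drift_epp(accel, dt, m=1.0, k=400.0, fy=2.0, zeta=0.05):
    """Peak displacement of an elastic-perfectly-plastic SDOF
    oscillator under ground acceleration accel, using a simple
    explicit central-difference scheme (dt must be well below the
    natural period for stability)."""
    c = 2.0 * zeta * math.sqrt(k * m)
    u_prev, u, u_plastic, peak = 0.0, 0.0, 0.0, 0.0
    for ag in accel:
        fs = k * (u - u_plastic)           # trial restoring force
        if fs > fy:                        # yield in positive direction
            u_plastic, fs = u - fy / k, fy
        elif fs < -fy:                     # yield in negative direction
            u_plastic, fs = u + fy / k, -fy
        v = (u - u_prev) / dt              # backward-difference velocity
        a = (-m * ag - c * v - fs) / m
        u_prev, u = u, 2.0 * u - u_prev + a * dt * dt
        peak = max(peak, abs(u))
    return peak

# IDA: scale one record to increasing intensity and re-analyze.
dt = 0.005
record = [math.sin(2.0 * math.pi * 2.0 * i * dt) for i in range(1000)]
ida_curve = [(s, peak_drift_epp([s * a for a in record], dt))
             for s in (0.5, 1.0, 2.0, 4.0)]
```

Plotting `ida_curve` (intensity versus peak drift) gives a single IDA curve; in practice this is repeated for many records and intensity levels to estimate the probabilistic distribution of response described above.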
See also
- Applied element method
- Earthquake simulation
- Extreme Loading for Structures – seismic analysis software
- Modal analysis using FEM
- OpenSees – analysis software
- Structural dynamics
- Vibration control
References
- ^ Bozorgnia, Y., Bertero, V., "Earthquake Engineering: From Engineering Seismology to Performance-Based Engineering", CRC Press, 2004.
- ^ "Early Finite Element Research at Berkeley", Wilson, E. and Clough R., presented at the Fifth U.S. National Conference on Computational Mechanics, Aug. 4–6, 1999
- ^ "Historic Developments in the Evolution of Earthquake Engineering", illustrated essays by Robert Reitherman, CUREE, 1997, p12.
- ^ Costa, Joao Domingues (2003). "Standard methods for seismic analyses" (PDF).
- ^ Vamvatsikos, D., Cornell, C.A. (2002). "Incremental Dynamic Analysis". Earthquake Engineering and Structural Dynamics, 31(3): 491–514.
Other sources:
- ^ ASCE. (2000). Pre-standard and Commentary for the Seismic Rehabilitation of Buildings (FEMA-356) (Report No. FEMA 356). Reston, VA: American Society of Civil Engineers prepared for the Federal Emergency Management Agency.
- ^ ATC. (1985). Earthquake Damage Evaluation Data for California (ATC-13) (Report). Redwood, CA: Applied Technology Council.
Overview and Fundamentals
Definition and Objectives
Seismic analysis is a subset of structural analysis in civil engineering that evaluates the response of buildings, bridges, and other infrastructure to earthquake-induced ground motions, aiming to predict deformations, forces, and accelerations to prevent collapse and limit damage.[4][5] This process involves modeling seismic loads and assessing how structures interact with dynamic forces, ensuring designs incorporate ductility and redundancy to absorb energy without catastrophic failure.[6] The primary objectives of seismic analysis are to safeguard human life, maintain structural integrity during and after earthquakes, and support operational continuity for critical facilities, while complying with established building codes such as ASCE 7 in the United States and Eurocode 8 in Europe.[7][8] These codes outline performance levels, from life safety in moderate events to collapse prevention in severe ones, emphasizing risk reduction through engineered resilience.[9][10] Seismic analysis plays a crucial role in mitigating the devastating impacts of earthquakes, which caused an estimated 1.87 million deaths worldwide in the 20th century alone, highlighting the need to minimize both loss of life and economic disruptions from structural failures.[11]

Key terminology includes seismic zones, which delineate regions of elevated earthquake risk based on historical seismicity and fault activity; design response spectra, graphical representations of maximum expected structural responses (such as acceleration) across varying periods for a given site; and base shear, the total horizontal force applied at the structure's base to simulate seismic demands.[12][13][14] These concepts underpin the evaluation of site-specific hazards and guide the proportioning of structural elements.

Historical Development
The 1755 Lisbon earthquake, one of the most destructive events in European history, marked a pivotal moment in the scientific study of earthquakes and spurred early innovations in earthquake-resistant construction. The disaster prompted the reconstruction of the city with techniques like the pombaline cage, a wooden lattice framework designed to enhance structural flexibility and reduce collapse risk during shaking.[15] Throughout the 19th century, post-earthquake observations in regions like Italy and Japan documented patterns of structural failure, laying groundwork for empirical design rules, though formal seismic provisions remained limited.[16]

The early 20th century saw the emergence of the first seismic building codes. In Japan, the 1923 Great Kanto Earthquake, which killed over 140,000 people, led to the 1924 revision of the Urban Building Law, introducing the world's first national seismic design standard with a minimum horizontal seismic coefficient of 0.1 to ensure structural stability.[17] In the United States, California followed suit in the 1930s; the 1933 Long Beach Earthquake (magnitude 6.4), which caused widespread damage to unreinforced masonry schools and resulted in 120 deaths, prompted the Field Act of 1933. This legislation mandated equivalent static analysis methods for public school buildings, applying lateral forces based on building weight and height to simulate seismic loads, and extended seismic provisions to statewide building codes for the first time.[18]

Mid-century advancements focused on more refined analytical tools. Maurice A. Biot developed the response spectrum method in the 1930s and 1940s, first outlined in his 1932 doctoral dissertation and subsequent publications, providing a way to characterize earthquake ground motions and predict maximum structural responses across frequencies, a foundational milestone for later dynamic analyses.[19] The 1960s and 1970s brought the rise of time-history dynamic analysis, enabled by early computers at institutions like the University of California, Berkeley, allowing engineers to model nonlinear structural behavior under actual earthquake records.[20] The 1994 Northridge Earthquake (magnitude 6.7), which exposed limitations in linear models by causing unexpected damage to modern buildings, accelerated the adoption of nonlinear static and dynamic methods to better capture material yielding and ductility.[21]

In the 21st century, seismic analysis evolved toward performance-based and probabilistic frameworks. The Federal Emergency Management Agency's FEMA 356 (2000) prestandard established guidelines for performance-based seismic design and rehabilitation, defining objectives like life safety and collapse prevention under varying hazard levels to guide nonlinear evaluations.[22] Probabilistic seismic hazard analysis, incorporating site-specific ground motion uncertainties, became integral to modern codes like ASCE 7.[23] The 2011 Tohoku Earthquake (magnitude 9.0), while validating Japan's stringent codes by limiting structural collapses, influenced updates to address long-period motions in high-rise designs and enhanced tsunami-resistant provisions in building standards.[24]

Key figures in this progression include George W. Housner, whose work on seismic force distributions shaped code development; Nathan M. Newmark, who advanced methods for distributing seismic shears in multistory buildings; and Anil K. Chopra, whose textbooks on structural dynamics provided essential frameworks for earthquake response analysis.[25][26][27]

Key Concepts in Structural Dynamics
Structural dynamics forms the foundational framework for understanding how buildings and other structures respond to seismic excitations, such as earthquake ground motions. At its core, this discipline models structures as systems that vibrate under dynamic loads, where the response depends on the system's mass, stiffness, and damping properties. These concepts are essential for seismic analysis, as they enable engineers to predict displacements, velocities, and accelerations that could lead to structural damage or collapse.[28]

A single-degree-of-freedom (SDOF) system represents the simplest model in structural dynamics, idealizing a structure as a single mass connected to a fixed base by a spring and damper, with motion constrained to one direction. The equation of motion for an SDOF system subjected to earthquake ground acceleration is given by $ m\ddot{u}(t) + c\dot{u}(t) + ku(t) = -m\ddot{u}_g(t) $, where $ m $ is the mass, $ c $ is the viscous damping coefficient, $ k $ is the stiffness, $ u(t) $ is the relative displacement of the mass with respect to the ground, $ \dot{u}(t) $ is the relative velocity, and $ \ddot{u}(t) $ is the relative acceleration.[29] This equation derives from Newton's second law applied to the free-body diagram of the mass, incorporating the inertial force from ground motion as the external excitation. The natural frequency of the undamped system is $ \omega_n = \sqrt{k/m} $, which characterizes the system's inherent oscillation rate, while the damping ratio $ \zeta = c / (2\sqrt{km}) $ quantifies the fraction of energy dissipated per cycle relative to the stored elastic energy.[28]

For more complex structures, multi-degree-of-freedom (MDOF) systems extend the SDOF model by considering multiple masses interconnected by springs and dampers, allowing for several independent coordinates to describe the motion.
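The SDOF equation of motion can be integrated step by step in the time domain. The following is a minimal sketch of the Newmark average-acceleration method, with mass, stiffness, and damping values assumed purely for illustration:

```python
import math

def newmark_sdof(ag, dt, m=1.0, k=100.0, zeta=0.05, beta=0.25, gamma=0.5):
    """Displacement history of a linear SDOF system under ground
    acceleration ag, via the Newmark average-acceleration method
    (unconditionally stable for beta = 1/4, gamma = 1/2)."""
    c = 2.0 * zeta * math.sqrt(k * m)                 # damping from zeta
    u, v = 0.0, 0.0
    a = (-m * ag[0] - c * v - k * u) / m              # initial acceleration
    keff = k + gamma * c / (beta * dt) + m / (beta * dt * dt)
    history = []
    for agi in ag[1:]:
        # effective load at the new step
        p = (-m * agi
             + m * (u / (beta * dt * dt) + v / (beta * dt)
                    + (1.0 / (2.0 * beta) - 1.0) * a)
             + c * (gamma * u / (beta * dt) + (gamma / beta - 1.0) * v
                    + dt * (gamma / (2.0 * beta) - 1.0) * a))
        u_new = p / keff
        v_new = (gamma / (beta * dt)) * (u_new - u) \
            + (1.0 - gamma / beta) * v + dt * (1.0 - gamma / (2.0 * beta)) * a
        a_new = (u_new - u) / (beta * dt * dt) - v / (beta * dt) \
            - (1.0 / (2.0 * beta) - 1.0) * a
        u, v, a = u_new, v_new, a_new
        history.append(u)
    return history
```

As a sanity check, a constant ground acceleration of 1 drives the damped response toward the static solution $u = -m/k$.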
The governing equations for a linear MDOF system under uniform ground acceleration are expressed in matrix form as $ \mathbf{M}\ddot{\mathbf{u}}(t) + \mathbf{C}\dot{\mathbf{u}}(t) + \mathbf{K}\mathbf{u}(t) = -\mathbf{M}\mathbf{r}\ddot{u}_g(t) $, where $ \mathbf{M} $, $ \mathbf{C} $, and $ \mathbf{K} $ are the mass, damping, and stiffness matrices, respectively, $ \mathbf{u}(t) $ is the vector of relative displacements, and $ \mathbf{r} $ is a vector of ones.[30] Modal analysis simplifies the solution of these coupled equations by decomposing the response into contributions from orthogonal vibration modes, each behaving like an independent SDOF system with its own natural frequency and mode shape; this uncoupling relies on the assumption of proportional damping, where $ \mathbf{C} $ can be expressed as a linear combination of $ \mathbf{M} $ and $ \mathbf{K} $.[31]

Key response quantities in structural dynamics include displacement $ u(t) $, which measures deformation; velocity $ \dot{u}(t) $, indicating kinetic energy; and acceleration $ \ddot{u}(t) $, related to inertial forces that drive member stresses. In seismic contexts, ductility $ \mu $, the ratio of maximum displacement to yield displacement, represents the structure's capacity to undergo inelastic deformation without brittle failure, allowing controlled energy absorption during strong ground shaking. Energy dissipation occurs primarily through hysteretic mechanisms in nonlinear behavior or viscous damping in linear models, where the work done by damping forces reduces the system's vibrational amplitude over time.[32] These concepts are predicated on initial assumptions of linear elasticity, where the restoring force is proportional to displacement ($ f_S = ku $) and material behavior remains within the elastic limit, enabling superposition of responses. Viscous damping effects are often modeled using Rayleigh damping, defined as $ \mathbf{C} = \alpha\mathbf{M} + \beta\mathbf{K} $, with coefficients $ \alpha $ and $ \beta $ selected to match target damping ratios at specific modal frequencies, providing a practical approximation for seismic response calculations.

Seismic Input and Modeling
Characteristics of Earthquake Ground Motions
Earthquake ground motions are typically recorded as time-series data comprising accelerations in three orthogonal directions: two horizontal components and one vertical component. The horizontal components capture the primary shaking effects on structures, often represented as the geometric mean of the two orthogonal directions to provide an orientation-independent measure, such as GMRotI50. The vertical component, while generally smaller in amplitude (about 50-70% of horizontal), can be significant for certain structures like bridges or dams.[33][34]

Key intensity measures derived from these time series include peak ground acceleration (PGA), which quantifies the maximum ground acceleration in units of g (gravitational acceleration); peak ground velocity (PGV), in cm/s, indicating the maximum ground speed; and peak ground displacement (PGD), in cm, representing the maximum ground offset. PGA is most relevant for short-period structures, PGV for mid-rise buildings, and PGD for long-period or flexible systems, though PGD is sensitive to low-frequency filtering and baseline corrections. For example, during the 1994 Northridge earthquake, PGA reached up to 1.78 g at Pacoima Dam, PGV up to 183 cm/s at Rinaldi Receiving Station, and PGD up to 44 cm at various sites.[35][33][34]

The duration and frequency content of ground motions characterize the temporal and spectral energy distribution, influencing structural fatigue and cumulative damage. Significant duration is commonly defined as the time interval between 5% and 95% of the cumulative Arias intensity (D5-95), capturing the period of strong shaking, typically ranging from 5-30 seconds for moderate to large events, with longer durations at greater distances or in sedimentary basins. Frequency content varies with source, path, and site effects, often peaking at 1-10 Hz for crustal earthquakes, but extending to lower frequencies (0.1-1 Hz) in near-fault zones.
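The D5-95 significant duration defined above can be computed directly from an accelerogram. A sketch using the Arias-intensity integral $I_a = \frac{\pi}{2g}\int a(t)^2\,dt$, approximated as a rectangle-rule sum:

```python
import math

def significant_duration(accel, dt, g=9.81):
    """D5-95: the interval between 5% and 95% of the cumulative Arias
    intensity Ia = (pi / (2 g)) * integral a(t)^2 dt, approximated by
    a rectangle-rule sum over the accelerogram samples."""
    increments = [math.pi / (2.0 * g) * a * a * dt for a in accel]
    total = sum(increments)
    cum, t5, t95 = 0.0, None, None
    for i, inc in enumerate(increments):
        cum += inc
        if t5 is None and cum >= 0.05 * total:
            t5 = i * dt
        if t95 is None and cum >= 0.95 * total:
            t95 = i * dt
    return t95 - t5
```

For a toy record whose energy is confined to a short burst of shaking, the computed D5-95 spans roughly the middle 90% of that burst, as expected.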
Arias intensity ($ I_a $), a measure of the total energy of shaking, is given by $ I_a = \frac{\pi}{2g} \int_0^{t_d} a(t)^2 \, dt $, where $ a(t) $ is the ground acceleration and $ t_d $ is the record duration.

Selection and Scaling of Ground Motion Records
Ground motion records used in seismic analysis are typically sourced from empirical databases of recorded earthquakes or generated through physics-based simulations. The PEER NGA-West2 database serves as a primary empirical source, containing 21,336 three-component records from shallow crustal earthquakes in active tectonic regimes worldwide, covering magnitude ranges from 3.0 to 7.9 and including events up to 2013.[42][43][44] Synthetic records, derived from physics-based numerical simulations of earthquake rupture and wave propagation, are increasingly employed to fill gaps in empirical data, particularly for rare events or specific site conditions, enabling broadband ground motions up to 8 Hz. Recent advancements include the NGA-West3 project, with an expanded database to be released in 2025, and the PEER-LBNL simulated ground motion database released in 2024, providing broadband simulations up to 10 Hz for global applications.[45][46][47][48]

Selection of ground motion records begins with criteria that ensure representativeness of the seismic hazard at the site, including earthquake magnitude, source-to-site distance, site soil conditions characterized by shear-wave velocity in the upper 30 meters (VS30), and fault mechanism. Records are binned into suites based on these parameters to capture variability, with standards like ASCE 7-22 recommending consideration of magnitude, distance, and VS30 to match the conditional distributions from probabilistic seismic hazard analysis (PSHA). For nonlinear response history analysis, suites typically comprise 7 to 11 record pairs (horizontal components) per direction to achieve reliable median response estimates with acceptable dispersion.[49][50]

Once selected, records are scaled or modified to align with a target response spectrum that represents the design seismic hazard.
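The simplest form of this, constant-factor amplitude scaling, can be sketched as follows. The spectral ordinates are assumed precomputed (toy values here), and nearest-period lookup stands in for proper interpolation:

```python
def amplitude_scale(accel, record_sa, target_sa, periods, t1):
    """Scale an accelerogram so its (precomputed) response-spectrum
    ordinate at the fundamental period t1 matches the target spectrum.
    record_sa and target_sa are ordinates tabulated at `periods`."""
    # nearest tabulated period; interpolation would be finer
    i = min(range(len(periods)), key=lambda j: abs(periods[j] - t1))
    factor = target_sa[i] / record_sa[i]
    return factor, [factor * a for a in accel]

# Toy example: match the target spectrum at T1 = 0.5 s.
factor, scaled = amplitude_scale(
    accel=[0.1, -0.2, 0.15],
    record_sa=[0.8, 0.5, 0.2], target_sa=[1.0, 0.75, 0.3],
    periods=[0.2, 0.5, 1.0], t1=0.5)
```

Because the whole time series is multiplied by one constant, the record's frequency content, duration, and phasing are preserved; only its amplitude changes.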
Amplitude scaling involves multiplying the entire time series by a constant factor to match the target spectrum at a specific period or over a range, such as the fundamental period of the structure, ensuring the scaled records do not exceed 1.5 to 3 times the median intensity to preserve realistic dynamic characteristics. Spectral matching adjusts both amplitude and phase, often using wavelet transforms or time-domain filters, to achieve a closer fit to the target spectrum over a broader period range (e.g. 0.2T to 1.5T, where T is the structure's period), reducing bias in estimated demands while maintaining the record's duration and nonstationarity. Limits on frequency content alteration prevent unrealistic modifications, with amplitude scaling preferred for simplicity and spectral matching for higher fidelity in critical applications.[51][52]

Probabilistic targets for scaling guide the process to reflect site-specific hazards accurately. The uniform hazard spectrum (UHS) provides a conservative target where spectral ordinates have the same exceedance probability, derived from PSHA deaggregation to select records from compatible magnitude-distance bins. For more refined representation, the conditional mean spectrum (CMS) conditions the target on a spectral acceleration at the structure's period while computing mean values at other periods, accounting for spectral shape correlations and reducing overestimation of demands compared to the UHS. This approach, particularly useful for performance-based design, ensures selected and scaled records reflect the expected distribution of ground motions given a conditioning event.[53][54]

Static Analysis Methods
Equivalent Static Analysis
Equivalent static analysis, also known as the equivalent lateral force procedure, is a simplified method for estimating seismic demands on structures by converting the dynamic effects of earthquake ground motions into a set of static lateral forces applied to the building. This approach originated over a century ago in early seismic regulations, where structures were designed for lateral forces equivalent to approximately 10% of the building weight, reflecting a rudimentary understanding of inertial forces during earthquakes.[55] It has been a cornerstone of seismic design codes due to its historical use in providing conservative estimates for basic structural configurations.[55] The core of the procedure involves calculating the total base shear $ V $, which represents the seismic force at the foundation level, using the formula $ V = C_s W $, where $ C_s $ is the seismic response coefficient derived from design response spectra in building codes, and $ W $ is the effective seismic weight of the structure, typically comprising the dead load plus a portion of the live load.[56] This base shear is then distributed vertically along the height of the building to determine the lateral forces at each level. The method assumes linear elastic behavior and is particularly suited for preliminary design or when computational resources are limited. Vertical distribution of the base shear follows an inverted triangular load pattern, where forces are higher at the upper levels to approximate the first-mode response of the structure. 
The force at level $ x $, denoted $ F_x $, is given by $ F_x = \frac{w_x h_x^k}{\sum w_i h_i^k} V $, with $ w_x $ and $ h_x $ as the effective seismic weight and height at level $ x $, respectively, and the exponent $ k $ varying by fundamental period $ T $: $ k = 1 $ for $ T \leq 0.5 $ seconds (linear distribution), linearly interpolating to $ k = 2 $ for $ T \geq 2.5 $ seconds (parabolic distribution).[56] This distribution factor accounts for the increasing moment arm with height, emphasizing forces in taller portions of the structure.

In modern codes such as ASCE/SEI 7-22, the seismic response coefficient $ C_s $ is primarily calculated as $ C_s = \frac{S_{DS}}{(R / I_e)} $, where $ S_{DS} $ is the design spectral acceleration for short periods, $ R $ is the response modification factor reflecting the structure's ductility and overstrength, and $ I_e $ is the importance factor.[56] Period-dependent reductions apply, capping $ C_s $ at $ S_{D1} / (T (R / I_e)) $ for longer periods and ensuring a minimum value based on site-specific parameters. The method is limited to regular structures without significant irregularities, with the fundamental period $ T $ not exceeding $ 3.5 T_S $, where $ T_S $ is the period at which the design spectrum transitions from the constant-acceleration to the constant-velocity branch.[56]

The equivalent static analysis relies on key assumptions, including an inverted triangular force pattern that simulates the fundamental mode shape for shear buildings and applicability to low-rise, rigid structures where higher-mode effects are minimal. Its primary advantages include computational simplicity, requiring only static equilibrium checks without time-history integrations, making it ideal for hand calculations or early-stage assessments in design practice. Historically, this method formed the basis of seismic provisions in early 20th-century codes, evolving to incorporate spectral-based coefficients while retaining its role for simpler applications.[55]

Nonlinear Static Analysis
Nonlinear static analysis, commonly referred to as pushover analysis, evaluates the seismic performance of structures by applying incrementally increasing lateral loads to a nonlinear model, simulating the effects of yielding and ductility under monotonic loading. This method generates a capacity curve representing the relationship between base shear and roof displacement, providing insight into the structure's nonlinear response and potential failure mechanisms. Unlike linear static approaches, it accounts for material and geometric nonlinearities, making it suitable for performance-based seismic design where inelastic behavior is expected.[57][58]

The procedure begins with modeling the structure as a multi-degree-of-freedom (MDOF) system, incorporating gravity loads and applying lateral forces in a predefined pattern, such as an inverted triangular or uniform distribution, until a target displacement or collapse is reached. The resulting pushover curve is often idealized into bilinear or multilinear segments to facilitate analysis, with the effective fundamental period $ T_e $ and participation factor derived from the structure's first-mode shape. To estimate the target displacement $ \delta_t $, the capacity curve is converted to an equivalent single-degree-of-freedom (SDOF) system, using the formula

$ \delta_t = C_0 C_1 C_2 C_3 S_a \frac{T_e^2}{4\pi^2} g $

where $ S_a $ is the spectral acceleration, $ g $ is gravitational acceleration, $ C_0 $ accounts for the MDOF-to-SDOF transformation (typically 1.0 to 1.2), $ C_1 $ modifies for inelastic displacement amplification (often 1.0 to 1.5 based on ductility and period), $ C_2 $ adjusts for hysteretic degradation (around 1.0 for non-degrading systems), and $ C_3 $ incorporates P-Δ effects (close to 1.0 for stable post-yield behavior).
This approach, refined in FEMA 440 from earlier ATC-40 guidelines, enables estimation of maximum roof displacement without full dynamic simulation.[57][58]

Nonlinear behavior is modeled using lumped plasticity elements, concentrating inelastic deformations at plastic hinges located at beam-column joints or member ends, with lengths typically spanning 0.5 to 1.0 times the member depth. These hinges are defined by moment-rotation or force-deformation relationships, such as bilinear (elastic-perfectly plastic) or trilinear (with post-yield hardening or degradation) curves, calibrated from experimental data to capture energy dissipation and stiffness reduction. Acceptance criteria for hinge rotations are tied to performance levels, like life safety or collapse prevention, ensuring the model reflects realistic ductile capacity.[57][58]

For demand estimation, the pushover capacity curve is transformed into the acceleration-displacement response spectrum (ADRS) format, plotting spectral acceleration against spectral displacement with lines of constant period for guidance. The seismic demand spectrum, reduced for effective damping based on ductility, is overlaid to form a capacity-demand diagram; the performance point is identified at their intersection, representing the anticipated inelastic displacement and acceleration under design ground motions. This graphical or iterative process highlights the structure's reserve capacity and guides retrofit decisions.[57][58]

Despite its practicality, nonlinear static analysis has limitations, as it assumes the response is dominated by the first mode and applies invariant loading patterns, thereby neglecting higher-mode contributions and the time-varying nature of earthquake excitations. These simplifications can lead to inaccuracies in estimating interstory drifts or responses in irregular or torsionally sensitive structures, where dynamic effects may amplify demands beyond static predictions.[57][58]
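The coefficient-method target displacement described in this section reduces to a one-line calculation once the coefficients are chosen. A FEMA 356-style sketch; the coefficient values are inputs the analyst must justify, and the defaults below are placeholders:

```python
import math

def target_displacement(sa_g, te, c0=1.0, c1=1.0, c2=1.0, c3=1.0, g=9.81):
    """Coefficient-method target displacement (FEMA 356-style):
    delta_t = C0*C1*C2*C3 * Sa * (Te^2 / (4 pi^2)) * g, with Sa in
    units of g and Te the effective fundamental period in seconds."""
    return c0 * c1 * c2 * c3 * sa_g * g * te * te / (4.0 * math.pi**2)

# Elastic SDOF check: Sa = 1 g, Te = 1 s, all coefficients 1.0 gives
# the elastic spectral displacement Sd = Sa * g * Te^2 / (4 pi^2).
```

With all coefficients equal to 1.0 the expression collapses to the elastic spectral displacement, which makes the role of $C_0$ through $C_3$ explicit: they amplify (or leave unchanged) the elastic SDOF displacement to approximate the inelastic MDOF roof displacement.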
