Seismic analysis

from Wikipedia
First and second modes of building seismic response

Seismic analysis is a subset of structural analysis and is the calculation of the response of a building (or nonbuilding) structure to earthquakes. It is part of the process of structural design, earthquake engineering or structural assessment and retrofit (see structural engineering) in regions where earthquakes are prevalent.

As seen in the figure, a building has the potential to 'wave' back and forth during an earthquake (or even a severe wind storm). This is called the 'fundamental mode', and is the lowest frequency of building response. Most buildings, however, have higher modes of response, which are uniquely activated during earthquakes. The figure just shows the second mode, but there are higher 'shimmy' (abnormal vibration) modes. Nevertheless, the first and second modes tend to cause the most damage in most cases.

The earliest provisions for seismic resistance were the requirement to design for a lateral force equal to a proportion of the building weight (applied at each floor level). This approach was adopted in the appendix of the 1927 Uniform Building Code (UBC), which was used on the west coast of the United States. It later became clear that the dynamic properties of the structure affected the loads generated during an earthquake. In the Los Angeles County Building Code of 1943 a provision to vary the load based on the number of floor levels was adopted (based on research carried out at Caltech in collaboration with Stanford University and the United States Coast and Geodetic Survey, which started in 1937). The concept of "response spectra" was developed in the 1930s, but it wasn't until 1952 that a joint committee of the San Francisco Section of the ASCE and the Structural Engineers Association of Northern California (SEAONC) proposed using the building period (the inverse of the frequency) to determine lateral forces.[1]

The University of California, Berkeley was an early base for computer-based seismic analysis of structures, led by Professor Ray Clough (who coined the term "finite element").[2] Students included Ed Wilson, who went on to write the program SAP in 1970, an early finite element analysis program.[3]

Earthquake engineering has advanced considerably since its early days, and some of the more complex designs now use special earthquake protective elements, either in the foundation alone (base isolation) or distributed throughout the structure. Analyzing these types of structures requires specialized explicit finite element computer code, which divides time into very small slices and models the actual physics, much as video games use "physics engines". Very large and complex buildings can be modeled in this way (such as the Osaka International Convention Center).

Structural analysis methods can be divided into the following five categories.

Equivalent static analysis


This approach defines a series of forces acting on a building to represent the effect of earthquake ground motion, typically defined by a seismic design response spectrum. It assumes that the building responds in its fundamental mode. For this to be true, the building must be low-rise and must not twist significantly when the ground moves. The response is read from a design response spectrum, given the natural frequency of the building (either calculated or defined by the building code). The applicability of this method is extended in many building codes by applying factors to account for higher buildings with some higher modes, and for low levels of twisting. To account for effects due to "yielding" of the structure, many codes apply modification factors that reduce the design forces (e.g. force reduction factors).[4]
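The design response spectrum referenced above can be sketched as a simple piecewise function. The following is a minimal illustration in the style of code-based spectra; the parameter values (`s_ds`, `s_d1`, `t_l`) are assumed illustrative numbers, not values from any particular code or site.

```python
# Illustrative piecewise design response spectrum (code-spectrum style).
# s_ds, s_d1, and t_l are assumed site parameters for illustration only.

def design_spectrum_sa(T, s_ds=1.0, s_d1=0.6, t_l=8.0):
    """5%-damped design spectral acceleration (in g) at period T (seconds)."""
    t_s = s_d1 / s_ds            # transition from plateau to 1/T branch
    t_0 = 0.2 * t_s
    if T < t_0:                  # rising branch toward the plateau
        return s_ds * (0.4 + 0.6 * T / t_0)
    if T <= t_s:                 # constant-acceleration plateau
        return s_ds
    if T <= t_l:                 # constant-velocity (1/T) branch
        return s_d1 / T
    return s_d1 * t_l / T**2     # constant-displacement (1/T^2) branch

# A stiff low-rise building (T ~ 0.3 s) reads the plateau value:
print(design_spectrum_sa(0.3))   # 1.0
```

Given the building's natural period, the equivalent static force is then proportional to the spectral ordinate read from such a curve, reduced by the code's force reduction factors.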

Response spectrum analysis


This approach permits the multiple modes of response of a building to be taken into account (in the frequency domain). This is required in many building codes for all except very simple or very complex structures. The response of a structure can be defined as a combination of many special shapes (modes) that in a vibrating string correspond to the "harmonics". Computer analysis can be used to determine these modes for a structure. For each mode, a response is read from the design spectrum, based on the modal frequency and the modal mass, and the modal responses are then combined to provide an estimate of the total response of the structure. The forces must be calculated in each direction (X, Y, and Z) and their effects on the building assessed. Combination methods include the following:

  • absolute – peak values are added together
  • square root of the sum of the squares (SRSS)
  • complete quadratic combination (CQC) – a method that is an improvement on SRSS for closely spaced modes
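The three combination rules can be compared on a set of hypothetical peak modal responses. The sketch below uses assumed base-shear peaks and modal frequencies; the CQC correlation coefficient follows the Der Kiureghian expression for equal modal damping ratios.

```python
import math

# Hypothetical peak modal base shears (kN) and modal frequencies (Hz)
peaks = [120.0, 45.0, 12.0]
freqs = [1.0, 3.2, 6.5]

absolute = sum(abs(r) for r in peaks)        # upper bound: peaks assumed simultaneous
srss = math.sqrt(sum(r * r for r in peaks))  # valid for well-separated modes

def cqc(peaks, freqs, zeta=0.05):
    """Complete quadratic combination with the Der Kiureghian
    correlation coefficient (equal modal damping ratios)."""
    total = 0.0
    for i, ri in enumerate(peaks):
        for j, rj in enumerate(peaks):
            b = freqs[j] / freqs[i]          # modal frequency ratio
            rho = (8 * zeta**2 * (1 + b) * b**1.5 /
                   ((1 - b * b)**2 + 4 * zeta**2 * b * (1 + b)**2))
            total += rho * ri * rj
    return math.sqrt(total)

print(absolute, round(srss, 1), round(cqc(peaks, freqs), 1))
```

For the well-separated frequencies above, CQC differs from SRSS by well under a percent; the advantage of CQC appears when modal frequencies are closely spaced and the cross-correlation terms become significant.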

The result of a response spectrum analysis using the response spectrum from a ground motion is typically different from that which would be calculated directly from a linear dynamic analysis using that ground motion directly, since phase information is lost in the process of generating the response spectrum.

In cases where structures are either too irregular, too tall or of significance to a community in disaster response, the response spectrum approach is no longer appropriate, and more complex analysis is often required, such as non-linear static analysis or dynamic analysis.

Linear dynamic analysis


Static procedures are appropriate when higher mode effects are not significant. This is generally true for short, regular buildings. Therefore, for tall buildings, buildings with torsional irregularities, or non-orthogonal systems, a dynamic procedure is required. In the linear dynamic procedure, the building is modelled as a multi-degree-of-freedom (MDOF) system with a linear elastic stiffness matrix and an equivalent viscous damping matrix.

The seismic input is modelled using either modal spectral analysis or time history analysis but in both cases, the corresponding internal forces and displacements are determined using linear elastic analysis. The advantage of these linear dynamic procedures with respect to linear static procedures is that higher modes can be considered. However, they are based on linear elastic response and hence the applicability decreases with increasing nonlinear behaviour, which is approximated by global force reduction factors.

In linear dynamic analysis, the response of the structure to ground motion is calculated in the time domain, and all phase information is therefore maintained. Only linear properties are assumed. The analytical method can use modal decomposition as a means of reducing the degrees of freedom in the analysis.
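A time-domain linear dynamic analysis of this kind can be sketched for a single degree of freedom with the average-acceleration Newmark method, a standard unconditionally stable integrator for linear systems. The mass, stiffness, and damping values below are assumed illustration numbers.

```python
import math

def newmark_linear(ag, dt, m=1.0, k=400.0, zeta=0.05, beta=0.25, gamma=0.5):
    """Average-acceleration Newmark integration of
    m*u'' + c*u' + k*u = -m*ag(t); returns the relative-displacement
    history (units arbitrary but consistent)."""
    c = 2.0 * zeta * math.sqrt(k * m)
    u = v = 0.0
    a = -ag[0]                          # initial acceleration with u = v = 0
    keff = k + gamma * c / (beta * dt) + m / (beta * dt**2)
    history = []
    for ag_next in ag[1:]:
        # effective load: new ground acceleration plus current-state terms
        p = (-m * ag_next
             + m * (u / (beta * dt**2) + v / (beta * dt) + (0.5 / beta - 1.0) * a)
             + c * (gamma * u / (beta * dt) + (gamma / beta - 1.0) * v
                    + dt * (0.5 * gamma / beta - 1.0) * a))
        u_next = p / keff
        v_next = (gamma * (u_next - u) / (beta * dt)
                  + (1.0 - gamma / beta) * v
                  + dt * (1.0 - 0.5 * gamma / beta) * a)
        a = (u_next - u) / (beta * dt**2) - v / (beta * dt) - (0.5 / beta - 1.0) * a
        u, v = u_next, v_next
        history.append(u)
    return history

# Example: 1 Hz harmonic ground motion, 5 s at dt = 0.01 s
dt = 0.01
ag = [0.3 * 9.81 * math.sin(2.0 * math.pi * i * dt) for i in range(500)]
peak = max(abs(u) for u in newmark_linear(ag, dt))
```

Because the whole acceleration history is integrated step by step, phase information is retained, which is precisely what distinguishes this procedure from a response spectrum analysis.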

Nonlinear static analysis


In general, linear procedures are applicable when the structure is expected to remain nearly elastic for the level of ground motion or when the design results in nearly uniform distribution of nonlinear response throughout the structure. As the performance objective of the structure implies greater inelastic demands, the uncertainty with linear procedures increases to a point that requires a high level of conservatism in demand assumptions and acceptability criteria to avoid unintended performance. Therefore, procedures incorporating inelastic analysis can reduce the uncertainty and conservatism.

This approach is also known as "pushover" analysis. A pattern of forces is applied to a structural model that includes non-linear properties (such as steel yield), and the total force is plotted against a reference displacement to define a capacity curve. This can then be combined with a demand curve (typically in the form of an acceleration-displacement response spectrum (ADRS)). This essentially reduces the problem to a single degree of freedom (SDOF) system.

Nonlinear static procedures use equivalent SDOF structural models and represent seismic ground motion with response spectra. Story drifts and component actions are related subsequently to the global demand parameter by the pushover or capacity curves that are the basis of the non-linear static procedures.
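The conversion of a capacity curve to spectral (ADRS) coordinates can be sketched as follows. The first-mode participation factor, modal mass ratio, seismic weight, and pushover points are all assumed illustration values, not results of an actual modal analysis.

```python
# Converting a pushover (capacity) curve to ADRS coordinates — a sketch.
gamma = 1.3       # first-mode participation factor x roof mode-shape ordinate (assumed)
alpha = 0.85      # fraction of seismic weight effective in the first mode (assumed)
weight = 5000.0   # total seismic weight, kN (assumed)

# (roof displacement [m], base shear [kN]) from a hypothetical pushover run
capacity = [(0.00, 0.0), (0.05, 1200.0), (0.15, 1500.0), (0.30, 1550.0)]

# Sd = roof displacement / gamma; Sa (in g) = base shear / (alpha * weight)
adrs = [(d / gamma, v / (alpha * weight)) for d, v in capacity]
for sd, sa in adrs:
    print(f"Sd = {sd:.4f} m, Sa = {sa:.3f} g")
```

Plotting this curve against a demand spectrum in the same Sd–Sa coordinates locates the performance point of the equivalent SDOF system.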

Nonlinear dynamic analysis


Nonlinear dynamic analysis utilizes the combination of ground motion records with a detailed structural model, and is therefore capable of producing results with relatively low uncertainty. In nonlinear dynamic analyses, the detailed structural model subjected to a ground-motion record produces estimates of component deformations for each degree of freedom in the model; demands computed for a suite of records are then summarized statistically to characterize the structural response.

In non-linear dynamic analysis, the non-linear properties of the structure are considered as part of a time domain analysis. This approach is the most rigorous, and is required by some building codes for buildings of unusual configuration or of special importance. However, the calculated response can be very sensitive to the characteristics of the individual ground motion used as seismic input; therefore, several analyses are required using different ground motion records to achieve a reliable estimation of the probabilistic distribution of structural response. Since the properties of the seismic response depend on the intensity, or severity, of the seismic shaking, a comprehensive assessment calls for numerous nonlinear dynamic analyses at various levels of intensity to represent different possible earthquake scenarios. This has led to the emergence of methods like the incremental dynamic analysis.[5]
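Incremental dynamic analysis can be illustrated with a minimal nonlinear SDOF model: one record is scaled to increasing intensities and the peak displacement is tracked at each level. The elastic-perfectly-plastic spring, explicit time stepping, record, and scale factors below are all simplified assumptions for the sketch, not a production IDA implementation.

```python
import math

def peak_drift(ag, dt, m=1.0, k=400.0, fy=2.0, zeta=0.05):
    """Peak displacement of an elastic-perfectly-plastic SDOF under
    ground acceleration ag(t), via simple explicit time stepping
    (dt must be small relative to the natural period)."""
    c = 2.0 * zeta * math.sqrt(k * m)
    u = v = fs = peak = 0.0
    for agi in ag:
        a = (-m * agi - c * v - fs) / m            # equation of motion
        v += a * dt
        u += v * dt
        fs = max(-fy, min(fy, fs + k * v * dt))    # EPP restoring force
        peak = max(peak, abs(u))
    return peak

# IDA sketch: one record, increasing scale factors (illustrative values)
dt = 0.002
record = [math.sin(2.0 * math.pi * 2.0 * i * dt) for i in range(2000)]  # 4 s at 2 Hz
curve = [(sf, peak_drift([sf * a for a in record], dt)) for sf in (0.5, 1.0, 2.0)]
for sf, pd in curve:
    print(f"SF = {sf:.1f}: peak displacement = {pd:.4f}")
```

Repeating this over a suite of records produces a family of intensity-versus-demand curves whose spread reflects record-to-record variability, the core output of incremental dynamic analysis.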

from Grokipedia
Seismic analysis is the process of evaluating a structure's response to earthquake-induced ground motions, a critical subset of structural engineering that calculates dynamic forces, deformations, and accelerations to ensure safety and performance during seismic events.[1] It encompasses the assessment of how buildings, bridges, and other infrastructure interact with seismic waves, focusing on preventing collapse and minimizing damage through precise modeling of material behavior and site-specific hazards.[2]

The primary goal of seismic analysis is to achieve life safety by limiting collapse risk to approximately 1% over a 50-year period for typical structures, while also supporting operational continuity for critical facilities like hospitals.[1] Key principles include accounting for ground shaking intensity, which varies by soil type, fault proximity, and return periods of 475 to 2,475 years for design earthquakes, as well as structural ductility, the ability to undergo inelastic deformations without failure.[1] Analysis also considers irregularities such as soft stories or torsional effects that amplify vulnerabilities, integrating geotechnical factors like soil-structure interaction to predict realistic responses.[2]

Common methods range from simplified equivalent lateral force procedures, which distribute static loads based on base shear (V = C_s W, where C_s is the seismic coefficient and W is the structure's weight), to advanced dynamic approaches.[1] Linear methods assume elastic behavior for initial designs, while nonlinear techniques, including pushover analysis and response history analysis, capture damage progression using time-history ground motions scaled to site hazards, requiring a minimum of 11 sets of records for statistical reliability.[2][3] These analyses adhere to standards like ASCE/SEI 7-22, which define performance levels such as immediate occupancy and collapse prevention, enabling engineers to balance cost, risk, and resilience in seismic-prone regions.[1]

Overview and Fundamentals

Definition and Objectives

Seismic analysis is a subset of structural analysis in civil engineering that evaluates the response of buildings, bridges, and other infrastructure to earthquake-induced ground motions, aiming to predict deformations, forces, and accelerations to prevent collapse and limit damage.[4][5] This process involves modeling seismic loads and assessing how structures interact with dynamic forces, ensuring designs incorporate ductility and redundancy to absorb energy without catastrophic failure.[6]

The primary objectives of seismic analysis are to safeguard human life, maintain structural integrity during and after earthquakes, and support operational continuity for critical facilities, while complying with established building codes such as ASCE 7 in the United States and Eurocode 8 in Europe.[7][8] These codes outline performance levels, from life safety in moderate events to collapse prevention in severe ones, emphasizing risk reduction through engineered resilience.[9][10] Seismic analysis plays a crucial role in mitigating the devastating impacts of earthquakes, which caused an estimated 1.87 million deaths worldwide in the 20th century alone, highlighting the need to minimize both loss of life and economic disruptions from structural failures.[11]

Key terminology includes seismic zones, which delineate regions of elevated earthquake risk based on historical seismicity and fault activity; design response spectra, graphical representations of maximum expected structural responses (such as acceleration) across varying periods for a given site; and base shear, the total horizontal force applied at the structure's base to simulate seismic demands.[12][13][14] These concepts underpin the evaluation of site-specific hazards and guide the proportioning of structural elements.

Historical Development

The 1755 Lisbon earthquake, one of the most destructive events in European history, marked a pivotal moment in the scientific study of earthquakes and spurred early innovations in earthquake-resistant construction. The disaster prompted the reconstruction of the city with techniques like the pombaline cage, a wooden lattice framework designed to enhance structural flexibility and reduce collapse risk during shaking.[15] Throughout the 19th century, post-earthquake observations in regions like Italy and Japan documented patterns of structural failure, laying groundwork for empirical design rules, though formal seismic provisions remained limited.[16]

The early 20th century saw the emergence of the first seismic building codes. In Japan, the 1923 Great Kanto Earthquake, which killed over 140,000 people, led to the 1924 revision of the Urban Building Law, introducing the world's first national seismic design standard with a minimum horizontal seismic coefficient of 0.1 to ensure structural stability.[17] In the United States, California followed suit in the 1930s; the 1933 Long Beach Earthquake (magnitude 6.4), which caused widespread damage to unreinforced masonry schools and resulted in 120 deaths, prompted the Field Act of 1933. This legislation mandated equivalent static analysis methods for public school buildings, applying lateral forces based on building weight and height to simulate seismic loads, and extended seismic provisions to statewide building codes for the first time.[18]

Mid-century advancements focused on more refined analytical tools. Maurice A.
Biot developed the response spectrum method in the 1930s and 1940s, first outlined in his 1932 doctoral dissertation and subsequent publications, providing a way to characterize earthquake ground motions and predict maximum structural responses across frequencies, a foundational milestone for later dynamic analyses.[19] The 1960s and 1970s brought the rise of time-history dynamic analysis, enabled by early computers at institutions like the University of California, Berkeley, allowing engineers to model nonlinear structural behavior under actual earthquake records.[20] The 1994 Northridge Earthquake (magnitude 6.7), which exposed limitations in linear models by causing unexpected damage to modern buildings, accelerated the adoption of nonlinear static and dynamic methods to better capture material yielding and ductility.[21]

In the 21st century, seismic analysis evolved toward performance-based and probabilistic frameworks. The Federal Emergency Management Agency's FEMA 356 (2000) Prestandard established guidelines for performance-based seismic design and rehabilitation, defining objectives like life safety and collapse prevention under varying hazard levels to guide nonlinear evaluations.[22] Probabilistic seismic hazard analysis, incorporating site-specific ground motion uncertainties, became integral to modern codes like ASCE 7.[23] The 2011 Tohoku Earthquake (magnitude 9.0), while validating Japan's stringent codes by limiting structural collapses, influenced updates to address long-period motions in high-rise designs and enhanced tsunami-resistant provisions in building standards.[24]

Key figures in this progression include George W. Housner, whose work on seismic force distributions shaped code development; Nathan M. Newmark, who advanced methods for distributing seismic shears in multistory buildings; and Anil K. Chopra, whose textbooks on structural dynamics provided essential frameworks for earthquake response analysis.[25][26][27]

Key Concepts in Structural Dynamics

Structural dynamics forms the foundational framework for understanding how buildings and other structures respond to seismic excitations, such as earthquake ground motions. At its core, this discipline models structures as systems that vibrate under dynamic loads, where the response depends on the system's mass, stiffness, and damping properties. These concepts are essential for seismic analysis, as they enable engineers to predict displacements, velocities, and accelerations that could lead to structural damage or collapse.[28]

A single-degree-of-freedom (SDOF) system represents the simplest model in structural dynamics, idealizing a structure as a single mass connected to a fixed base by a spring and damper, with motion constrained to one direction. The equation of motion for an SDOF system subjected to earthquake ground acceleration $\ddot{u}_g(t)$ is given by $m\ddot{u}(t) + c\dot{u}(t) + ku(t) = -m\ddot{u}_g(t)$, where $m$ is the mass, $c$ is the viscous damping coefficient, $k$ is the stiffness, $u(t)$ is the relative displacement of the mass with respect to the ground, $\dot{u}(t)$ is the relative velocity, and $\ddot{u}(t)$ is the relative acceleration.[29] This equation derives from Newton's second law applied to the free-body diagram of the mass, incorporating the inertial force from ground motion as the external excitation. The natural frequency of the undamped system is $\omega_n = \sqrt{k/m}$, which characterizes the system's inherent oscillation rate, while the damping ratio $\zeta = c/(2\sqrt{km})$ quantifies the fraction of energy dissipated per cycle relative to the stored elastic energy.[28]

For more complex structures, multi-degree-of-freedom (MDOF) systems extend the SDOF model by considering multiple masses interconnected by springs and dampers, allowing for several independent coordinates to describe the motion. The governing equations for a linear MDOF system under uniform ground acceleration are expressed in matrix form as $\mathbf{M}\ddot{\mathbf{u}}(t) + \mathbf{C}\dot{\mathbf{u}}(t) + \mathbf{K}\mathbf{u}(t) = -\mathbf{M}\mathbf{1}\ddot{u}_g(t)$, where $\mathbf{M}$, $\mathbf{C}$, and $\mathbf{K}$ are the mass, damping, and stiffness matrices, respectively, $\mathbf{u}(t)$ is the vector of relative displacements, and $\mathbf{1}$ is a vector of ones.[30] Modal analysis simplifies the solution of these coupled equations by decomposing the response into contributions from orthogonal vibration modes, each behaving like an independent SDOF system with its own natural frequency and mode shape; this uncoupling relies on the assumption of proportional damping, where $\mathbf{C}$ can be expressed as a linear combination of $\mathbf{M}$ and $\mathbf{K}$.[31]

Key response quantities in structural dynamics include displacement $u(t)$, which measures deformation; velocity $\dot{u}(t)$, indicating kinetic energy; and acceleration $\ddot{u}(t)$, related to inertial forces that drive member stresses. In seismic contexts, ductility, $\mu = u_{\max}/u_y$, the ratio of maximum displacement to yield displacement, represents the structure's capacity to undergo inelastic deformation without brittle failure, allowing controlled energy absorption during strong ground shaking. Energy dissipation occurs primarily through hysteretic mechanisms in nonlinear behavior or viscous damping in linear models, where the work done by damping forces reduces the system's vibrational amplitude over time.[32] These concepts are predicated on initial assumptions of linear elasticity, where the restoring force is proportional to displacement ($f = ku$) and material behavior remains within the elastic limit, enabling superposition of responses. Viscous damping effects are often modeled using Rayleigh damping, defined as $\mathbf{C} = \alpha\mathbf{M} + \beta\mathbf{K}$, with coefficients $\alpha$ and $\beta$ selected to match target damping ratios at specific modal frequencies, providing a practical approximation for seismic response calculations.
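These definitions translate directly into a short computation. The sketch below evaluates the natural frequency, damping ratio, and Rayleigh coefficients from assumed mass, stiffness, and damping values (illustration numbers, not a real structure).

```python
import math

# Assumed SDOF properties (illustrative): mass, stiffness, damping
m, k, c = 2.0e5, 8.0e7, 2.5e5      # kg, N/m, N*s/m

omega_n = math.sqrt(k / m)          # natural frequency, rad/s
zeta = c / (2.0 * math.sqrt(k * m)) # fraction of critical damping
print(omega_n, zeta)                # 20.0 0.03125

# Rayleigh damping C = a*M + b*K: pick a, b so that the modal damping
# ratio zeta(w) = a/(2w) + b*w/2 hits the target at two modal frequencies.
def rayleigh(w1, w2, zeta_target=0.05):
    a = zeta_target * 2.0 * w1 * w2 / (w1 + w2)
    b = zeta_target * 2.0 / (w1 + w2)
    return a, b

a, b = rayleigh(2.0, 10.0)          # e.g. first and third modal frequencies
```

Between the two anchor frequencies the effective Rayleigh damping dips slightly below the target, and above them it grows, which is why the anchors are usually chosen to bracket the modes that matter for the response.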

Seismic Input and Modeling

Characteristics of Earthquake Ground Motions

Earthquake ground motions are typically recorded as time-series data comprising accelerations in three orthogonal directions: two horizontal components and one vertical component. The horizontal components capture the primary shaking effects on structures, often represented as the geometric mean of the two orthogonal directions to provide an orientation-independent measure, such as GMRotI50. The vertical component, while generally smaller in amplitude (about 50-70% of horizontal), can be significant for certain structures like bridges or dams.[33][34]

Key intensity measures derived from these time series include peak ground acceleration (PGA), which quantifies the maximum ground acceleration in units of g (gravitational acceleration); peak ground velocity (PGV), in cm/s, indicating the maximum ground speed; and peak ground displacement (PGD), in cm, representing the maximum ground offset. PGA is most relevant for short-period structures, PGV for mid-rise buildings, and PGD for long-period or flexible systems, though PGD is sensitive to low-frequency filtering and baseline corrections. For example, during the 1994 Northridge earthquake, PGA reached up to 1.78 g at Pacoima Dam, PGV up to 183 cm/s at Rinaldi Receiving Station, and PGD up to 44 cm at various sites.[35][33][34]

The duration and frequency content of ground motions characterize the temporal and spectral energy distribution, influencing structural fatigue and cumulative damage. Significant duration is commonly defined as the time interval between 5% and 95% of the cumulative Arias intensity (D5-95), capturing the period of strong shaking, typically ranging from 5-30 seconds for moderate to large events, with longer durations at greater distances or in sedimentary basins. Frequency content varies with source, path, and site effects, often peaking at 1-10 Hz for crustal earthquakes, but extending to lower frequencies (0.1-1 Hz) in near-fault zones. Arias intensity (Ia), a measure of total energy, is given by
$$ I_a = \frac{\pi}{2g} \int_0^\infty a^2(t) \, dt $$
where $a(t)$ is the acceleration time series and $g$ is gravitational acceleration; values can exceed 1 m/s for destructive shaking, as seen in the 1995 Kobe earthquake where Ia reached 2.5 m/s near the fault.[36][37][38]

Near-fault ground motions exhibit distinct characteristics due to source directivity and fling step effects. Forward directivity arises when rupture propagates toward the site at shear-wave velocity, producing a high-amplitude, long-period velocity pulse (periods 0.4-20 s) in the fault-normal direction, shortening duration but amplifying spectral ordinates at the pulse period. Fling step, conversely, causes a permanent, one-sided displacement (up to 1-2 m for M7+ events) in the fault-parallel direction due to fault slip, with shorter periods (1-5 s) and less pronounced velocity pulses. These effects, observed in events like the 1999 Chi-Chi earthquake (fling step up to 8 m PGD), can increase demands on structures by 2-3 times compared to far-field motions.[36]

Attenuation and site effects modify ground motions with distance and local geology, quantified through ground motion prediction equations (GMPEs). These empirical models predict median motions and aleatory variability as functions of magnitude, distance, fault type, and site shear-wave velocity (VS30). The seminal Boore, Joyner, and Fumal (1997) GMPE provided equations for horizontal PGA, PGV, and 5%-damped spectral acceleration (Sa(T)) for western U.S. crustal earthquakes (M 5-7.5, distances 10-150 km), emphasizing rock-site conditions.
Updated in the NGA-West2 project (e.g., Boore et al., 2014), these incorporate basin depth, hanging-wall effects, and extended ranges (M 3-8, distances 0-300 km), with VS30 scaling amplifying motions by up to 50% on soft soils (VS30 < 360 m/s) versus rock.[39][40][35] Spectral acceleration Sa(T) serves as a primary intensity measure, defined as the maximum acceleration response of a single-degree-of-freedom oscillator (5% damping) at period T, approximating the force a structure experiences relative to its weight. It scales with earthquake size and inversely with distance, with values often 0.2-1.0 g for design levels at T=0.2-1.0 s; for instance, Sa(0.3 s) exceeded 2 g in the 2011 Tohoku aftershocks. Arias intensity complements Sa(T) by integrating energy over all frequencies, aiding in assessing cumulative effects.[41][37]
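The basic intensity measures above can be computed directly from an acceleration time series. The sketch below uses a synthetic decaying-sine record rather than a real accelerogram, and approximates the Arias integral with a simple rectangle rule.

```python
import math

# Synthetic accelerogram (decaying sine), m/s^2, 10 s at dt = 0.01 s —
# an illustration stand-in for a recorded ground motion.
dt = 0.01
acc = [3.0 * math.exp(-0.5 * i * dt) * math.sin(2.0 * math.pi * 1.5 * i * dt)
       for i in range(1000)]

pga = max(abs(a) for a in acc)                  # peak ground acceleration

# velocity by trapezoidal integration of acceleration
vel, v = [0.0], 0.0
for a0, a1 in zip(acc, acc[1:]):
    v += 0.5 * (a0 + a1) * dt
    vel.append(v)
pgv = max(abs(x) for x in vel)                  # peak ground velocity

g = 9.81
arias = math.pi / (2.0 * g) * sum(a * a for a in acc) * dt   # rectangle rule

print(f"PGA = {pga:.2f} m/s^2, PGV = {pgv:.3f} m/s, Ia = {arias:.3f} m/s")
```

On real records the same integrations require baseline correction and filtering first; otherwise the velocity and displacement traces drift, which is exactly the PGD sensitivity noted above.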

Selection and Scaling of Ground Motion Records

Ground motion records used in seismic analysis are typically sourced from empirical databases of recorded earthquakes or generated through physics-based simulations. The PEER NGA-West2 database serves as a primary empirical source, containing 21,336 three-component records from shallow crustal earthquakes in active tectonic regimes worldwide, covering magnitude ranges from 3.0 to 7.9 and including events up to 2013.[42][43][44] Synthetic records, derived from physics-based numerical simulations of earthquake rupture and wave propagation, are increasingly employed to fill gaps in empirical data, particularly for rare events or specific site conditions, enabling broadband ground motions up to 8 Hz. Recent advancements include the NGA-West3 project, with an expanded database to be released in 2025, and the PEER-LBNL simulated ground motion database released in 2024, providing broadband simulations up to 10 Hz for global applications.[45][46][47][48] Selection of ground motion records begins with criteria that ensure representativeness of the seismic hazard at the site, including earthquake magnitude, source-to-site distance, site soil conditions characterized by shear-wave velocity in the upper 30 meters (VS30), and fault mechanism. Records are binned into suites based on these parameters to capture variability, with standards like ASCE 7-22 recommending consideration of magnitude, distance, and VS30 to match the conditional distributions from probabilistic seismic hazard analysis (PSHA). For nonlinear response history analysis, suites typically comprise 7 to 11 record pairs (horizontal components) per direction to achieve reliable median response estimates with acceptable dispersion.[49][50] Once selected, records are scaled or modified to align with a target response spectrum that represents the design seismic hazard. 
Amplitude scaling involves multiplying the entire time series by a constant factor to match the target spectrum at a specific period or over a range, such as the fundamental period of the structure, ensuring the scaled records do not exceed 1.5 to 3 times the median intensity to preserve realistic dynamic characteristics. Spectral matching adjusts both amplitude and phase—often using wavelet transforms or time-domain filters—to achieve a closer fit to the target spectrum over a broader period range (e.g., 0.2T to 1.5T, where T is the structure's period), reducing bias in estimated demands while maintaining the record's duration and nonstationarity. Limits on frequency content alteration prevent unrealistic modifications, with amplitude scaling preferred for simplicity and spectral matching for higher fidelity in critical applications.[51][52] Probabilistic targets for scaling guide the process to reflect site-specific hazards accurately. The uniform hazard spectrum (UHS) provides a conservative target where spectral ordinates have the same exceedance probability, derived from PSHA deaggregation to select records from compatible magnitude-distance bins. For more refined representation, the conditional mean spectrum (CMS) conditions the target on a spectral acceleration at the structure's period while computing mean values at other periods, accounting for spectral shape correlations and reducing overestimation of demands compared to the UHS. This approach, particularly useful for performance-based design, ensures selected and scaled records reflect the expected distribution of ground motions given a conditioning event.[53][54]
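Amplitude scaling at the fundamental period reduces to a single division per record. In the sketch below, the target ordinate and the unscaled Sa(T1) values (and the record names) are hypothetical, and a factor-of-3 cap is used as an example of the scale-factor limits mentioned above.

```python
# Amplitude scaling of candidate records to a target spectrum — a sketch.
target_sa_t1 = 0.80   # g, target spectrum ordinate at the fundamental period T1

records = {           # record name -> unscaled Sa(T1) in g (hypothetical values)
    "RSN-001": 0.20,
    "RSN-002": 0.55,
    "RSN-003": 1.10,
}

for name, sa in records.items():
    sf = target_sa_t1 / sa        # constant factor applied to the whole time series
    flag = "" if sf <= 3.0 else "  <- exceeds the assumed scale-factor limit"
    print(f"{name}: scale factor = {sf:.2f}{flag}")
```

Records needing factors outside the acceptable range are usually replaced rather than scaled, since heavy scaling distorts the relationship between amplitude, duration, and frequency content.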

Static Analysis Methods

Equivalent Static Analysis

Equivalent static analysis, also known as the equivalent lateral force procedure, is a simplified method for estimating seismic demands on structures by converting the dynamic effects of earthquake ground motions into a set of static lateral forces applied to the building. This approach originated over a century ago in early seismic regulations, where structures were designed for lateral forces equivalent to approximately 10% of the building weight, reflecting a rudimentary understanding of inertial forces during earthquakes.[55] It has been a cornerstone of seismic design codes due to its historical use in providing conservative estimates for basic structural configurations.[55] The core of the procedure involves calculating the total base shear $ V $, which represents the seismic force at the foundation level, using the formula $ V = C_s W $, where $ C_s $ is the seismic response coefficient derived from design response spectra in building codes, and $ W $ is the effective seismic weight of the structure, typically comprising the dead load plus a portion of the live load.[56] This base shear is then distributed vertically along the height of the building to determine the lateral forces at each level. The method assumes linear elastic behavior and is particularly suited for preliminary design or when computational resources are limited. Vertical distribution of the base shear follows an inverted triangular load pattern, where forces are higher at the upper levels to approximate the first-mode response of the structure. 
The force at level $ x $, denoted $ F_x $, is given by $ F_x = \frac{w_x h_x^k}{\sum w_i h_i^k} V $, with $ w_x $ and $ h_x $ as the effective seismic weight and height at level $ x $, respectively, and the exponent $ k $ varying by fundamental period $ T $: $ k = 1 $ for $ T \leq 0.5 $ seconds (linear distribution), linearly interpolating to $ k = 2 $ for $ T \geq 2.5 $ seconds (parabolic distribution).[56] This distribution factor accounts for the increasing moment arm with height, emphasizing forces in taller portions of the structure. In modern codes such as ASCE/SEI 7-22, the seismic response coefficient $ C_s $ is primarily calculated as $ C_s = \frac{S_{DS}}{(R / I_e)} $, where $ S_{DS} $ is the design spectral acceleration for short periods, $ R $ is the response modification factor reflecting the structure's ductility and overstrength, and $ I_e $ is the importance factor.[56] Period-dependent reductions apply, capping $ C_s $ at $ S_{D1} / (R / I_e) $ for longer periods and ensuring a minimum value based on site-specific parameters. The method is limited to regular structures without significant irregularities, with the fundamental period $ T $ not exceeding $ 3.5 T_S $, where $ T_S $ is the long-period transition period from the site response spectrum.[56] The equivalent static analysis relies on key assumptions, including an inverted triangular force pattern that simulates the fundamental mode shape for shear buildings and applicability to low-rise, rigid structures where higher-mode effects are minimal. Its primary advantages include computational simplicity, requiring only static equilibrium checks without time-history integrations, making it ideal for hand calculations or early-stage assessments in design practice. Historically, this method formed the basis of seismic provisions in early 20th-century codes, evolving to incorporate spectral-based coefficients while retaining its role for simpler applications.[55]
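The base shear and its vertical distribution can be computed in a few lines. The sketch below follows the formulas above for a hypothetical three-story building; the spectral parameter, R factor, story weights, heights, and period are all assumed illustration values.

```python
# Equivalent lateral force sketch: base shear and vertical distribution.
# All numerical inputs are assumed values for a hypothetical 3-story frame.
s_ds, r_factor, i_e = 1.0, 8.0, 1.0
c_s = s_ds / (r_factor / i_e)            # seismic response coefficient

weights = [1200.0, 1200.0, 1000.0]       # kN per level, bottom to top
heights = [4.0, 8.0, 12.0]               # m above the base
v = c_s * sum(weights)                   # base shear V = C_s * W, kN

t1 = 0.45                                # fundamental period, s (assumed)
# k = 1 for T <= 0.5 s, interpolating linearly to k = 2 at T = 2.5 s
k_exp = 1.0 if t1 <= 0.5 else min(2.0, 1.0 + (t1 - 0.5) / 2.0)

denom = sum(w * h**k_exp for w, h in zip(weights, heights))
forces = [v * w * h**k_exp / denom for w, h in zip(weights, heights)]
print(v, [round(f, 1) for f in forces])
```

Note how the story forces grow with height even for equal weights, reproducing the inverted-triangular pattern of the first mode; the forces always sum back to the base shear.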

Nonlinear Static Analysis

Nonlinear static analysis, commonly referred to as pushover analysis, evaluates the seismic performance of structures by applying incrementally increasing lateral loads to a nonlinear model, simulating the effects of yielding and ductility under monotonic loading. This method generates a capacity curve representing the relationship between base shear and roof displacement, providing insight into the structure's nonlinear response and potential failure mechanisms. Unlike linear static approaches, it accounts for material and geometric nonlinearities, making it suitable for performance-based seismic design where inelastic behavior is expected.[57][58] The procedure begins with modeling the structure as a multi-degree-of-freedom (MDOF) system, incorporating gravity loads and applying lateral forces in a predefined pattern, such as an inverted triangular or uniform distribution, until a target displacement or collapse is reached. The resulting pushover curve is often idealized into bilinear or multilinear segments to facilitate analysis, with the effective fundamental period $ T_e $ and participation factor derived from the structure's first-mode shape. To estimate the target displacement $ \delta_t $, the capacity curve is converted to an equivalent single-degree-of-freedom (SDOF) system, using the formula
$ \delta_t = C_0 C_1 C_2 C_3 \, S_a \, \frac{T_e^2}{4\pi^2} \, g, $

where $ S_a $ is the spectral acceleration, $ g $ is gravitational acceleration, $ C_0 $ accounts for the MDOF-to-SDOF transformation (typically 1.0 to 1.2), $ C_1 $ modifies for inelastic displacement amplification (often 1.0 to 1.5 based on ductility and period), $ C_2 $ adjusts for hysteretic degradation (around 1.0 for non-degrading systems), and $ C_3 $ incorporates P-Δ effects (close to 1.0 for stable post-yield behavior). This approach, refined in FEMA 440 from earlier ATC-40 guidelines, enables estimation of maximum roof displacement without full dynamic simulation.[57][58]
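The target-displacement formula above reduces to a one-line calculation once the coefficients are chosen. The sketch below assumes $S_a$ is expressed as a fraction of $g$ (so $g$ multiplies in the numerator, as in the FEMA 356 coefficient method); the function name is illustrative.

```python
import math

def target_displacement(c0, c1, c2, c3, s_a, t_e, g=9.81):
    """FEMA 356-style displacement coefficient method (sketch).

    c0..c3: modification coefficients (MDOF-to-SDOF, inelastic
            amplification, hysteretic degradation, P-Delta)
    s_a: spectral acceleration at the effective period T_e, in g
    t_e: effective fundamental period in seconds
    Returns the target roof displacement in metres.
    """
    return c0 * c1 * c2 * c3 * s_a * g * t_e**2 / (4 * math.pi**2)
```

With, say, $C_0 = 1.2$, $C_1 = 1.1$, $C_2 = C_3 = 1.0$, $S_a = 0.8\,g$, and $T_e = 1.0$ s, the target displacement comes out to roughly 0.26 m.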
Nonlinear behavior is modeled using lumped plasticity elements, concentrating inelastic deformations at plastic hinges located at beam-column joints or member ends, with lengths typically spanning 0.5 to 1.0 times the member depth. These hinges are defined by moment-rotation or force-deformation relationships, such as bilinear (elastic-perfectly plastic) or trilinear (with post-yield hardening or degradation) curves, calibrated from experimental data to capture energy dissipation and stiffness reduction. Acceptance criteria for hinge rotations are tied to performance levels, like life safety or collapse prevention, ensuring the model reflects realistic ductile capacity.[57][58] For demand estimation, the pushover capacity curve is transformed into the Acceleration-Displacement Response Spectrum (ADRS) format, plotting spectral acceleration against spectral displacement with lines of constant period for guidance. The seismic demand spectrum, reduced for effective damping based on ductility, is overlaid to form a capacity-demand diagram; the performance point is identified at their intersection, representing the anticipated inelastic displacement and acceleration under design ground motions. This graphical or iterative process highlights the structure's reserve capacity and guides retrofit decisions.[57][58] Despite its practicality, nonlinear static analysis has limitations, as it assumes the response is dominated by the first mode and applies invariant loading patterns, thereby neglecting higher-mode contributions and the time-varying nature of earthquake excitations. These simplifications can lead to inaccuracies in estimating interstory drifts or responses in irregular or torsionally sensitive structures, where dynamic effects may amplify demands beyond static predictions.[57][58]
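The conversion of the pushover curve to ADRS coordinates, mentioned above, follows the ATC-40 relations $S_a = (V/W)/\alpha_1$ and $S_d = \Delta_{roof}/(PF_1 \phi_{roof,1})$. A minimal sketch, with invented function and parameter names:

```python
def capacity_to_adrs(base_shears, roof_disps, weight, alpha1, pf1_phi_roof):
    """Convert a pushover curve to ADRS coordinates (ATC-40-style sketch).

    base_shears, roof_disps: points on the capacity curve
    weight: total seismic weight W (same force units as base_shears)
    alpha1: modal mass coefficient of the first mode
    pf1_phi_roof: first-mode participation factor times the roof-level
                  mode-shape ordinate
    Returns (spectral accelerations in g, spectral displacements).
    """
    s_a = [v / (weight * alpha1) for v in base_shears]
    s_d = [d / pf1_phi_roof for d in roof_disps]
    return s_a, s_d
```

Overlaying the reduced demand spectrum on the converted capacity curve then locates the performance point at their intersection.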

Dynamic Analysis Methods

Response Spectrum Analysis

Response spectrum analysis is a dynamic method employed in seismic engineering to estimate the maximum expected responses of linear elastic multi-degree-of-freedom (MDOF) structures under earthquake loading by utilizing the response spectrum derived from ground motion characteristics. This approach leverages modal superposition in the frequency domain, avoiding the need for time-domain simulations, and provides peak values of displacements, forces, and accelerations for design purposes. Unlike simpler static methods, it accounts for the dynamic interaction of multiple vibration modes, offering a more accurate representation of structural behavior for taller or irregular buildings. The procedure begins with modal analysis of the structure to determine its natural frequencies $ \omega_r $ and corresponding mode shapes $ \phi_r $ for the first several modes, typically using the eigenvalue problem solved from the undamped equations of motion. For each mode $ r $, the pseudo-acceleration response from the design spectrum $ S_a(\omega_r) $ is obtained, serving as input analogous to ground motion spectra. The modal base shear $ V_r $ is then calculated as $ V_r = \left( \phi_r^T M \mathbf{1} \right)^2 S_a(\omega_r) $, where $ M $ is the mass matrix and $ \mathbf{1} $ is the influence vector (a vector of ones for uniform horizontal ground excitation); the factor $ \left( \phi_r^T M \mathbf{1} \right)^2 $ is the effective modal mass when the modes are mass-orthonormal ($ \phi_r^T M \phi_r = 1 $). Similar expressions yield modal responses for displacements and other quantities.
These modal contributions are combined to obtain the total peak response using methods such as the square root of the sum of squares (SRSS), which assumes uncorrelated modes for well-separated frequencies, or more advanced techniques for closely spaced modes.[59] A widely adopted combination rule for correlated modes is the complete quadratic combination (CQC) method, which computes the variance of the total response as $ \sigma^2 = \sum_i \sum_j \rho_{ij} \sigma_i \sigma_j $, where $ \sigma_i $ and $ \sigma_j $ are the standard deviations of the modal responses, and $ \rho_{ij} $ is the modal correlation coefficient accounting for the closeness of frequencies and damping. The correlation coefficient is typically given by $ \rho_{ij} = \frac{8 \zeta^2 (1 + r) r^{3/2}}{(1 - r^2)^2 + 4 \zeta^2 r (1 + r)^2} $, with $ r = \omega_i / \omega_j $ and $ \zeta $ as the damping ratio, ensuring realistic estimates for structures with closely spaced modes. The peak response is then derived by applying a peak factor to $ \sigma $. This method improves upon SRSS by reducing underestimation errors in modal summation.[60] The method is applicable to linear elastic MDOF systems under the assumption of proportional damping and small deformations, requiring inclusion of sufficient modes to achieve at least 90% mass participation in each principal direction to ensure accurate capture of the structural response. If mass participation falls below this threshold, additional modes or alternative analyses may be necessary. In code implementation, such as in ASCE 7, the computed base shear from response spectrum analysis is scaled if it is less than 85% of the equivalent static base shear, and overstrength factors $ \Omega_0 $ (ranging from 2 to 3 depending on the seismic force-resisting system) are applied to amplify forces for designing specific elements like columns to prevent undesirable failure mechanisms.
Scaling factors incorporate site-specific spectral accelerations and importance factors to align with performance objectives.
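The CQC rule above, with the Der Kiureghian correlation coefficient for equal modal damping, can be sketched directly; in practice the double sum is applied to peak modal responses. The function name is illustrative.

```python
import math

def cqc_combine(peak_modal, omegas, zeta):
    """Complete quadratic combination of peak modal responses (sketch).

    peak_modal: peak response of each mode (same response quantity)
    omegas: natural circular frequencies of the modes
    zeta: damping ratio, assumed equal in all modes
    """
    n = len(peak_modal)
    total = 0.0
    for i in range(n):
        for j in range(n):
            r = omegas[i] / omegas[j]
            # Correlation coefficient rho_ij; equals 1 when r = 1
            rho = (8 * zeta**2 * (1 + r) * r**1.5) / (
                (1 - r**2)**2 + 4 * zeta**2 * r * (1 + r)**2)
            total += rho * peak_modal[i] * peak_modal[j]
    return math.sqrt(total)
```

For well-separated frequencies the off-diagonal $\rho_{ij}$ terms vanish and the result approaches SRSS; for identical frequencies $\rho_{ij} = 1$ and the peaks add absolutely.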

Linear Time-History Analysis

Linear time-history analysis involves the direct numerical integration of the equations of motion for a linearly elastic structure subjected to earthquake ground accelerations, providing detailed time-varying responses such as displacements, velocities, and accelerations at all degrees of freedom.[61] This method solves the governing equation $ \mathbf{M} \ddot{\mathbf{u}}(t) + \mathbf{C} \dot{\mathbf{u}}(t) + \mathbf{K} \mathbf{u}(t) = -\mathbf{M} \mathbf{1} \ddot{u}_g(t) $ in a step-by-step manner over the duration of the ground motion, where $ \mathbf{M} $, $ \mathbf{C} $, and $ \mathbf{K} $ are the mass, damping, and stiffness matrices, $ \mathbf{u}(t) $, $ \dot{\mathbf{u}}(t) $, and $ \ddot{\mathbf{u}}(t) $ are the displacement, velocity, and acceleration vectors, $ \mathbf{1} $ is the influence vector, and $ \ddot{u}_g(t) $ is the ground acceleration time history.[62] A common numerical integration scheme is the Newmark-β method with parameters β = 0.25 and γ = 0.5, corresponding to the average acceleration assumption, which ensures unconditional stability and second-order accuracy for linear systems.[62] The analysis yields complete time histories of key response quantities, including nodal displacements, inter-story drifts, member forces, and support reactions, allowing engineers to evaluate peak values and temporal variations critical for detailing and serviceability checks.[61] To account for the variability in earthquake ground motions, multiple record sets—typically at least three and preferably seven or more pairs—are analyzed, with results statistically combined; the average of the maximum responses across the set is used when seven or more pairs are employed, while the maximum value governs for fewer than seven.[63] Damping is typically specified as 5% of critical for the first two modes, influencing the viscous damping matrix $ \mathbf{C} $.[61] Compared to response spectrum analysis, linear time-history analysis captures higher-mode interactions and phase relationships more accurately, as it provides full time-series outputs rather than just modal peak values, and includes linearized P-delta effects through iterative stiffness updates within each time step.[64] Ground motion records are selected and scaled to match the design response spectrum, ensuring representation of site-specific seismicity.[61]
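The average-acceleration Newmark scheme described above can be sketched for a single degree of freedom using the standard total-form recursion; the function name and the SDOF restriction are illustrative (production analyses operate on the full matrices), and the derivation follows the usual effective-stiffness formulation.

```python
def newmark_linear_sdof(m, c, k, ag, dt, beta=0.25, gamma=0.5):
    """Newmark-beta integration of a linear SDOF oscillator under
    ground acceleration ag (a list sampled at interval dt). Sketch only.

    Defaults beta=0.25, gamma=0.5 give the unconditionally stable
    average-acceleration method. Returns the relative displacement history.
    """
    u, v = 0.0, 0.0
    a = (-m * ag[0] - c * v - k * u) / m  # initial acceleration
    us = [u]
    # Effective stiffness is constant for a linear system
    k_eff = k + gamma / (beta * dt) * c + m / (beta * dt**2)
    for agi in ag[1:]:
        # Effective load from the new excitation plus state-dependent terms
        p_eff = (-m * agi
                 + m * (u / (beta * dt**2) + v / (beta * dt)
                        + (1 / (2 * beta) - 1) * a)
                 + c * (gamma / (beta * dt) * u + (gamma / beta - 1) * v
                        + dt * (gamma / (2 * beta) - 1) * a))
        u_new = p_eff / k_eff
        v_new = (gamma / (beta * dt)) * (u_new - u) \
                + (1 - gamma / beta) * v \
                + dt * (1 - gamma / (2 * beta)) * a
        a_new = (u_new - u) / (beta * dt**2) - v / (beta * dt) \
                - (1 / (2 * beta) - 1) * a
        u, v, a = u_new, v_new, a_new
        us.append(u)
    return us
```

As a sanity check, under a sustained constant ground acceleration a damped oscillator settles at the static offset $u = -m\,\ddot{u}_g/k$.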

Nonlinear Time-History Analysis

Nonlinear time-history analysis (NLTHA) simulates the dynamic response of structures to earthquake ground motions by incorporating material and geometric nonlinearities, allowing for the prediction of inelastic behavior such as yielding, stiffness degradation, and energy dissipation under cyclic loading. Unlike linear time-history analysis, which assumes elastic response, NLTHA captures cycle-dependent effects critical for performance-based seismic design, enabling assessment of damage progression and collapse potential. This method involves solving the equations of motion using selected ground motion records scaled to site-specific hazard levels, typically requiring multiple analyses to account for record-to-record variability.[2] In modeling structural components for NLTHA, fiber beam-column elements are widely used to represent distributed plasticity along member lengths and cross-sections, discretizing sections into uniaxial fibers with nonlinear stress-strain relationships for concrete, steel, and reinforcement to capture axial-flexural interactions and progressive yielding. These elements, often force-based formulations, ensure equilibrium and compatibility while integrating material nonlinearity at integration points, providing accurate hysteretic behavior for reinforced concrete and steel frames under seismic loads. For time integration, implicit schemes like the Hilber-Hughes-Taylor (HHT) alpha method are employed, which introduce controlled numerical dissipation to damp high-frequency modes without sacrificing second-order accuracy, using parameters such as alpha (typically -0.05 to -0.3) to balance stability and precision in nonlinear dynamic simulations. Explicit methods may be used for highly discontinuous responses, but implicit approaches dominate due to unconditional stability. 
Convergence in nonlinear solvers, such as Newton-Raphson iterations, relies on criteria like force residual tolerances (e.g., 0.001 times initial stiffness) and displacement increments (e.g., 0.01 times element length), ensuring reliable solutions amid path-dependent nonlinearity.[2][65][66][67] Hysteretic rules in NLTHA models incorporate smooth degradation and pinching effects to replicate observed experimental behavior, with the Bouc-Wen model serving as a phenomenological differential equation framework for capturing bilinear or trilinear loops with stiffness and strength deterioration under repeated cycles. The model defines restoring force as a sum of elastic, plastic, and hysteretic components, parameterized by factors controlling smoothness (n ≈ 1-2), degradation (ν < 1), and pinching (γ ≈ 0.5-1), enabling simulation of pinching from crack closure and isotropic/kinematic hardening in reinforced concrete elements. These rules are calibrated against quasi-static tests to ensure realistic energy dissipation without excessive computational overhead.[68] Output from NLTHA is interpreted through metrics like incremental dynamic analysis (IDA) curves, which plot engineering demand parameters—such as maximum interstory drift ratio—against ground motion intensity measures (e.g., spectral acceleration at the fundamental period), revealing the structure's reserve capacity and limit states from elastic to collapse. For instance, IDA traces for a mid-rise frame might show drifts escalating from 1% at low intensities to 10% near collapse, with the 16th-84th percentile bounds quantifying variability. 
Collapse fragility functions, derived from IDA results via lognormal fitting, estimate the probability of collapse as a function of intensity (e.g., median collapse capacity of 2g spectral acceleration with logarithmic standard deviation σ ≈ 0.4-0.6), incorporating record-to-record and modeling uncertainties to inform risk assessment in performance-based frameworks.[69][70] NLTHA imposes high computational demands, particularly for three-dimensional models of complex structures, where simulations can require hours to days on multi-core systems due to iterative solving of large stiffness matrices and numerous time steps (e.g., Δt = 0.005-0.01s for accuracy). Validation against experimental data, such as shake-table tests or cyclic loading protocols from databases like PEER's Structural Performance Database, is essential to confirm model fidelity, with discrepancies in hysteretic loops limited to 10-20% for acceptance. Despite the intensity, NLTHA's precision justifies its use for critical facilities, often augmented by reduced-order modeling to manage demands.[2][71]
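The lognormal fitting of collapse fragilities from IDA results, mentioned above, amounts to estimating the median and logarithmic standard deviation of the per-record collapse intensities and evaluating a lognormal CDF. A minimal sketch with invented function names:

```python
import math

def fit_lognormal_fragility(collapse_ims):
    """Fit a lognormal collapse fragility to IDA collapse intensities.

    collapse_ims: spectral acceleration at collapse for each record (g)
    Returns (median theta, logarithmic standard deviation beta).
    """
    logs = [math.log(x) for x in collapse_ims]
    mu = sum(logs) / len(logs)
    beta = math.sqrt(sum((l - mu)**2 for l in logs) / (len(logs) - 1))
    return math.exp(mu), beta

def collapse_probability(im, theta, beta):
    """P(collapse | IM = im) from the fitted lognormal CDF."""
    z = (math.log(im) - math.log(theta)) / beta
    return 0.5 * (1 + math.erf(z / math.sqrt(2)))
```

By construction, the probability of collapse evaluated at the fitted median intensity is 0.5; additional modeling uncertainty is often folded into $\beta$ before use in risk integrals.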

Applications and Considerations

Soil-Structure Interaction

Soil-structure interaction (SSI) refers to the dynamic coupling between a structure, its foundation, and the supporting soil during seismic events, which modifies the input ground motions and alters structural demands compared to rigid base assumptions. This interaction arises because flexible foundations filter and amplify seismic waves, leading to changes in the effective period, damping, and base shear of the structure. Accounting for SSI is essential for accurate seismic analysis, particularly for structures on soft or deep soils, where ignoring it can overestimate stiffness and underestimate displacements.[72] Kinematic interaction effects occur as seismic waves propagate through the soil and interact with the foundation geometry, modifying the free-field motion at the base. Base slab averaging reduces the input motion for embedded foundations by smoothing spatial variations in the incident waves, while embedment effects further attenuate higher-frequency components due to wave scattering and partial shielding. These modifications typically result in a filtered input spectrum with reduced accelerations for stiff structures on soft soils, though the impact is modest for shallow foundations, often less than 20% reduction in peak ground acceleration. Empirical studies from instrumented buildings confirm that kinematic effects are more pronounced for embedded foundations in soft soils, lowering the effective input by up to 30% in some cases.[73][72] Inertial interaction stems from the foundation's response to the inertial forces transmitted from the vibrating superstructure, causing foundation translation and rocking governed by the soil's flexibility. This leads to an increase in the system's natural period—often by 20-50% for typical buildings on soft soils—and enhanced damping through radiation of waves into the soil, reducing peak responses.
The soil's dynamic stiffness and damping are represented by frequency-dependent impedance functions, formulated as spring-dashpot matrices; for example, Wolf's impedance functions provide analytical expressions for rigid foundations on homogeneous half-spaces, accounting for vertical, horizontal, rocking, and torsional modes. These functions reveal that rocking stiffness decreases with frequency, promoting larger rotations on compliant soils and thereby mitigating structural demands.[74][72] Common methods for incorporating SSI in seismic analysis include substructuring approaches, which partition the soil-foundation system into near-field and far-field components for efficient computation. In the frequency domain, transfer functions couple the foundation impedances to the structural response, enabling convolution with input motions to obtain time histories. For nonlinear soil behavior, time-domain cone models approximate the radiation damping and stiffness using equivalent conical frusta to represent wave propagation in layered media, allowing integration with nonlinear structural analyses without frequency transformations. These models capture hysteretic damping in soft soils during strong shaking, where soil yielding further lengthens periods and dissipates energy.[75][76][72] SSI effects are particularly significant on soft soils with shear wave velocities (Vs) below 200 m/s, where amplification of long-period motions can increase structural displacements by factors of 1.5 to 2 compared to rigid base assumptions, exacerbating demands on tall or flexible structures. Building codes address this through site-specific adjustments; for instance, ASCE 7 classifies sites with Vs < 180 m/s or susceptible to liquefaction as Site Class F, requiring site-response analyses that incorporate SSI to derive modified design spectra and foundation load factors. 
These provisions ensure that kinematic and inertial effects are evaluated, often resulting in reduced base shears but increased foundation rotations for such sites.[72][77]
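The period lengthening from inertial interaction described above is often estimated with the classic closed-form expression attributed to Veletsos and Meek, $\tilde{T}/T = \sqrt{1 + k/K_x + k h^2/K_\theta}$, which combines the structure's lateral stiffness with the foundation's translational and rocking stiffnesses. A sketch, with illustrative names:

```python
import math

def flexible_base_period(t_fixed, k_struct, h_eff, k_x, k_theta):
    """Flexible-base period from inertial SSI (Veletsos-Meek form, sketch).

    t_fixed: fixed-base fundamental period (s)
    k_struct: fixed-base lateral stiffness of the structure
    h_eff: effective height of the first-mode mass above the foundation
    k_x, k_theta: horizontal and rocking stiffness of the soil-foundation
                  system (e.g., from impedance functions)
    """
    ratio = math.sqrt(1 + k_struct / k_x + k_struct * h_eff**2 / k_theta)
    return t_fixed * ratio
```

In the limit of very stiff soil ($K_x, K_\theta \to \infty$) the ratio approaches 1 and the fixed-base period is recovered; on compliant soils the rocking term usually dominates.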

Performance-Based Seismic Design

Performance-based seismic design (PBSD) represents a paradigm shift from traditional prescriptive building codes, which focus on uniform strength requirements, to a methodology that explicitly defines and verifies multiple performance objectives for structures under various earthquake intensities. This approach allows engineers to tailor designs to specific owner needs, such as minimizing downtime or repair costs, while ensuring life safety. Originating in the mid-1990s, PBSD evolved from efforts to address limitations in code-based designs exposed by events like the 1994 Northridge earthquake, with foundational guidelines outlined in the Structural Engineers Association of California's (SEAOC) Vision 2000 report, which proposed a framework linking seismic hazards to discrete building performance levels.[78] Central to PBSD are defined performance levels, including Immediate Occupancy (IO), where the structure remains safe for immediate use with minimal damage; Life Safety (LS), ensuring protection against collapse and serious injury; and Collapse Prevention (CP), limiting damage to avoid total failure but allowing significant repairs. These levels are tied to seismic hazard intensities, such as the Maximum Considered Earthquake (MCER) with a 2% probability of exceedance in 50 years, representing a rare, high-intensity event. The design procedure begins by establishing target structural responses, such as interstory drift ratios or floor accelerations, aligned with the selected performance objectives for specific hazard levels; verification typically involves nonlinear analysis methods to simulate demands and assess compliance, with tools like FEMA P-58 enabling probabilistic loss estimation for casualties, repair costs, and downtime.[79][80] Probabilistic elements enhance PBSD's reliability, incorporating uncertainty in ground motions and structural response. 
The Collapse Margin Ratio (CMR), defined as the ratio of the median spectral acceleration causing collapse to the MCER intensity, quantifies a structure's margin against collapse, with acceptability thresholds established in FEMA P-695 guidelines to achieve uniform risk. Additionally, updates from the U.S. Geological Survey (USGS) in the 2010s introduced risk-targeted ground motions, adjusting spectral accelerations to ensure a consistent 1% probability of collapse across the U.S. in 50 years, rather than uniform hazard probabilities. The Applied Technology Council (ATC-58) project, culminating in FEMA P-58 methodologies during the 2010s, refined these procedures for practical implementation, emphasizing multi-hazard performance assessment.[81][82]

Software Tools and Standards

Several commercial and open-source software tools are widely used for seismic analysis, enabling engineers to model and simulate structural responses under earthquake loading. ETABS, developed by Computers and Structures, Inc. (CSI), is a specialized tool for the integrated analysis and design of building structures, supporting linear and nonlinear static and dynamic analyses, including response spectrum and time-history methods.[83] SAP2000, also from CSI, offers versatile general-purpose finite element analysis for a broader range of structures, handling linear and nonlinear dynamic simulations with advanced features for seismic loading. PERFORM-3D, another CSI product, focuses on performance-based seismic design, providing capabilities for nonlinear pushover analysis and incremental dynamic analysis (IDA) to evaluate structural capacity against earthquake demands.[84] OpenSees, an open-source framework from the University of California, Berkeley, is particularly valued in research for its scripting flexibility in simulating complex structural and geotechnical systems under earthquakes, supporting custom nonlinear dynamic analyses through Tcl/Python interfaces.[85] These tools implement various analysis methods, such as equivalent static and nonlinear time-history approaches, but differ in user interface and cost: commercial options like ETABS and SAP2000 provide intuitive graphical environments and integrated design checks, while OpenSees emphasizes modularity for academic validation at no licensing expense.[86] Trade-offs include commercial software's ease of use for practical engineering versus open-source tools' adaptability for bespoke research models. Regulatory standards provide the framework for seismic analysis, ensuring consistent safety levels across regions. 
In the United States, ASCE 7-22, published by the American Society of Civil Engineers, outlines minimum design loads including seismic provisions based on site-specific hazard maps and response modification factors for various structural systems.[87] Eurocode 8, part of the European standards suite, governs the design of earthquake-resistant structures, emphasizing ductility classes and capacity design principles for buildings and bridges in seismic zones. New Zealand's NZS 1170.5 specifies earthquake actions using spectral accelerations derived from national hazard models, applicable to structural design in high-seismicity areas. Post-2020 updates to these standards incorporate refined ground motion predictions and enhanced provisions for tall buildings based on research up to 2022. ASCE 7-22 introduces revised risk-targeted seismic hazard maps and updated site coefficients to better account for long-period motions in urban high-rises.[88] Proposed revisions in the second-generation Eurocode 8, under development with expected publication by 2028, emphasize near-fault effects and soil-structure interaction for critical infrastructure.[89] In New Zealand, the draft Technical Specification TS 1170.5:2025, released for public comment in 2024 and planned for gradual implementation as of November 2025, integrates the 2022 National Seismic Hazard Model (NSHM2022), proposing increases in design accelerations in some regions by up to 50% and adding provisions for taller structures based on post-2016 Kaikōura earthquake data.[90] Validation of seismic analysis software relies on benchmarking against experimental data, particularly shake-table tests, to verify model accuracy in capturing dynamic responses. 
For instance, nonlinear simulations in PERFORM-3D and OpenSees have been calibrated against large-scale shake-table experiments on reinforced concrete frames, showing agreement within 10-15% for peak drifts and base shears when material nonlinearity is properly modeled.[91] Comprehensive benchmarking studies compare multiple packages, like ETABS and SeismoStruct, on the same test structures, highlighting OpenSees' strength in handling soil-structure interaction while commercial tools excel in automated code compliance checks.[92] Emerging trends in the 2020s leverage artificial intelligence (AI) and machine learning (ML) for surrogate modeling, accelerating seismic analysis by approximating expensive nonlinear simulations. ML-based surrogates, trained on finite element outputs from tools like OpenSees, reduce computation time for fragility curve generation by orders of magnitude, enabling rapid probabilistic assessments in performance-based design.[93] Recent developments include explainable AI models that predict structural responses under varying hazards, validated against historical earthquake data, to support real-time decision-making in high-rise evaluations.[94]

References
